Works by Yuichi Tei (Ung-il Chung), M.D., Ph.D.
With the rapid advancement of artificial intelligence (AI) and robotics, some people predict that a society in which robots and human beings coexist is approaching. I simply wonder, however, whether we could actually get along with robots in a world where we cannot accept diversity even among our fellow human beings. If robots joined this deeply divided world, would they not merely cause even greater chaos?

I recently began researching a morality engine to govern the behavior of robots. Simply put, I study how to make robots distinguish good from evil by themselves, in preparation for a future in which robots and human beings coexist.

The concept of morality for robots is nothing new. Back in the 1940s, for example, the American science fiction writer Isaac Asimov began introducing his famous Three Laws of Robotics in his novels. The Three Laws are very well known, and some people even treat them as golden rules for robots to observe. To me, however, these Laws have significant problems that make them unsuitable for practical purposes. As you read this book, you will come to see the fundamental defect in the Laws.

To study a morality engine with which to regulate robots, we first need to describe the moral framework of human beings. Modeling such an abstract concept becomes possible when we use an engineering way of thinking as a tool. In this book, I would like to think through this framework together with you, in words as simple and plain as possible.

If we can model human morality, we will be able to install it onto the brains of robots. And if we can build a moral system that robots and human beings, mutually different existences, can share, it will in turn help us overcome the divisions that arise from differences in standpoint among human beings, and develop a more inclusive and diverse society. Using such a new moral system, I would like to establish alternative principles to Asimov’s Three Laws of Robotics and to consider the possibility of a society where human beings and robots coexist.

Morality and robots may seem to have nothing in common, but by looking at the point where the two fields actually cross, we can glimpse the principles of the future society that we human beings should aim for.

Throughout this intensive seminar, we will develop our arguments freely and widely. I also plan to provide a summary and practice exercises at the end of each session to help deepen your understanding. Let us get ready to think outside the box and dig deep into our imagination.

Contents

Introduction
Session 1. Is the “You Shall Not Kill” Rule Universal?
Session 2. Classifying Prior Moral Thoughts
Session 3. You Shall Not Kill… Whom?
Session 4. Modeling the Basic Principle of Morality
Session 5. Classifying the Hierarchy of Morality
Session 6. Installing Morality onto Robots
Afterword
Hints for Practice Exercises
References
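To make the idea behind such a rule hierarchy concrete, here is a minimal sketch in Python (my own illustration, not taken from the book; names such as Action and permitted are hypothetical) that encodes the Three Laws as strictly ordered constraints:

```python
# Purely illustrative sketch: Asimov's Three Laws as an ordered rule
# hierarchy. All names here are hypothetical, not from the book.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False              # First Law: may not injure a human
    allows_harm_by_inaction: bool = False  # First Law: nor, through inaction,
                                           # allow a human to come to harm
    disobeys_order: bool = False           # Second Law: must obey human orders
    endangers_self: bool = False           # Third Law: must protect itself

def permitted(action: Action) -> bool:
    """Check the Laws in strict priority order; a lower Law never
    overrides a higher one."""
    if action.harms_human or action.allows_harm_by_inaction:
        return False  # violates the First Law
    if action.disobeys_order:
        return False  # violates the Second Law
    if action.endangers_self:
        return False  # violates the Third Law
    return True

# A classic tension: when every available action violates some Law,
# a fixed hierarchy leaves the robot with no permitted action at all.
options = [
    Action("push a bystander aside, bruising them", harms_human=True),
    Action("do nothing while a human is in danger", allows_harm_by_inaction=True),
]
print([permitted(a) for a in options])  # [False, False] -- deadlock
```

Whether this kind of deadlock is the fundamental defect the author has in mind is, of course, left for the book itself; the sketch only shows how quickly a fixed rule hierarchy runs into trouble once real situations force trade-offs.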