Original tweet: Unfriendly-AI risk continues to be probably the biggest thing that could seriously derail humanity’s ascent to the stars over the next 1-2 centuries. Highly recommend more eyes on this problem. https://t.co/G248XzRFaD
A list from @ESYudkowsky of reasons AGI appears likely to cause an existential catastrophe, and reasons why he thinks the current research community — MIRI included — isn’t succeeding at preventing this from happening. lesswrong.com/posts/uMQ3cqWD…
https://twitter.com/VitalikButerin/status/1534581283271102467