从此走进深度人生 Deepoo net, deep life.

作者: deepoo

  • 吴良宝:战国文字地名考证及相关问题

    战国文字中的地名资料,主要集中在当时的简牍官文书、官印、兵器、封泥、铜器等载体中,从内容上可以分为人文地理、自然地理两大类,包括行政区划名、山川湖泽等。我们本次讲座所关注的战国文字地名资料,专指其中战国时期的地理资料,楚简等资料中的战国以前地名不在讨论之列。

    随着新资料的不断公布以及战国文字研究的发展,战国文字地名的整理与研究已经取得了丰硕成果。本次讲座将对这些内容加以介绍,同时也总结其中的成功经验与有待提高之处。

    研究概况

    所谓“行政区划”,是“为国家行政机关实行分级管理而进行的区域划分”,也就是指在国土上建立一个由若干等级、不同幅员的行政区域所组成的体系。我国历史上曾经出现的政区单位包括“郡、县、道、州、府、行省”等,而我们所关注的战国时期文字中常见的政区单位主要是“郡、县”二级,县级以下的则有“乡、州、里、遂”等,从时间上来说,各种政区单位资料的时代集中在战国中晚期。

    历史地理学家谭其骧《秦郡新考》(1947年)是秦郡研究的集大成之作,得出了秦郡46郡郡目,《中国历史地图集》(1985年)在此基础上又增加了“鄣郡、庐江郡”,这是历史地理学界长期流行的主流意见。关于战国时期各诸侯国设郡的情况,杨宽《战国史》(1955年第一版,后经两次增订)“战国郡表”曾有过统计,共计六国之郡24个、秦郡36个;后晓荣《战国政区地理》(2013年)认为六国置郡31个(含齐国5都)、秦国统一之前置郡7个;何慕《出土文献所见秦郡名考》(2019年)统计秦封泥、秦简等出土文献的秦郡共29个、待定3个。

    李学勤《战国题铭概述(上)》(1959年)、李家浩《先秦文字中的“县”》(1987年)先后释出齐文字以及三晋、燕国文字中的若干字,郝本性《新郑“郑韩故城”发现一批战国铜兵器》(1972年)公布了一百多件韩国有铭兵器、黄盛璋《试论三晋兵器的国别和年代及其相关问题》(1974年)系统整理了三晋各国都城与地方各县监造的兵器,以上工作都推动了对战国县制的深入研究。

    战国文字资料中“县”的判断标准问题,以往学界较少注意。陈伟《包山楚简初探》(1996年)对包山简中楚县的判定进行了讨论,提出“司法职权”“名籍管理”“职官设置”等三项标准;颜世铉《包山楚简地名研究》(1997年)在此基础上提出了八项标准。吴良宝《战国文字所见三晋置县辑考》(2002年)、《谈战国时期齐国的置县问题》(2017年)对三晋文字、齐文字中的县进行梳理,使用了多项判断标准。李晓杰《中国行政区划通史·先秦卷》(2009年)也对战国文字中“县”的判定作了阐述,表达了不同的意见。

    对于各国置县的情况,目前学界意见多有不同。后晓荣《战国政区地理》(2013年)曾统计传世文献与出土文献中秦、六国置县的数量,共计567个县、27个可能是县的地名;李晓杰《中国行政区划通史·先秦卷》(2017年)统计出六国设置了183个县名、196个疑似县名,以及秦国置县共365个。受限于各种条件与因素,战国时期秦郡名目、各国的置县数量,相当长时间内都无法给出一个明确的统计数字。

    历史自然地理特别是古代的水道径流、海岸线等的研究难度极大,目前基本上无法复原或准确画出,并导致学界相关的争论。比如,《山海经》《尚书·禹贡》等文献记载的黄河下游水道孰早孰晚,学界尚存在争议。从事战国文字地理研究的学者,多数侧重于地名的文字释读以及国别年代判断、地望考证、诸侯国的疆域变迁等内容,对于战国文字中的自然地理研究相对薄弱。目前对战国文字自然地理资料的讨论,主要是水道名(比如新蔡葛陵楚简的“沮漳”、上博简《容城氏》的“蒌水”等)、山名(郭店简的“鬲山”、清华简《楚居》的“畏山、穴穷、乔山”)的考释,上博简《容城氏》“竞州、莒州”等“九州”的讨论也可归入此类研究。

    学术价值

    战国文字地名研究的学术价值是多方面的,下面试作说明。

    首先是提供了不见于传世文献记载的政区地理资料。

    里耶秦简、岳麓秦简、秦封泥中出现的“苍梧、洞庭、江胡、巫黔中”等,都是不见于史籍记载的秦郡名;三晋文字中的“邯坵、龙阳、氶、辛市、狐泉、蔑”、齐文字中的“奠昜、清陵、慎鱼、武寻”、燕文字中的“东阳、中阳”、秦文字中的“义阳、门浅、定楬、蓬”等,属于县、道之名,这些都是可以用于补充传世文献记载的第一手材料。

    其次是提供了更明确的地理沿革信息。

    例如,某些秦郡的始置时间一直存在争议,而通过出土秦文字资料的考察,可以为这些讨论提供相对明确的年代下限。据《汉志》班固自注,泰山郡、武陵郡都是西汉“高帝置”,但是岳麓秦简、里耶秦简中已经出现了“泰山守”“武陵泰守”,说明至晚在秦代这两个郡就已经设置,早于文献中的班固注说;谭其骧认为衡山郡是秦统一之后方由九江郡析置,而岳麓秦简出现了秦王政二十五年(公元前222年)“行戍衡山郡”的记载,这证明了衡山郡的设置不得晚于秦王政二十五年,结合传世史书中的秦始皇二十八年(公元前219年)巡省、途径衡山郡相关记载,说明衡山郡是秦始皇三十六郡之一,并非秦统一之后才析置而来。里耶秦简的一份秦始皇三十四年(公元前213年)文书中出现了“苍梧为郡九岁”的内容,由此我们可以推算苍梧郡始设于秦王政二十五年,由此推断苍梧郡、洞庭郡均为秦王政二十五年王翦“定荆江南地”的结果。里耶秦简“琅邪尉徙治即默”的内容,证明了即墨郡是秦统一之后析自琅邪郡,由此纠正了以往即墨郡置于秦灭齐之时、名为“胶东郡”等误说。

    秦简对郡治研究亦有推进作用。里耶秦简所记秦洞庭郡“新武陵别四道,以次传”的文书传递路线,揭示出新武陵是洞庭郡治,而里耶木牍又可以为我们继续探讨新武陵、门浅、上衍等秦县的地望提供线索。上述这些问题,如果仅靠传世文献记载,是无法圆满解决的。

    另外,有些地名上可以溯源到春秋时期、下可以系联到西汉时期,帮助我们补足县级政区沿革的链条。比如,韩国铜壶上的“隋”、三晋官印“上阳守”的“上阳”等地名,即《左传·隐公五年》记载的晋地“随”(在今山西介休市东)、《左传·僖公五年》所记之“上阳”(在今河南三门峡市东),这些都不见于战国时期的文献记载;秦官印中的“束州”(今河北河间市东北),见于《汉志》勃海郡,正因为官印的文字记录,束州建置可据此提前到秦代。

    战国文字中的地名资料,也为我们研究战国与秦代的郡县制提供了丰富的资料。这些地名有相当一部分是当时的县名。黄盛璋共考证了兵器中出现的三晋县名47个(分别为韩12个、魏17个、赵9个,国别待定9个)。李学勤《东周与秦代文明》更进一步指出:“战国古玺不少是县一级官吏的玺印。在这时的兵器上,每每也刻有郡县官吏署名的铭文。同时,许多货币上的地名,也是县的地名。这些古文字文物,都是郡县制发展普及的实物证据。”

    这些资料还提供了研究战国时期诸侯国疆域及其变迁的史料。在这方面,带有纪年信息的兵器、铜器的价值尤为明显。铸造时间比较明确的尖足布币、三孔布币也是重要的资料。例如,通过结合三孔布币、官印等资料中的“且居、崞、当城、上艾、余吾、扶柳”等地名,我们可以勾勒出公元前249年之后在今山西省北部与河北省内的赵国疆域的轮廓;战国晚期韩国桓惠王、韩王安时的兵器出现地名有黄河以北的“野王、长子、彘、屯留、介”、黄河以南的“安成、襄城、新城、蔑、釐、阳翟、高都、梁”等,据此我们能够大致描绘出这一时期在今山西、河南省境内的韩国疆域轮廓。

    最后是能帮助进一步校订古代典籍文献中的地名讹误。例如,秦封泥中出现了“方渠除丞”,学者据此指出,《汉志》北地郡的“方渠”“除道”两个县、道其实是秦“方渠除”一个道名的误拆。秦兵器的置用地名“徒淫”,旧多误释为《汉志》西河郡属县“徒经”,而根据秦封泥、张家山汉简《秩律》等来分析,“徒经”乃“徒淫”之讹误。云梦睡虎地秦简《编年记》载,秦昭王三十三年(公元前274年)“攻祭、中阳”(三三壹简),简文的“祭”通假为“蔡”,整理小组据此指出“疑《秦本纪》‘蔡阳’系‘蔡、中阳’之误”。

    这里试举一例,学者通过清华简曾解决了一个学界长期聚讼的问题。《左传·隐公元年》记载的著名故事“郑伯克段于鄢”的“鄢”、宋庠本《国语·郑语》虢郐十邑的“鄢”,其实都是“邬”之误字。考虑到乌、焉两个字或偏旁在战国文字阶段字形区别明显,推测这个讹混应该发生于中古时期。既知“郑伯克段于鄢”的“鄢”实为“邬”,叔段一败于京、再败于邬,最后逃窜到黄河以北的共地,其败逃过程与路线由此就得以厘清,无需牵扯到今天的河南柘城、鄢陵等地。

    再试举一个后世对《战国策》原文误改的例子。《战国策·燕策一》“苏代谓燕昭王”章记载:“秦不出崤塞,齐不出营丘,楚不出疏章”,意在论证如果安于现状、不思进取,那么秦、齐、楚等国当年就不会离开他们的始居地,也就不会有今日之盛。而在马王堆帛书《战国纵横家书》“苏秦谓燕王”章出现了类似的内容:“自复而足,楚将不出雎章,秦将不出商阉,齐不出吕遂,燕将不出屋、注”,比较以上两则材料,记载的秦不出的“崤塞”和“商阉”明显是两个不同的地名。参考清华简《系年》第三章有“成王伐商盖,杀飞廉,西迁商盖之民于邾吾,以御奴卢之戎,是秦之先,世作周卫”的内容(第14—16号简),李学勤指出,秦人始居之地本是东方的商阉(今山东省曲阜市一带),后被谪迁到西垂的邾吾(今甘肃省甘谷县一带),后世不了解这一史实,遂将策文“商阉”误改为今传本的“崤塞”。

    不足之处

    战国文字中的地名,相当一部分见于《竹书纪年》《战国策》《史记》等传世文献记载之中,这方面的资料整理与研究已取得很可观的成绩,但也存在一些不足。主要表现为文字释读有误、通假破读不当、地望考订有误等。

    战国文字地名资料中的疑难字不在少数,燕国官印中的所谓“洀汕”(《古玺汇编》0363),至今已有“洀谷”“湶水”“潮汕”等多种释读意见,尚未达成一致。形近混同导致的误释,也会给地名研究制造困难,比如楚简地名“酂”长期被误认为“ ”字,实际上“赞、鬲”二者的上部笔画区别还是比较明显的。虽然楚国金币上的“”早在1980年就被正确地释出,但并未在区分“”“酂”时发挥应有的作用。

    地名文字释读有误,有的因图版不够清晰导致。深圳南山博物馆藏的韩国七年令雍氏戈,内部加刻的置用地名所谓“崤山”,实际应是“比阳”二字,这是后续依靠清晰照片得以纠正的。

    楚国鄂君启舟节中的“鄂”地长期被定在今湖北鄂州市,加之“由”“古”二字时有形近相混的情况,导致“油”字被误释为“沽(湖)”,由此误释进而提出了节铭有三条水运路线的观点,将“逾”理解为“更换水路”或采用“水陆联运”的方式。直到陈伟正确地释读为“油(淯)”,即今汉水的支流白河,舟节上半段的线路才基本得以理顺。20世纪70年代之前,学界未将节铭的鄂定为《汉志》南阳郡的西鄂,在当时释字水平情况下,误释“油”为“沽(湖)”是一个不可忽视的因素——南阳盆地没有大型的湖泊水面,定为西鄂就难以解释节铭的“逾湖”之说,即便想到可能是“油”字,非历史地理领域的学者也不一定会联想到此字可以破读为《山海经》(《文选》注引)、《水经注》里的淯水。

    黄盛璋在《试论三晋兵器的国别与年代及其相关问题》名作中说,新郑“郑韩故城”窖藏兵器的地名“平匋”就是赵国的“平匋(陶)”,在今山西文水县西南。黄文之所以会将韩国兵器上的地名“平匋”判断为赵国之“平陶”,除受限于当时材料之外,无疑是受到地名“少曲”的合文先被误释为“肖”字,进而被误解为“趙”(赵)国之“趙”的负面影响,且赵国恰好有“平匋”地名,使他产生了“郑韩故城”窖藏兵器里也有赵国兵器的错觉。1985年李家浩将“肖”改释为“少曲”之后,这个疏误就此得以纠正。河南荥阳市广武镇南城遗址发掘出土的“坪匋市、平匋仓”等陶文资料,田成方、高靖尧定为见于《水经·济水注》的“平桃城”,此说甚是,由此可见韩兵、陶文的“平匋”并非赵地之“平匋”,属于“同名异地”。

    通假手段使用得当,可以为解决地理疑难问题提供钥匙。古文字学家于省吾曾提出“律例兼备”的标准,“在利用古音知识去解决考释古文字中的具体问题时,单从理论上说明某两个字可以有双声叠韵关系是不够的,最好能举出实际的例证。”“于省吾先生借用古代司法术语,把前者称为‘律’,后者称为‘例’,认为只有律、例兼备,所下的判断才能令人信服。”(转引自林沄《古文字学简论》)陈伟、朱德熙、李家浩将上述所列鄂君启舟节的“油”读为“淯”、“芸昜”读为“旬阳”,使得节铭的汉水上中游经行线路豁然明朗,是地名研究中运用通假手段的典型例证。

    通假破读使用不当,也是地名考证中的常见问题。我们以《利簋》“柬”地而言,或读为“管”(在今河南郑州市),符合“律例兼备”的条件,长期被学界信从。但新邑鼎铭文说周王从今天洛阳市的“新邑”历时“二旬又四日”才抵达柬地,坂方鼎铭文记载商王到“殷”之地祭祀之后当天返回柬地,这两则路程信息都证明了“柬”地是不可能在郑州市一带的。可见,“律例兼备”之外还有一些其他制约需要我们重视,否则就容易导致似是而非的判断。再比如,《古玺汇编》0147“啺攻帀鉨”齐国官印的“啺”旧多读为今山东曹县的“唐”,从这枚官印与“不其市节”同钤于一件陶器来看,“不其”在今山东青岛市即墨区,相应的“啺”地似乎不应远在曹县一带(今鱼台或曹县一带战国早中期属于宋国的领土,齐国只在湣王灭宋之后短暂占据过这一带)。孙刚指出,齐地“啺”应读为《左传》襄公六年“王湫帅师及正舆子、棠人军齐师”的“棠”,在今青岛市即墨区南部,这符合其对应“不其”地点的逻辑,其说可信。

    地望考订有误,最主要的表现是把时代较晚的地望误当作早期地望。这里以见于秦兵器的“平周”、三晋文字的“平州(周)”为例:《汉志》西河郡有“平周”县,在黄河以东今山西省介休市西,学界据此认为战国时期的“平周”也应是此处。但是,根据陕西米脂县出土的汉代画像石题记所载:“永和四年九月十日癸酉,河内山阳尉西河平周寿贵里牛寄平造作千万岁室宅”,这里的“河内山阳尉”是墓主人生前任职的地点、所任的官职,“西河平周寿贵里”是他的籍贯,墓主人死后归葬故里,据此吴镇烽提出“平周”应在今天的米脂县境内。又根据张家山247号墓汉简《秩律》中的“平周、西都、中阳”排在《汉志》上郡之列,周振鹤据此指出,“平周、中阳”二县过去被定位于黄河以东可能是错误的。陕西神木市栏杆堡出土“中阳”秦陶文,陕西榆林市桥头峁城址出土“西都”秦陶文(《文博》2019.5),为地望考订提供了重要线索,同时也为东汉永和五年(公元140年)陕北地区被匈奴占据、部分县治从黄河以西迁至今山西省境内提供了有力的佐证。

    “同名异地”与“同地异名”的现象也需要我们格外留意。这一现象在传世文献、出土文献中不在少数,比如“安阳、平阳、武城、安平、平舒、新城”等,赵国境内有东、西两个安阳、武城,燕国境内的“当城”地名还加了“东、北”这样的方位词加以限定,之前讨论的韩国与赵国同名“平陶”之地也是一例。我们再以战国后期赵国流通货币“三孔布币”所载地名“武阳”为例,说明辨析同名异地的必要性。在传世文献中我们可以找到多个“武阳”之地,出土的燕、赵两国的兵器、货币、玺印也有这一地名。由于今河北易县燕下都在《水经注》中被称作“武阳”,因此学界一直相信三孔布币的“武阳”就是燕下都,这也与文献《史记·赵世家》记载的孝成王十九年(公元前247年)燕国把“葛、武阳、平舒”交换给赵国之事相符。但是,我们按照正常的军事地理逻辑来审视,燕、赵这次交换土地之后,两国沿着徐水、易水一线分界,燕国在北、赵国在南,形成相对更易维持的军事对峙。如果按照易县“燕下都”是武阳的推断,此次燕国把易水以北的“武阳”换给了赵国,相当于彻底打破了燕赵沿徐水、易水对峙的军事态势,赵国成功楔入了燕国防线。而且从《史记》中“葛、武阳、平舒”的叙述顺序看,燕国换给赵国的武阳不可能是“燕下都”,只能位于今河北任丘市一带。其实赵国另有“武阳”之邑,即《史记·赵世家》记载的孝成王十一年(公元前255年)武阳君郑安平死后被收回的封地,即《水经·浊漳水注》长芦水的武阳城,在今河北省衡水市冀州区小寨乡,这才是三孔布币与玺印中的“武阳”。

    本文转自《光明日报》( 2024年11月02日 10版)

  • 马特·里德利《基因组》(节选)

    目录
    第一号染色体 生命
    第二号染色体 物种
    第三号染色体 历史
    第四号染色体 命运
    第五号染色体 环境
    第六号染色体智慧
    第七号染色体本能
    X和Y染色体 冲突
    第八号染色体自身利益
    第九号染色体疾病
    第十号染色体压力
    第十一号染色体个性
    第十二号染色体自我组装
    第十三号染色体史前
    第十四号染色体永生
    第十五号染色体性别
    第十六号染色体记忆
    第十七号染色体死亡
    第十八号染色体疗法
    第十九号染色体预防
    第二十号染色体政治
    第二十一号染色体优化人种论
    第二十二号染色体自由意志

    第一号染色体 生命

    一切归于腐朽之物皆源于他方 一个接一个地,我们抓住生命的气息而后死亡 如同产生于物质之海的泡沫上升、破裂、重归海洋 ——亚历山大·波普:《论人类》

    太初有“词”。这个词以自己携带的信息充斥了整个海洋,永不停息地复制它自己。这个词发现了如何重组化学物质,以便抓住熵的潮流中微小的逆流并给它们以生命。这个词把我们这个星球上的陆地从布满灰尘的地狱变成了郁郁葱葱的天堂。最终,这个词到达了鼎盛期,巧夺天工地造出了一种粥样的、被称为人脑的机器。这个机器发现并意识到了这个词的存在。

    每次我这么想的时候,我的那个粥样的机器就翻腾个不停。地球有40亿年的历史,我却幸运地活在当今这个时代;地球上有500万个物种,我却幸运地生为一个有意识的人;地球上有60亿人,我却荣幸地生在发现了这个“词”的国家;在地球所有的历史、地理环境与生物环境中,偏偏就在我出生的5年前、距离我出生的地方只有200英里处,我这个物种的两个成员发现了DNA的结构,从而揭示了宇宙中最大、最简单而又最惊人的秘密。如果你愿意,你可以嘲笑我的激情,就当我是个可笑的物质至上者吧:居然对一个缩写词(指DNA)都肯倾注这么大的热情。不过,跟着我到生命的源头去看看吧,我希望我能够让你相信这个词是多么迷人。

    1794年,博学的诗人、医生伊拉斯谟·达尔文(Erasmus Darwin)这样问道:“远在动物存在之前,地球和海洋就充满了植物;远在某种动物存在之前,其他动物就已存在。在这种情况下,我们能否假设:所有的有机生命都源自于,且仍然产生于,同一种有活性的纤维?”这样一个猜想在那个时代被提出来,让人惊愕。不仅仅是因为“所有有机生命都有共同来源”。这一大胆假说比他的孙子查尔斯·达尔文有关这一题材的书还早了65年,也是因为“纤维”这一古怪的用词。确实,生命的秘密就是在一条纤维里。

    但是,一根纤维怎么就能创造出有生命的东西?生命是不大好定义的,但是所有生命都有两种能力:复制自己的能力和制造秩序的能力。有生命的东西都能够造出跟自己差不太多的拷贝:兔子生兔子,蒲公英生蒲公英。但是兔子还会干一些别的。它们吃的是草,却能将其转化成兔子的骨与肉,不知怎么一来,就在混沌随机的世界里造出了有秩序有复杂性的身体。它们并没有违反热力学第二定律——在一个封闭的系统里所有事物都倾向于从有序变成无序。这是因为兔子不是一个封闭系统。兔子是靠消耗大量能量才得以建立一个有序的、复杂的局部结构——它的身体的。用爱尔温·薛定谔(物理学家,《生命是什么》的作者)的话说:生命是从环境里“把秩序喝进来”的。

    生命的两种能力,关键都在于信息。复制的能力之所以有可能存在,是因为存在一种“配方”,里面有制造一个新的身体所需要的信息。兔子的卵就带有组装一只新兔子的指南。通过新陈代谢来创造秩序同样也靠的是信息——用来建造和维修制造秩序的机器的指南。一只有生殖能力和代谢能力的成年兔子,是由它的生命纤维预先规划设计好的,正如一个蛋糕是在烘蛋糕的配方里就规划设计好了。这个想法可以直接追溯回亚里士多德。他曾说过,鸡的“概念”是隐含在鸡蛋里的,而橡树把自己的计划直接传达给了橡实。亚里士多德的这种模糊的信息学观念,在被物理学与化学埋没了多年之后,又被现代遗传学重新发现。麦克斯·德尔布吕克(Max Delbruck)(遗传学家)曾开玩笑地说:这位古希腊哲人应该因为发现了DNA而被追授诺贝尔奖。

    DNA的纤维就是信息,是一种用化学物质的密码写成的信息,每一个字母都是一种化学物质。而且,DNA密码事实上是用一种我们能够理解的方式写的,这真有点令人大喜过望。就像书面英语一样,遗传密码是写在一条直线上的线性语言;就像书面英语一样,遗传密码是数码式的,意思是说每一个字母都同等重要。更有甚者,DNA的语言比英语简单多了,因为它的字母表里只有四个字母,按惯例被称为A、C、G和T。

    当我们现在知道了基因就是用密码写的“配方”之后,就很难想象在过去只有那么少的人曾经想到过这一可能性。20世纪的上半叶有一个没有被回答的问题在生物学里一再出现:什么是基因?当时,基因简直是神秘莫测。让我们回到——不是DNA对称结构被发现的1953年,而是此前10年——1943年。10年之后在破解DNA的秘密上做了最突出工作的人,那时候都在干别的。弗兰西斯·克里克(Francis Crick)当时在朴次茅斯(Portsmouth)那边设计水雷;只有15岁的“神童”詹姆斯·沃森(James Waston)刚刚在芝加哥大学注册读本科,而且已立志用自己的一生去研究鸟类学;莫里斯?威尔金斯(Maurice Wilkins)在美国协助研制原子弹,罗萨琳·富兰克林(Rosalind Franklin)则在替英国政府工作,研究煤的结构。(四人是在1953年发现DNA结构上贡献最大的科学家;罗萨琳因罹患癌症于1958年去世;另外三人于1962年获得诺贝尔生理学及医学奖)

    还是1943年,在奥斯维辛集中营,约瑟夫·门格尔(Josef Mengele,纳粹医生,在犯人身上进行人体实验,被称为“死亡天使”)正对孪生子们进行致命的折磨,他的“科学研究”其实是对科学研究的一种极其恶劣的嘲讽。门格尔是在试图理解遗传学,但他的优化人种论已经被证明不是正确的途径。门格尔的实验结果对他之后的科学家是没有用处的。

    1943年,在都柏林,一个从门格尔那种人手下逃出来的难民、物理学家爱尔温·薛定谔,正在圣三一学院讲授一个名为“什么是生命”的系列讲座。他是在试图定义一个问题。他知道染色体载有生命的秘密,但是他不知道染色体是怎样储存生命秘密的。“就是这些染色体……以某种密码写就的程序,储存了每一个体发育的整个模式,以及发育成熟之后每一个体应有的功能。”他说,基因那么小,小得不可能是任何其他东西,而只能是一个大分子。他的这一见解影响了一代科学家——包括克里克、沃森、威尔金斯和富兰克林——去攻克一个顿时不再是无从下手的难题。但是,已经如此接近答案的薛定谔却偏离了轨道。他认为这个大分子之所以能够成为遗传物质的载体,是由于他心爱的量子理论。而他对自己这个想法执迷的研究最后被证明是走进了一条死胡同。生命的秘密跟量子没有任何关系。关于生命的答案并不出自物理学。

    1943年,在纽约,一位66岁的加拿大科学家奥斯瓦尔德·埃弗里(Oswald Avery),正在对一个实验进行最后的调整。这个实验将决定性地证实DNA是遗传的化学表现。这之前他已经发现,仅靠吸收一种化学溶液,一种肺炎菌就能从无害转变为有害。到了1943年,埃弗里已经总结出:发生了转变的东西就是DNA。但是他在发表自己结果的时候,表达得过于谨慎,以至于一段时间之内都没几个人注意到他的成果。在1943年5月写给他兄弟罗伊的信里,埃弗里也只比以前稍稍大胆了一点:

    如果我们是正确的(当然,这一点还有待证明),那就意味着核酸(DNA)并不仅仅是结构上重要,而是功能上活跃的物质,能够决定细胞的生化活性与特性。那么,就有可能利用已知的化学物质去准确地改变细胞并使这种改变遗传下去。这是遗传学家长期的梦想。

    埃弗里几乎已经走到这一步了,不过他仍然只是从化学的角度在思考。简·巴普提斯塔·冯·赫尔蒙特(Jan Baptista van Helmont,化学家、生理学家、医生)在1648年说过:“一切生命都是化学。”但这只是一种猜想。1828年,弗雷德里克·维勒(Friedrich Wohler)说:至少有些生命是化学。那时他刚用氯化氨和银的氰化物合成了尿素,从而打破了一直以来化学的世界与生物的世界之间不可逾越的界限。在他之前,尿素是只有生物体才能制造出来的东西。“生命就是化学”这句话是对的,不过也很煞风景,就像谁说足球就是物理一样。大概计算一下,生命可以说是三种原子的化学。生物体中98%的原子都是氢、氧和碳。但是,生命整体的特性,比如说遗传性,才有意思,而不是组成生命体的每一个零件。埃弗里想象不出来,是DNA的什么化学性质使它能够载有遗传性的秘密。这个问题的答案也不是从化学来的。

    1943年,在英国布莱奇利(Bletchley),一位天才数学家艾伦·图灵(Alan Turing)正在眼看着他最有洞察力的一个想法在绝密环境下变成真实的机器。图灵论证过:数字能够自己进行运算。为了破解德国军队洛伦兹编码器的秘密,英国制造了一台建立在图灵理论上的计算机:克劳索斯。这是一台多功能机器,有可以修改的内存程序。当时没有人意识到图灵也许比任何人都更接近生命的秘密,图灵自己更是没想到。遗传,实际上就是一种可以修改的内存程序;新陈代谢就是多功能的机器。把两者连接起来的是一种密码,是以物理的、化学的,甚至是非物质的形式存在的一种抽象信息。它的秘密就在于它能够复制自己。任何能够利用这世界上的资源把这密码复制出来的事物,就是有生命的东西。这种密码最可能的存在方式是数码方式:一个数字,一个短程序,或是一个词。

    1943年在新泽西州,一个有点与世隔绝的沉静的学者,克劳德·香农(Claude Shannon),正在琢磨一个他几年前在普林斯顿大学的时候想到的想法。香农的这个想法是说,信息和熵是一个硬币的两面,两者又都与能量有紧密的联系。一个系统的熵越小,它所含的信息就越多。蒸汽机之所以能够收集煤燃烧发出的能量并把它转化为旋转运动,是因为蒸汽机本身有很高的信息含量。人体也是如此。亚里士多德的信息理论与牛顿的物理学在香农的大脑中相遇了。像图灵一样,香农也根本没有想到生物学。但是香农这一深刻的想法,却比堆积如山的物理学与化学理论更接近于“什么是生命”这一问题的答案。生命也是数码信息,是用DNA写成的。

    太初有“词”,这个词却不是DNA。DNA的出现,是在生命已经出现之后,在生物体已经把两种活动——化学反应与信息储存,新陈代谢与复制——分工进行之后。但是DNA一直存着这个“词”的一份纪录,在漫长的岁月里将其忠实地传递下来,直到今天。

    想象一下显微镜下一个人类卵子的细胞核。如果有可能的话,你可以把23对染色体按大小重新排列一下,大的在左边,小的在右边。现在在显微镜下放大一下最左边的一根——纯粹是随意地,这根染色体被称为一号染色体。每一根染色体都有一条长臂和一条短臂,由一个被称为中心体的窄节所连接。如果你仔细地读,你会发现,在一号染色体的长臂上接近中心体的地方,有一串长约120个字母(A、C、G和T四种字母)的序列,重复出现了很多次。在每两个这种序列之间,是一些没有什么规律的“文字”,但这120个字母组成的“段落”却像一段耳熟能详的乐曲一样重复出现,总共出现了100次以上。阅读这种“段落”也许就是我们与最初的“词”最接近的时候。

    这个短“段落”是一个小基因,它也许是人体内最活跃的一个基因。它的120个字母不断地被制成一小段RNA,称为5SRNA。它与其他一些RNA、一些蛋白质被仔细地缠在一起,住在一个名叫核糖体的结构里。核糖体是把DNA配方翻译成蛋白质的地方。而蛋白质又是使得DNA能够复制的东西。借用萨缪尔·巴特勒(Samuel Butler)(19世纪作家)的风格,我们可以说:蛋白质就是一个基因用来制造另一个基因的手段,基因就是蛋白质用来制造另一个蛋白质的手段。厨师需要做菜的菜谱,而菜谱也需要厨师。生命就是蛋白质和基因这两种化学物质的相互作用。

    蛋白质代表的是化学反应,是生命活动、是呼吸、是新陈代谢、是行为——生物学家们称为“表现型”的那些东西。DNA代表的是信息,是复制、是繁殖、是性活动——生物学家们称为“基因型”的那些东西。两者都不能单独存在。这是一个经典的“先有鸡还是先有蛋”的问题:是先有基因还是先有蛋白质?先有DNA是不可能的,因为DNA只是一件含有些数学信息的无生气的东西,不能催化任何化学反应,非得有其他东西帮忙不可。先有蛋白质也不可能,因为蛋白质虽然能进行化学反应,却不能精确地复制自己。这样看来,不可能是DNA创造了蛋白质,也不可能是蛋白质创造了DNA。如果不是最初的那个“词”在生命的纤维中留下了一点淡淡的痕迹,这个谜团也许会一直让人觉得奇怪和糊涂。正如我们现在已经知道的,蛋是在鸡出现之前很久就有了的(爬行类动物是所有鸟类的祖先,它们是下蛋的),现在也有越来越多的证据表明在蛋白质存在之前有RNA。

    在当代,RNA是把DNA和蛋白质这两个世界联系起来的一种化学物质。它的主要作用是把信息从DNA语言翻译成蛋白质语言。但是,从它的行事特点看来,它几乎毫无疑问地是二者的祖先。如果DNA是罗马城,RNA则是希腊;如果DNA是维吉尔(Vivgil),RNA就是荷马。

    RNA就是那个“词”。RNA留下了五条线索,使我们看到了它是先于DNA和蛋白质的。直到今天,要想改变DNA序列中的任何组成部分,都是通过改变RNA序列中相应的组成部分而完成的,没有更直接的办法。而且,DNA语言中的字母T是从RNA语言中的字母U造出来的。现代的很多酶,虽然是蛋白质,但它们要想正常发挥功能却离不开一些小的RNA分子。更有甚者,RNA与DNA和蛋白质还有不同的一点,就是RNA能够复制自己,不需要任何外界帮助:给它正确的原料,它就能将其织成一条信息链。不管你观察细胞的哪一部分,最古老最基本的功能都需要RNA的参与。基因中的信息是以RNA的形式被一种需要RNA才能正常工作的酶提取出来的。这个信息,是由一台含有RNA的机器——核糖体翻译出来的。而在翻译过程中需要的氨基酸,又是一种小小的RNA分子给搬运过来的。在所有这些之上,还要加上一条,与DNA不同的是,RNA可以做催化剂,可以把分子——包括RNA——打断或是连上。它可以把RNA分子切断、连上,造出RNA的组成成分,把一条RNA链加长。一个RNA分子甚至可以在自己身上做“手术”,把自己的某一段切除,再把两个自由端接在一起。

    20世纪80年代早期,托马斯·赛克(Thomas Cech)和西德尼·奥特曼(Sidney Altman)(他们因在RNA功能方面的工作于1989年共获诺贝尔化学奖)发现了RNA的这些惊人特性,从而彻底改变了我们对于生命起源的理解。现在看来,最早的基因,“原基因”,很有可能是复制与催化合为一体的,是一个消耗自己周围的化学物质以复制自己的“词”。它的结构很有可能就是RNA。把任意一些RNA分子放在试管里,然后一遍遍地选出它们中间催化作用最强的成员,就可以重现RNA从什么也不是到具有催化作用的“进化”过程——几乎可以说是又进行了一次生命起源。这种实验最惊人的结果之一,就是最后得到的RNA往往含有一段序列,读起来酷似核糖体RNA基因——比如说,一号染色体上的5S基因——的序列。

    在第一只恐龙出现之前,在第一条鱼出现之前,在第一条虫子、第一棵植物、第一种真菌、第一种细菌出现之前,世界是RNA的世界。这大概是40亿年前,地球刚刚形成不久,宇宙也仅仅有100亿年历史的时候。我们不知道这些“核糖生物体”是什么样子的。我们只能猜想它们是怎样“谋生”的——从化学意义上说。我们不知道在它们之前有什么,但从存留在今天的生物中的线索看来,我们可以比较肯定地说RNA世界确实存在过。

    这些“核糖生物体”面临着一个大问题。RNA是不太稳定的物质,几小时之内就可以解体。如果这些“核糖生物体”去了比较热的地方,或是试图长得比较大,它们自己的基因就会迅速坏死,遗传学家们称为“由错误而引起的灾难”。后来,它们中的一个从试验与错误中发明了一种新的、更“坚强”的RNA的变种:DNA。它还发明了一套从DNA复制RNA的系统,包括一种我们称为“原核糖体”的机器。这套系统既要快速又要准确,于是它把遗传信息连在一起的时候每次连三个字母。每个三字母的小组都带有一个标签,使得它能够更容易地被“原核糖体”找到。这个标签是氨基酸做的。很久以后,这些标签被连在一起,制成了蛋白质,而那些三个字母的“词”,则成了制造蛋白质的密码——遗传密码。(所以直到今天,遗传密码每个词都有三个字母,作为制造蛋白质的配方的一部分,每个词拼出20个氨基酸中的一个。)这样,一个更复杂的生物就诞生了。它的遗传配方储存在DNA里,它体内的各种“机器”是蛋白质做成的,而RNA则在两者之间架起一座桥梁。

    这个生物名叫露卡(Luca)——所有物种在分化之前最后的一个共同祖先。(原文是The Last Universal Common Ancestor,缩写为LUCA)它长得什么样子?住在什么地方?传统的回答是:它长得像个细菌,生活在一个离温泉比较近的温暖的水塘里,或生活在浅海湾里。不过,在过去的几年里比较时髦的做法是给露卡一个环境比较险恶的住处,因为变得越来越清楚的是,地下与海底的岩石上存在着亿万种以化学物质为养分的细菌。现在一般认为,露卡存在于地下极深的地方,存在于火成岩的裂缝里,“吃”硫、铁、氢和碳为生。直到今天,生活在地球表面的生物仍然只是地球所有生物中薄薄的一层。地下深层那些喜热细菌——也许就是造就天然气的那些物质——体内含有的碳的总量,也许是地球表面所有生物含碳量的十倍。

    不过,在试图确认最早的生命形式的时候,有一个概念上的困难。现在,绝大多数的生物都不可能从它们父母以外的任何地方得到基因了,但是过去却不一定如此。即便是今天,细菌也可以通过吞掉其他细菌来得到它们的基因。在过去某一阶段,也许有过很普遍的基因交换,甚至基因“盗窃”。很久以前,染色体可能是既多且短的,每条染色体可能只有一个基因,失也容易得也容易。如果真是如此,卡尔·沃斯(Carl Woese)(微生物学家)指出,那么这样的生物就还不是一个能够存活一阵的生物体,而只是暂时存在的一组基因。也因此,存在于我们所有人身体里的基因,也许来自很多不同的“物种”,要想把它们归类溯源是徒劳的。我们不是来自于某一个祖先,而是来自于由带有遗传物质的生物体组成的整个“社区”。正如沃斯所说,生命物质从何而来有史可循,生命却没有家族史。

    你可以把这种“我们不是来自于某个个体,而是来自于一个社区”的结论看成是一种推销集体主义精神和全局观念的、意在让人感觉良好的模糊哲学。你也可以把它看成是“自私的基因”这一理论的终极证明:在过去那些日子里,基因之间的战争比今天更甚,它们把生物体作为临时的战车,只跟生物体建立短暂的联盟,而现在的战争更像是基因与生物体组成的团队与其他团队之间的战争。这两种说法信哪一种,你自己选吧。

    就算以前有过很多露卡,我们仍然可以猜想它们以前生活在哪里,以什么为生。这里,“嗜热细菌是所有生命的祖先”这一说法出现了第二个问题。由于三位新西兰人(A.Poole、D.Jeffares和D.Penny)在1998年公布的精彩的探索工作,我们突然瞥见了一种可能性,那就是,在几乎每一本教科书上都可以看到的生物进化树,可能都是大头朝下了。那些书都肯定地说,最先出现的生物是类似于细菌的简单细胞,它们的染色体是环状的,而且每个染色体只有一份;所有其他生物的出现,都是因为多个细菌结成“团伙”,变成了复杂细胞。现在发现,也许倒过来是更有道理的。最初的现代生物一点也不像细菌,它们也不生活在温泉里或是海底深处火山通道口。它们与原生质(protozoa)很像:它们的基因组是分成片段的,有多条线性染色体而不是一条环状染色体,而且它们是“多倍体”——每一个基因都有几个备份,用来帮助改正复制中出现的拼写错误。还有,这些原生质应该是喜欢比较冷的气候。正如帕特里克·福泰尔(Patrick Forterre)(微生物学家)一直坚持的,现在看起来,细菌可能是后来才出现的,是高度简化与功能特化了的露卡的后代,是在DNA—蛋白质世界被发明之后很久才出现的。它们的把戏是把在RNA世界里形成的很多“设备”都扔掉,以便在很热的地方存活。在细胞里存留了露卡那些原始的分子特征的生物是我们;细菌比我们“进化得更高级”。

    一些“分子化石”的存在支持这个奇怪的说法,这些“分子化石”是一小点一小点的RNA:向导RNA,桥RNA,小细胞核RNA,小核小体RNA,自我剪接的内含子(这是一些不同功能的RNA)。它们在你的细胞核里转悠,干一些完全无用的事,比如说,把它们自己从基因里切出去。细菌就没有这些玩意。“细菌把这些东西给扔掉了”是比“我们发明了它们”更简约的解释。(可能让人有点吃惊的是,从原则上说,除非有其他理由,否则科学认为简单解释是比复杂解释更有可能的,这个原理在逻辑上被称为“奥卡姆剃刀”。)细菌在“侵入”很热的地方,比如说温泉或温度可达170摄氏度的地下岩层的时候,就把这些旧的RNA扔掉了。为了尽量减小由热而导致的错误,它付出的代价就是简化自身的设备。扔掉这些RNA之后,细菌发现它们的细胞中经过简化的新设备使得它们在一些繁殖速度越快越有优势的生存夹缝里——比如寄生的环境或以腐烂的动植物为生的环境——有了竞争实力。我们人类保留了那些旧的RNA,那些功能早已被其他“机器”代替了的旧“机器”的残余,一直没有把它们整个扔掉。与竞争极为激烈的细菌世界不同,我们——所有动物、植物和真菌——从来就没有遇到过如此激烈的、要简单快速才占优势的竞争。相反,我们看重的是复杂的结构、是有尽可能多的基因,而不是一台高效使用这些基因的机器。

    遗传密码中那些三个字母的词在所有生物中都是一样的。CGA的意思是精氨酸,GCG的意思是丙氨酸——在蝙蝠里、在甲虫里、在白桦树里、在细菌里,都是如此。即使是在那些古细菌(这些“古细菌”现在仍然存在)里以及那些名叫病毒的微小而又狡猾的囊状物里,它们的意思也是一样的。尽管这些古细菌有些生活在大西洋表面之下几千英尺处温度达到沸点的硫磺泉里。不管你去世界的什么地方,不管你看到的是什么动物、植物、昆虫或其他一团什么东西,只要它是有生命的,它就用的是同一个字典、理解的是同一套密码。所有的生命原是一体。除了在个别小范围内有些改动——主要是由于不明的原因而发生在有些纤毛原生动物里——之外,每一个生命体都用同样的遗传密码。我们都用的是同一种“语言”。

    这就意味着——信仰宗教的人士也许会发现这是一个有用的说法——只有一次创世纪,生命的诞生源自一个单独的事件。当然,最初的生命仍然有可能是发源于另一个星球并由太空船播撒在地球上的;也有可能最初有过千万种生命,但只有露卡在那一“锅”原始汤里那种无情的、“谁有本事谁拿”的竞争中幸存下来。但是,在60年代遗传密码被破解之前,我们不知道我们现在知道了的东西:所有生命都是一体;海带是你的远房表哥,炭疽菌是比你更发达的你的亲戚。生命的统一性是从经验中得到的事实。伊拉斯谟·达尔文当年不可思议地接近了这一事实:“所有的有机生命都源自于,且仍然产生于同一种有活性的纤维。”

    就这样,从基因组这部“书”里,我们可以读到一些简单的真理:生命的统一性,RNA的重要性,地球上最早的生命的化学特性,大的单细胞生物可能是细菌的祖先,细菌不是单细胞生物的祖先。40亿年前的生物是什么样的,我们没有化石可以研究。我们只有这部了不起的书:基因组。你的小指头上细胞里的基因,是第一个有复制功能的分子的嫡系传人。这些基因通过一条永不断裂的复制链,在复制了几十上百亿次之后到达我们这里,它们携带着的数码信息里仍然留有最早的生存竞争的痕迹。如果人类基因组可以告诉我们原始汤里发生的事情,它会告诉我们多少那之后的4000万个千年里发生的事!人类基因组是一部我们历史的纪录,它由密码写就,为运行的“机器”而写。

    第二号染色体 物种

    具有那么多高贵品质的人,肉体仍然带着他的卑微起源的抹不去的痕迹。 ——查尔斯•达尔文

    有些时候,你会对一些显而易见的东西熟视无睹。1955年以前,人们一致认为人有24对染色体。这是那种“人人都知道这是对的”的事。之所以人人都知道这是对的,是因为在1921年,有个名叫西奥菲勒斯•佩因特(Theophilus Painter)的得克萨斯人,把因为精神失常和自虐而被阉割了的两个黑人和一个白人的睾丸拿来,做成了极薄的切片,把这些切片用化学试剂固定之后,在显微镜下进行观察。佩因特试着数了这几个倒霉蛋的成精细胞里那些缠成一团的、不成对的染色体,最后得出了24这个数。“我自信这个数字是正确的”,他说。其他人之后又用其他方法重复了他的实验。所有的人都得到了24这个数。

    之后的30年,没人对这个“事实”表示过怀疑。有一组科学家还放弃了他们在人的肝脏细胞上进行的实验,因为他们在这些细胞里只找到23对染色体。另一个研究人员发明了一种把所有染色体都分离开的方法,但他仍然认为自己看到了24对染色体。直到1955年,一个印度尼西亚人庄有兴(Joe-Hin Tjio)从西班牙到瑞典去跟阿尔伯特•莱文(Albert Levan)工作,真相才被发现。

    庄和莱文使用了更好的技术,清清楚楚地看到了23对染色体。他们甚至还回过头去在一些书中的照片里数出了23对染色体,尽管照片下面的文字注明应该有24对。没有人会糊涂到不想看见事实真相的地步。(这句话在这里都含有讽刺的意味。)

    人类没有24对染色体,其实是一件叫人惊讶的事。大猩猩有24对染色体,黑猩猩也是。在猿类动物里我们是个例外。在显微镜下面,我们与其他猿类动物最大、最明显的区别,就是我们比它们少一对染色体。原因很快就弄清了,并不是猩猩的染色体到我们这儿丢了一对,而是在我们的身体里,两对猩猩的染色体融合在一起了。人类染色体中第二大的一条,二号染色体,是两条中等大小的猩猩染色体融合起来形成的。这一点,从人类染色体与相应的猩猩染色体上那些暗带的排列就可以看出来。

    教皇让•保罗二世(PopeJohn-PaulII)在1996年10月22日对天主教科学院所作的讲话中提出,古猿与现代人类之间存在一个“本体的断裂”——这个断裂点就是上帝向动物的一个分支注入了人的灵魂的时刻。这种说法可以使教廷与进化论达到和解。也许这个本体的飞跃是发生在两条猩猩染色体融合的时候吧,也许编码灵魂的基因就在人类二号染色体中间的地方?(这句话在这里都含有讽刺的意味。)

    先不提教皇了。人类这个物种怎么说也不是进化的巅峰。进化没有巅峰,进化也没有进步退步之分。自然选择不过是生命形式不断变化的过程,而变化是为了适应由物质环境和其他生命形式提供的多种机会。生活在大西洋底硫磺出口的黑烟菌,是在露卡时代之后不久就跟我们的祖先分开了的一族细菌的后裔。起码在基因水平上,这种细菌大概比一个银行职员还进化得更高级。因为这种细菌每一代都比人的一代更短,所以它有更多次机会去完善自己的基因。

    这本书只专注于一个物种——人类——的状况,但这并不说明这个物种的重要性。当然,人类是独特的。在他们的两只耳朵之间,拥有地球上最复杂的生物机器。但是复杂性并不是一切,复杂性也不是进化的目的。这个星球上的每一个物种都是独特的。独特性是一种过剩了的商品。尽管如此,我还是想在这一章里探讨一下人类的独特性,去发现我们这个物种特性的根源。原谅我的狭隘吧。起源于非洲的没毛灵长类,虽然有短暂的繁荣,但他们的故事只是生命的历史中的一个脚注。不过,在这些没毛的灵长类自己的历史里,他们的故事可是占据中心地位的。我们这个物种的独特“卖点”到底是什么呢?

    在对环境的适应上,人类是个成功者。他们也许是整个地球上数量最多的大型动物。他们有大约60亿个成员,加在一起有3亿吨生命物质。那些在数量上达到或超过人类水平的大型动物,要么是那些被我们驯化了的动物:牛、鸡、羊,要么是依赖于人类环境的动物:麻雀和老鼠。相比之下,全世界只有不到1000只山地大猩猩。即使是在我们开始屠杀它们、毁坏它们的生存环境之前,它们的数量也很可能超不过现有数量的十倍。还有,人类这个物种显示了征服多种生存环境——热的、冷的,干的、湿的,海拔高的、海拔低的,海洋、沙漠——的惊人能力。除了人之外,鹗、仓枭和燕鸥是仅有的在南极洲之外的各大洲都比较兴旺的大物种,而在各个大洲,它们的生存环境都很有限。人类在适应环境上的成功无疑是付出了高昂代价的,我们注定很快就要遇到大灾难(环境破坏):作为一个成功的物种,我们对未来真是出奇地悲观。不过到目前为止,我们还算成功。

    但是,一个惊人的事实是:我们来自于一长串失败。我们是猿,而1500万年前,面对那些“设计”得更好的猴子的竞争,猿差点儿灭绝了;我们是灵长类,而4500万年前,面对那些“设计”得更好的啮齿动物的竞争,灵长类哺乳动物差点儿灭绝了;我们是由爬行动物进化来的四足动物,但是2亿年前,面对那些“设计”得更好的恐龙的竞争,我们的爬行动物祖先差点儿灭绝了;我们是有叶状鰭的鱼的后代,但是3.6亿年前,面对那些“设计”得更好的伞状鰭鱼的竞争,有叶状鰭的鱼差点儿灭绝了;我们是脊索动物,但在5亿年前的寒武纪,面对那些非常成功的节肢动物的竞争,我们是侥幸生存下来了。我们在适应环境上的成功,是克服了那些让人胆战的困难才取得的。

    在露卡之后的这40亿年里,那个“词”在——用理查德•道金斯(Richard Dawkins,生物学家)的话说——制造“生存机器”方面变得越来越高明了。“生存机器”是那些大型的、用血肉构造成的生物体,它们善于把局部的熵减小以更好地复制自己体内的基因。它们能做到这一点,是因为它们经历了漫长的、大规模的尝试与失败:自然选择。上千亿的生物体被造出来并被试验过,只有那些达到了越来越苛刻的生存条件的生物体,才得以繁衍下去。一开始,这只是一个比较简单的、化学反应是否高效的问题:最好的生物体是那些发现了把其他化学物质转变成DNA和蛋白质的细胞。这个阶段持续了大约30亿年。其他星球上的生命在那个时候是什么样的我们不知道,但在地球上,生命好像就是不同种类的变形虫之间的竞争。在那30亿年间曾经生活过上千亿的单细胞生物,每一个生命在几天之内繁殖,然后死亡。那30亿年里发生了大量的尝试与失败。

    但是生命并没有到此为止。大约10亿年前,很突然地出现了一种新的世界秩序:更大的、多细胞的生物体被发明了,大型生27物爆炸性地大批出现。从地质学角度来看,只是一眨眼的工夫(俗称的寒武纪大爆发也许只持续了1000万到2000万年),就出现了大批结构无比复杂的生物:跑得飞快的、几乎有一英尺长的三叶虫,比这还长的拖着黏液的蠕虫,半码(1码约0.914米)长的舞动的藻类。单细胞生物仍然占据着统治地位,但是这些不认输的大型“生存机器”在给自己划出一块生存的地域。而且很奇怪,这些多细胞体获得了一些带有偶然性的成功。尽管从外太空来的陨石曾经砸到地球上,造成一些零星的倒退,而且很不幸的是,这种灾难总是倾向于灭绝更大、更复杂的生命形式,但是进化的趋势还是清晰可辨。动物存在的时间越长,它们中的一些就变得越复杂。具体地说,那些大脑最发达的动物的大脑,每一代都变得更大:古生代最大的大脑比中生代最大的要小,中生代最大的大脑比新生代最大的要小,新生代最大的大脑又比当代最大的要小。基因们发现了一种实现自己“野心”的方法:制造一种不仅仅能够生存,而且还具有智慧行为的机器。现在,如果一个基因发现自己是在一个受到了冬季暴风雪威胁的动物体内,它可以指望这个动物做些聪明的事,比如迁徙到南方,或是给自己搭个避风的住所。

    从40亿年前开始的这个让人喘不上气的“旅程”把我们带到了距现在1000万年前的时候,最初的昆虫、鱼、恐龙和鸟类都早已出现,那时地球上大脑最大(大脑与身体的比例最大)的生物可能就是类人猿一我们的祖先。距现在1000万年前的那个时候,在非洲可能有两种,甚至两种以上不同的猿。这两种猿,一种是大猩猩的祖先,另一种是黑猩猩和人类的共同祖先。大猩猩的祖先们有可能在中部非洲的一串火山区的森林里安顿了下来,从此在基因上与其他的猿隔断了。那之后的500万年间,另一种猿有了两种不同的后代,最终导致人类和黑猩猩的出现。

    我们之所以知道这段历史是因为它是写在基因里的。就在1950年,伟大的解剖学家J•Z•杨(Young)还写道:我们还不清楚人类到底是与猿来自于同一祖先,还是起源于与猿在6000万年前就分开了的另一灵长类的分支。那时还有人认为棕猩猩(orangutan)是人类最近的表亲。但是现在,我们不仅知道黑猩猩与人类分开是在大猩猩之后,还知道人类和猿的分开发生在不到1000万年前,甚至可能是不到500万年前。(现在一般认为,人的祖先与棕猩猩的祖先是在1000万〜1500万年前分开的,人的祖先与大猩猩的祖先是在600万〜800万年前分开的,而人的祖先与黑猩猩的祖先是在500万〜700万年前分开的。)物种之间的关系可以从基因中那些随机的“拼写”错误积累的速度中看出来。黑猩猩和大猩猩基因的区别比黑猩猩和人类基因的区别要大——每一个基因、每一个蛋白质序列、每一段你任意捡起来的DNA序列,都是如此。用最没有诗意的话说,一条人类DNA与一条黑猩猩的DNA组成的杂合体在比较高的温度下才能分解成两条,而大猩猩DNA与黑猩猩DNA的杂合体或人类DNA与大猩猩DNA的杂合体,在较低温度下就可分开。

    比确定谁是谁的祖先更难的,是校正分子钟以精确判断新物种出现的年代。因为猿的寿命很长,而且年龄比较大的时候才开始生育,所以分子钟走得比较慢(基因的拼写错误大多是在DNA复制的时候、在制造卵子和精子的时候产生的)。但是我们还不知道在校正分子钟的时候怎样把这个因素考虑进去,而且,基因和基因也不一样。有些DNA片段好像暗示着人类和黑猩猩分开是很久以前的事;其他的DNA,比如说线粒体DNA,又显示一个更近的日期。500万到1000万年是被普遍接受的一个范围。

    除了二号染色体是由两条猩猩的染色体融合而成之外,人类染色体和黑猩猩的染色体只有极少和极小的看得见的区别。有13条染色体是一点区别都看不出来的。如果你随机选取黑猩猩基因组里的一个“段落”,然后把它与人类基因组里相应的“段落”比较,你会发现只有个别几个“字母”是不一样的:平均每100个字母只有不到两个不同。我们就是黑猩猩,这句话有98%的准确度;

    黑猩猩就是人,这句话的可信度是98%。如果这还不能打击你的自信,那么想一想,黑猩猩97%是大猩猩,人类的97%也是大猩猩。换句话说,我们比大猩猩更像黑猩猩。

    这怎么可能呢?我跟黑猩猩之间的区别太大了。黑猩猩毛比我多,它的头的形状跟我的不同,它身体的形状跟我的不同,它的四肢跟我的不同,它发出来的声音也跟我不同。黑猩猩身上就没有一样东西是跟我有98%的相同的。可是,真是这样吗?黑猩猩和人的区别到底多大,得看跟谁比。如果你拿两个黏土做的老鼠模型,要把一个改成黑猩猩模型,另一个改成人的模型,大部分的改变会是一样的;如果你拿两个黏土做的变形虫模型,要把一个改成黑猩猩模型,另一个改成人的模型,大部分的改变会是一样的。两个模型都需要加上32个牙、四肢、每只手上五个指头、两只眼睛、肝脏;每个模型都需要毛发、干的皮肤、脊柱和中耳里的三块小骨头。从变形虫的角度说,或者从一个受精卵的角度说,人类和黑猩猩就是98%地相似。黑猩猩身体内的骨头没有一块是我们没有的;黑猩猩大脑里的化学物质没有一样是在人脑里找不到的;我们的免疫系统、消化系统、血液系统、淋巴系统、神经系统,没有哪一部分是黑猩猩没有的,反过来也是一样。

    黑猩猩大脑里的脑叶也没有哪个是我们没有的,我们的脑叶黑猩猩也都有。维多利亚时代的解剖学家理查德•欧文爵士(Sir Richard Owen),在为了抵抗自己这个物种是猿的后代这一理论所作的最后的、绝望的努力中,声称海马区小叶是人脑特有的结构,是灵魂的所在地,是神造人类的证据。这是因为从探险家保罗•杜查禄(Pauldu Chaillu)带回的来自刚果的大猩猩大脑标本里,欧文没能找到海马区小叶。托马斯•亨利•赫胥黎(Thomas Henry Huxley,19世纪生物学家,达尔文进化论的坚定捍卫者)愤怒地回应说:海马区小叶在类人猿的大脑里是存在的。“不,它是不存在的”,欧文说。“它就是存在的”,赫胥黎说。1861年间有一个短暂的时期,“海马区问题”是维多利亚治下的伦敦关注的焦点,在幽默杂志《木偶剧》和查尔斯•金斯利(Charles Kingsley)的小说《水婴》里都被讽刺过。赫胥黎的观点-今天也有很多人响应——并不仅限于解剖学:“我不是那种要把人的尊严建立在他那伟大的脚趾头上的人,也不想灌输如果类人猿有海马区小叶人类就没救了这种观念。相反,我已经尽我所能去扫掉这种‘虚荣心’。顺带说一句,在“海马区问题”上赫胥黎是对的。

    归根结底,从黑猩猩和人类的共同祖先住在非洲中部的日子到现在,人类只繁衍了不到30万代。如果你拉着你妈妈的手,她又拉着你外祖母的手,她又拉着你曾外祖母的手……这条线刚刚从纽约延伸到华盛顿,你们就已经要跟“丢失的一环”(这里指人和黑猩猩的共同祖先,目前还没有找到它的化石。)——人类与黑猩猩的共同祖先一拉手了。500万年是一段很长的时间,但是进化不是按年计算,而是按代计算。细菌要想经历这么多代只需要25年时间。

    那“丢失的一环”长得是什么样子呢?通过仔细研究人类祖先的化石,科学家们已经离答案非常近了。离“丢失的一环”最近的化石可能是一种小小的猿人的骨架,这种猿人被取名为阿底皮西卡斯(Ardipithecus),存在于距今大约400万年前。尽管有几个科学家认为阿底皮西卡斯存在于“丢失的一环”之前,这其实不太可能:阿底皮西卡斯的骨盆主要是为直立行走而“设计”的;从这种设计退化回与大猩猩和黑猩猩的骨盆相似,是极不可能的。当然,我们需要找到比阿底皮西卡斯还要早几百万年的化石,才能够准确无误地知道我们在观察阿底皮西卡斯的时候是否就是在观察人与黑猩猩的共同祖先。不过,我们通过阿底皮西卡斯可以大致猜想一下那“丢失的一环”长得什么样子:它的大脑可能比现代的黑猩猩的大脑要小;它的身体活动,在靠两条腿支撑的时候,可能与现代的黑猩猩一样灵活;它的饮食结构也许跟现代的黑猩猩差不多:以果类和其他植物为主;公的比母的个子大很多。从人类的角度来看,很难不想到这个“丢失的一环”跟黑猩猩比跟人相似。黑猩猩当然可能不同意,但是看上去,我们这一支无论如何是比黑猩猩的一支经历了更多的变化。

    与曾经生活过的每一种猿一样,这“丢失的一环”很可能是生活在森林里的:一种标准的、现代的、上新世的、以树为家的猿。在某一时刻,它们的群落分成了两支。我们知道这一点,是因为一个群落分成相互隔绝的两部分时常常引发特化(speciation,指在相对稳定的环境中充分进化以至于不再能适应其他环境):这两个部分在基因上逐渐有差别了。造成“丢失的一环”分成两支的,有可能是一座山脉,也有可能是一条河流[今天,刚果河分隔着黑猩猩和它的姐妹物种一小猩猩(bonobo),也有可能是大约500万年前形成的西部大裂谷把人类的祖先隔在了干旱的东侧。法国古生物学家伊夫•科庞(Yves Coppens)把这最后一种假设称做“东侧理论”。这方面的理论越来越不着边了。也许是当时刚形成不久的撒哈拉沙漠把我们的祖先隔在了北部非洲,而黑猩猩的祖先留在了南部。也许在500万年前,当时很干旱的地中海盆地被源自直布罗陀海峡的巨大洪水——比尼亚加拉河(美国与加拿大交界处的河流)的流量大1000倍——给淹了,这样,就突然把“丢失的一环”中的一部分给隔绝在了地中海里的一些大岛上,它们在那里以涉水捕捉鱼和有壳的海洋生物为生。这个“洪水假说”闹得沸沸扬扬,却没有任何确凿证据支持它。

    不管具体机制是什么,我们可以猜想到,我们的祖先是与其他猿隔绝的很小的一支,而黑猩猩的祖先当时则是主流的一族。这是因为从人类的基因里,我们发现人类在进化过程中经过了一个非常窄的“瓶颈”(也就是说,有一个人口数量极少的时期),比黑猩猩经过的“瓶颈”窄得多:在人类基因组里,随机的变异比黑猩猩基因组里的少得多。

    那么,让我们来勾画一下孤岛(不管是真的岛还是假的)上的这群被隔绝的动物吧。这一小群猿人开始近亲繁殖,面临着灭绝的危险,被遗传学上的“初始效应”(如果一个群体在开始的时候只有数目很少的个体,意即群体里的所有个体都是很少的几个祖先的后代,那么祖先身体里偶然产生的基因变异就会在这个群体里变得非常普遍,这就是初始效应。在一个祖先数目很多的群体里这种情形就不会发生。)所影响(这种效应使得一个很小的群落可以有很大的、完全是由偶然性造成的遗传变异)。在这一小群猿人中出现了一个很大的突变:它们的两条染色体融合起来了。从这以后,它们的繁殖就只能在自己这一群之内进行了,就算是这个“岛”跟大陆重新接合之后也是如此。它们与大陆上它们的“亲戚”杂交而生的后代是不育的。[我要瞎猜了,我们跟黑猩猩到底能不能生出有生育能力的后代?科学家们好像对我们这个物种在繁殖的孤立性方面(reproductive isolation)很缺乏好奇心嘛。]

    这个时候,其他惊人的变化开始出现了。骨架的形状开始变化,使得直立和用两条腿行走变得可能了,而这很适合于在平坦的地区长途跋涉;其他猿的行走方式更适合于在比较起伏的地区短途行走。皮肤的变化也出现了——毛越来越少,而且在热天大量出汗,这一点在猿类动物里是比较特殊的。这些特点,再加上给脑袋遮阴的一层头发,加上头皮上结构像散热器一般的血管,示意着我们的祖先已经不再生活在有树阴、多云的森林里了;它们行走在开阔的陆地上,行走在赤道上的烈日下。

    什么样的生存环境造成了我们的祖先骨架方面的巨大变化?你可以尽情地猜测。只有极少的几个说法被证明是有可能的,也只有极少的几个被证明没有可能。在那几个有可能的理论里,最可信的一个是说这些变化的发生是因为我们的祖先被隔绝在了一块比较干旱和开阔的草原。这个生存环境找上了我们,我们可没有去找它:在非洲很多地区,那个时代正是森林被热带草原取代的时候。一段时间之后,在大约离现在360万年前,从现在的坦桑尼亚的萨迪曼火山飘出来的火山灰刚开始湿润,在这些火山灰上,三个古人类有目的地从南走向北。走在最前面的是最大的那个;紧跟它的足迹的是中等大的那个;最小的那个走在它们左边一点,要甩开大步才能跟上。一段时间之后,它们短暂地停了一下,向西面偏了偏,然后又继续前行,就像你我一样直立地前行。在雷托利(位于坦桑尼亚北部。)发现的脚印化石,要多清楚就有多清楚地讲述了我们祖先直立行走的故事。

    即便如此,我们所知仍然很少。雷托利的那三个猿人是一男、一女和一个孩子,还是一个男的和两个女的?它们吃些什么?它们喜欢什么样的栖息地?由于东非大裂谷阻挡了从西面而来的潮湿的风,非洲东部在当时毫无疑问地越来越干了,但是这并不说明它们是在找干旱的地方。事实上,我们对于水的需要,我们的易出汗,我们的适应于含有大量油和脂肪的鱼类食物,还有其他一些因素(包括我们对海滨、对水上运动的喜爱),暗示着我们的祖先可能是喜欢水的。我们游泳游得相当不错。最初,我们的祖先是生活在水边的森林里或是湖边吗?

    当时间合适的时候,我们的祖先戏剧性地变成了食肉动物。但在那之前,一种崭新的类人猿——实际上是几种——出现了。它们是雷托利猿人那样的生物的后代,但不是现代人类的祖先,而且它们可能是只以植物为食的。它们被称为南方古猿(robust Australopithecus,robust一词是“结实、粗壮”的意思)。在研究这些猿人的时候,基因帮不上我们,因为这一支猿人已经灭绝,也没有进化成其他物种。正如如果我们不能“阅读”基因,我们就无从得知我们与猩猩的表亲关系一样,如果我们一这里我所说的“我们”,主要是指李基—家(Louis S.B. Leakey,他的太太Mary Leakey和儿子Richard Leakey;三人都是20世纪英国著名考古学家、古人类学家,雷托利的南方古猿的脚印就是由Mary带队的一组考古学家于1976年发现的)、唐纳德•约翰逊(Donald Johanson,考古学家,于1974年在埃塞俄比亚发现了一具相当完整的古人类的骨骼,是目前为止发现的年代最古远的古人类的骨骼,被起名为“露西”(Lucy))等人一没有发现那些化石,我们就不可能知道我们曾经有过很多南方古猿这样的更近的表亲。别看南方古猿名字挺“粗壮”,其实只是指它们的下颚很结实。它们是很小的动物,比黑猩猩小,也比黑猩猩笨,但是它们的身体已经直立了,脸部也很发达:有着由巨大的肌肉支撑着的庞大的下颚。它们咀嚼很多,可能咀嚼的是草和一些比较硬的植物。为了能够更好地把植物在嘴里翻来覆去地嚼,它们的犬齿也逐渐消失了。最后,大约100万年前吧,它们灭绝了。我们可能永远不会知道太多它们的事情了。也许是我们把它们吃了呢。

    言归正传吧,当时我们的祖先是比南方古猿更大的动物,跟现代人一样大或者更大一点:它们身高接近两米,很是魁梧,就像艾伦•沃克(AlanWalker)(艾伦•沃克:当代美国考古学家。)和理查德•李基(RichardLeakey)描述的、存在于160万年前的著名的纳瑞奥科托米(Nariokotome)男孩(Nariokotomeboy,指的是在肯尼亚纳瑞奥科托米沙流地带发现的一具古人类骨骼。)的骨骼。它们已经开始使用石器工具,代替它们的牙齿。这帮家伙有着厚厚的头骨,有石头做的武器(这两者可能缺一不可),已经完全能够杀死和吃掉毫无抵抗能力的南方古猿了。在动物世界里,表亲关系一点不可靠:狮子会杀死猎豹,狼会杀死草狼。没有导演,是一些有竞争优势的自然进程把这个物种带入了后来爆炸般的成功——它们的大脑越来越大了。有些特别喜欢拿数学折磨自己的人计算过,大约每过10万年,大脑就增加1.5亿个脑细胞,当然,这个数字就像是旅游手册上常见的那种一点用处都没有的统计资料。发达的大脑、食肉、缓慢的发育、在成年之后仍然保留孩童时期的特征(光滑皮肤、小下颚、拱形的头盖骨),这些都必须同时存在。如果不吃肉,需要大量蛋白质的大脑就成了昂贵的奢侈品。如果头骨过早定型,就不会有大脑所需的空间。如果不是发育缓慢,就不可能有时间去学习如何充分发挥一个发达大脑所具备的优势。

    这整个过程可能是由性选择来推动的。除了大脑的改变之外,另外一个很大的变化也在发生。与雄性相比,雌性身材的变化很大。在现代的黑猩猩里、南方古猿里和最早的猿人化石里,雄性是雌性的一倍半大,但在现代人里这个比例小得多。在化石纪录里这个比例稳步地在降低,这是史前纪录里最受忽视的事实之一。它意味着这个物种的交配方式发生了变化。黑猩猩那种多配偶的、短暂的性关系,大猩猩那种“妻妾”成群的多“妻”制,被一种类似于一夫一妻制的形式所代替,身体大小方面性别差异的减小就是一个清晰的证据。但是,在一个一夫一妻制的系统里,雄性和雌性都会感到认真选择配偶的压力。在多妻制下,只有雌性需要小心选择配偶。配偶之间长久的纽带把每一个猿人与它的配偶在它生育期的大部分时间内都拴在一起了:质量,而不是数量,突然重要起来了。对于雄性来说,选择一个年轻的配偶突然至关重要起来,因为年轻雌性的生育能力还能保持很多年。对于异性身体上象征年轻的、如孩童般的特征的青睐,意味着对于年轻人的拱形的大头盖骨的青睐,大脑增大的过程也就从此开始。

    把我们推向习惯性的一夫一妻制,或起码是把我们往这里拉得更深一些的,是在食物方面产生的性别分工。我们发明了一种跟地球上所有其他物种都不同的性别之间的合作关系。由于女性采到的植物类食物是两性分享的,男性就赢得了从事危险的打猎活动的自由;由于男性得到的肉类食物是两性分享的,女性就可以得到高蛋白的、易于消化的食物,而不必为了自己去寻找这种食物而放弃照顾幼小的孩子。这意味着我们这个物种在干旱的非洲平原上找到了减少饥馑的生存方法。当肉类比较少的时候,植物类食物补充了不足;当干果和水果少的时候,肉类可以填充不足。这样,我们得到了高蛋白的食物,却没有必要像猫科动物那样发展出高度专门化的捕猎方法。

    通过性别分工而培养出来的一些习惯也延伸到了生活的其他方面。我们擅长分享东西,就像是有人逼着我们这么做似的。这就带来了新的好处:每个个体可以发展专门的技能。我们这个物种特有的这种在“专家”之间的分工,是我们成功适应环境的关键,因为它使得技术的发展成为可能。今天我们生活的社会在分工方面更加有独创性,涉及范围更大。

    从那个时候开始,这些变化就有一种内在的连贯性。体积大的脑子需要肉类食物(今天的素食者是靠吃豆类食品而防止缺少蛋白质的)分享食物使得吃肉的习惯变得可能(因为男性的捕猎活动可以失败)分享食物要求有个比较大的脑(如果不能有意识地记住细节,你会很容易就被一个想占便宜的家伙骗了)按照性别分工推动了一夫一妻制(一对配偶现在成了一个经济实体)一夫一妻制导致性选择的时候对于代表青春的身体特征的重视(配偶年轻有更大优势)。理论就是如此这般一圈圈地转,我们用这些螺旋形的让人宽心的理由来证明我们是怎样成为今天这样的。我们用一些非常脆弱的证据,建造了一个一碰就倒的科学房子。但是我们相信这些理论有一天是可以验证的。化石纪录显示不出多少过去动物的行为;那些骨骼太干,哪块被发现也太随机。但是基因纪录会告诉我们更多。自然选择就是基因改变其序列的过程。在改变的过程之中,那些基因留下了一份我们这个星球上40亿年的生命的纪录。它们是比尊敬的毕德(Venerable Bede)(7世纪基督教教士,因撰写基督教早期历史而闻名,有“英国历史之父”的称号。)写的手稿更为珍贵的信息来源,只要我们会解读它们。换一种说法,关于我们的过去的纪录是刻在我们的基因里的。

    基因组中大约2%的成分讲述了我们在生存环境与社会环境方面的进化与黑猩猩的有什么不同。当一个有代表性的人和一个有代表性的黑猩猩的基因组被输入到电脑里,当活跃的基因从背景“噪音”里被分离出来,当人和黑猩猩基因的区别被列成一个表之后,我们就可以瞥见,更新世时期的生存压力是怎样作用在两个具有共同起源的物种上的。人和黑猩猩相同的那些基因的功能是一些基本的生物化学反应和身体的总体设计。也许惟一的区别是那些调节激素与发育的基因。不知怎么一来,那些基因用它们的数码语言告诉人胚胎上的脚长成一个平板的东西,有脚跟,有大脚趾;同样的这些基因却告诉黑猩猩胚胎上的脚去长成一个更加弯曲的东西,不太有脚跟,脚趾更长、更能抓东西。

    试着想象一下基因是怎么做到这些的,就让人思绪起伏。虽然基因控制生长和形态是毋庸置疑的,但是它们是怎样控制生长与形态的?科学才刚刚有了一些最最模糊的线索。人类和黑猩猩之间除了基因的区别以外,两者毫无二致。那些强调人类的文化环境、否认或怀疑人与人之间、人种与人种之间基因区别的重要性的人,也同意人类与其他物种之间的区别主要是基因的区别。假设我们把一个黑猩猩的细胞核注射到一个去掉了细胞核的人类卵细胞里去,并把这个卵细胞植入一个人的子宫,生下来的婴儿(如果它能存活)在一个人类家庭长大,它会长得什么样子呢?你都用不着去做这个极端不道德的实验就会知道:它会长得像个黑猩猩。尽管它一开始有人类的细胞质,用的是人类的胎盘,在人类中间长大,但它长得一点都不会像人。

    摄影提供给我们一个有用的比喻。想象你照了一张黑猩猩的照片。要冲洗它,你要按规定的时间把它放在显影液里,但是不管你怎么费劲,你都不可能通过改变显影液的配方而得到一张人的照片。正如一张底片要被浸在显影液里,影像才能出现,一张用卵细胞中基因的数码语言写就的黑猩猩的设计图,也要有适合的环境才能成为一个成年的黑猩猩——养分、液体、食物、照料——但是它已经有了怎样成为一个黑猩猩的信息。

    同样的道理,在动物行为上就不一定对了。典型的黑猩猩的“硬件”可以在另外一个物种的子宫里组装起来,但是“软件”却有点不那么对劲了。一个被人类养大的黑猩猩的婴儿,会与被黑猩猩养大的“泰山”(美国电影《人猿泰山》里的人物,是一个英国绅士遗留在非洲的孩子,被猩猩抚养长大。)一样,在与自己物种的其他成员相处上有些糊涂。比如说,泰山就不可能学会说话,被人类养大的黑猩猩也不会去学怎样讨好那些居支配地位的动物,怎样去威吓居从属地位的动物,怎样在树上做巢或怎样抓白蚁。在行为上,仅有基因是不够的,起码对黑猩猩是如此。

    但是基因是必需的。线性数码信息中一点小小的区别就能指挥人类和黑猩猩身体上那2%的区别,如果想到这里会让你思绪起伏,那你想象一下这些信息里小小的改变就能够精确地改变黑猩猩的行为,这可能更让你思绪起伏了。我刚才随便提到了不同种类猿的交配系统——常换配偶的黑猩猩,一夫多妻的大猩猩,一夫一妻的人类。我这样做的时候是随便地假设了每个物种都有一个比较典型的做法,而这个假设就要进一步假设这个做法至少是部分受基因的影响和控制的。一堆基因,每一个都是一串四个字母的密码,怎么就能够决定一个动物是有一个还是多个配偶?答案:我一点门儿都摸不着。不过,我不怀疑基因能够做到这一点。

    基因是动物结构的配方,也是动物行为的配方。

    第三号染色体 历史

    我们发现了生命的秘密。 ——弗兰西斯•克里克(1953年2月28日)

    在1902年,阿奇博尔德•加罗德(Archibald Garrod)虽然只有45岁,他已经是英国医学界的一根顶梁柱了。他是著名教授、有爵士头衔的艾尔弗雷德•巴林•加罗德(Alfred Baring Garrod)的儿子。这位教授在痛风病——上流社会最普遍的疾病——方面的理论被认为是医学研究的胜利。阿奇博尔德•加罗德自己的医学生涯也不费力地就得到了认可,后来他因为一战期间在马尔他所做的医疗工作也被封为爵士。之后,他又得到了一项最为荣耀的奖赏:继尊敬的威廉•奥斯勒爵士(Sir William Osle,19世纪末20世纪初医学家、医学教育家,1905年起在牛津大学任教)之后,任牛津大学瑞吉尤斯(Regius)医学教授之职。

    你能够勾勒出他的形象,是不是?他是那种死板的、墨守成规的爱德华时代的人物,硬硬的领子、硬硬的嘴唇、僵硬的思维,挡在科学进步的路上。那你就错了。就在1902年,阿奇博尔德•加罗德提出了一个有些风险的假说,从而证明了他是一个远远领先于他的时代的人,而且在不知不觉中,他的手指已经放在了从古至今生物学最大谜团的答案上了。这个谜团就是:什么是一个基因?事实上,他对基因的理解如此有天才性,在他去世之后很多年才有人开始理解他的想法:一个基因就是一种化学物质的配方。这还不算,他认为自己已经发现了一个基因。

    在伦敦大欧尔茫德街圣巴托洛密欧医院工作的时候,加罗德接触到了一系列患有一种少见但不太严重的疾病——尿黑酸尿症——的病人。这些病人除了有一些如风湿痛之类的不太舒服的症状之外,他们的尿和耳垢遇到空气就会变成红色或是墨黑色,视他们的饮食情况而定。1901年,一个患有这种病的男孩的父母生了他们的第五个孩子,这孩子也有这种病。这让加罗德开始想到,这种病是否是家族遗传的。他注意到这两个病儿的父母是第一代表兄妹。于是他回过头去检查其他的病例,4个家庭中有三个是第一代表亲结婚,那17个尿黑酸尿症病人,有8个互相是第二代表亲。但是这种疾病并不是简单地从父母传给孩子,大多数病人有正常的孩子,但是这种病又会在孩子的孩子身上出现。非常幸运的是,加罗德对于最先进的生物学观念很有了解。他的朋友威廉•贝特森(William Bateson,生物学家)对于格雷戈尔•孟德尔(Gregor Mendel)的研究成果在两年前被重新发现非常激动,正在写一本巨著向公众介绍并捍卫孟德尔“主义”。这样,加罗德知道他是在跟孟德尔所说的隐性性状打交道——一种特性可以被某一代人“携带”,孩子如果从父母双方都得到这种特性的遗传,才会表现出来。他甚至引用了孟德尔用在植物上的术语,称这种人是“化学突变种”。

    这就给了加罗德一个新的想法。他想到,也许这种病之所以只发生在得到父母双方遗传的人身上,是因为有什么东西丢失掉了。因为他对于生物学与化学都很精通,他知道黑色的尿和耳垢是由于一种叫做尿黑酸的物质大量积累而造成的。尿黑酸可能是人体化学反应的一个正常产物,但是在正常人里这种物质会被降解和排出体外。之所以会大量积累,加罗德想,也许是因为降解尿黑酸所需要的催化剂没有正常工作。这个催化剂,他想,一定是用蛋白质做成的一种酶,而且一定是一种遗传物质(现在我们就会说,一个基因)的产物。在那些病人体内,这个基因制造了一种有缺陷的酶;对于那些携带者,这个缺陷没有什么害处,因为他们从父母中健康的一方得到的基因是正常的。

    这样,加罗德的大胆假说“先天代谢错误”就诞生了,假说中包含了一个意义深远的假设:基因是制造化学反应催化剂的,一个基因制造一种功能非常专门的催化剂。也许基因就是制造催化剂的机器。“先天代谢错误,”加罗德写道,“产生于代谢过程中一个步骤的错误,而代谢过程中步骤的错误又产生于一种酶的缺失或不正常的功能。”因为酶是由蛋白质组成的,它们无疑是“个体化学差异的载体”。加罗德的书于1909年出版,受到广泛的好评,但是这本书的评论家们完全曲解了他的思想。他们以为加罗德只是在谈一种罕见的疾病,而没有意识到他谈的是对所有生命都适用的基本原理。加罗德的理论被忽略了35年之后才被重新发现。那时候,遗传学中新的观点爆炸般出现,加罗德已经去世10年了。

    我们现在知道,基因的主要功能是储存制造蛋白质所需的配方。蛋白质则是完成身体内所有化学、结构、调节功能的物质:它们产生能量,抵御感染,消化食物,形成毛发,运输氧气,诸如此类。

    每一个蛋白质都是通过把一个基因携带的遗传密码翻译出来而被制造成的。这句话反过来就不一定对了:有些基因永远也不会被翻译出来用来制造蛋白质,比如说一号染色体上的核糖体RNA。不过就算是这些基因,也是被间接用来制造蛋白质的。加罗德的假说大体上是正确的:我们从父母那里得到的不是别的,是一份规模巨大的配方,用来制造蛋白质和制造蛋白质所用的机器。

    加罗德的同代人也许没有理解他的思想,不过他们起码给了他应有的荣耀。但是对于加罗德站在其肩膀上的那位“巨人”,格雷戈尔•孟德尔,我们却不能说同样的话。很难想象有比加罗德和孟德尔的背景差别更大的两个人了。孟德尔的教名为约翰孟德尔,1822年出生在莫拉维亚(Moravia)(中欧的一个地区,现在归属捷克共和国。下文的奥尔姆茨即为莫拉维亚的一个城市。-译者注)北部一个名为海恩曾多尔夫(现在叫做海诺伊斯)的小村庄。他的父亲安东租了一小片农场,靠给地主干活来抵租。约翰16岁那年,在特洛堡的文法学校里正一帆风顺的时候,父亲被一棵倒下来的树砸到,健康与生计都毁了。安东把农场转手给了自己的女婿,换些钱来支付儿子上文法学校和后来在奥尔姆茨(OlmUtz)上大学的学费。但是这样的生活太艰难了,约翰需要更有钱的人资助。最后,他当了奥古斯丁教派的修道士,开始使用格雷戈尔兄弟这一名字。他在布鲁恩[BrUnn,现在的伯尔诺(Brno)(捷克东南部城市。)产的神学院里完成了学业,成了一名神父。他按照要求做了一段时间的教区神父,不太成功。他又进了维也纳大学学习,试图成为一个科学教师,但是却没有通过考试。

    他又回到了布鲁恩,31岁,一无所成,只能在修道院里生活。他很擅长数学和象棋,有个数学脑子,也很乐天。他还是一个热情很高的园丁,从父亲那里学到了嫁接果树和授粉的方法。就是在这里,在他没有通过正规学习而得到的农业知识里,埋藏着他的洞察力的根源。当时,养牛和养苹果树的人们对于颗粒遗传学的基础已经有了一些模模糊糊的认识,但是没有人系统地研究过它。“没有一个实验的设计与深度能够使得我们有可能确定每一代里不同性状的数量,或是确定它们之间的统计关系”,孟德尔写道。你可以听见,听众已经打起鼻鼾了。

    于是,34岁的孟德尔神父在修道院的花园里,利用豌豆开始了一系列实验,前后持续了8年。这些实验包括了种植3万多棵植物,仅1860年一年就种了6000棵。这些实验最终永远地改变了世界。实验结束之后,他对自己的成就很清楚,而且把它清楚地表达出来,发表在布鲁恩自然科学学会的进展报告上。所有好的图书馆都存有这份刊物,但对他的成就的认可却迟迟没有到来。被提升为布鲁恩修道院的院长之后,孟德尔渐渐对他的花园失去了兴趣,成了一个善良、忙碌却又好像不特别敬神的神父(他在文章里提到美味佳肴的次数比提到上帝的次数还多)。他生命的最后岁月耗在了一场越来越痛苦与孤独的反对政府对修道院增收一项新的税收的运动里。孟德尔是最后一个需要交这项税的院长。在他的黄昏岁月里,也许他曾经想到过,他这一生最大的成就,可能是让一个音乐学院里天才的19岁男孩里奥•亚那谢克(Leos Janacek)(19世纪末20世纪初作曲家)当了布鲁恩合唱团的指挥。

    在花园里,孟德尔做了一些杂交实验:把不同种的豌豆拿来杂交。但是这可不是一个业余科学家的游戏,这是一个大规模的、系统的、认真设计出来的实验。孟德尔选择了七对不同种类的豌豆来杂交,圆粒的与皱粒的杂交;黄子叶的与绿子叶的杂交;鼓豆荚的与瘪豆荚的杂交;灰色豆皮的与白色豆皮的杂交;未成熟时豆荚是青色的与未成熟时豆荚是黄色的杂交;在轴上开花的与在顶端开花的杂交;长秆的与矮秆的杂交。他还杂交了多少对其他种类的豌豆,我们不得而知。这七对性状都是代代相传的,也都是由一个单个基因决定的,所以,他肯定是已经从初步结果中知道了可能的结果是什么,才选择了这七对。每一对杂交出来的后代都跟双亲中的一个一模一样。双亲中的另一个的特征似乎消失了。其实没有:孟德尔让那些杂交后代自我繁殖之后,消失的特征又在大约四分之一的“孙子”辈里出现了。他数了又数,第二代的19959棵植物中,显性特征与隐性特征的比例是14949比5010,大约是2.98比1。如罗纳德•费希尔爵士(Sir Ronald Fisher)(罗纳德•费希尔:20世纪英国统计学家、遗传学家,对统计学在生物学里的应用做出了巨大贡献。)在下一个世纪里说的,这个比例跟3接近得令人起疑。别忘了,孟德尔数学很好,而且在做实验之前,他就知道他的豌豆们应该遵从什么样的数学公式。

    像一个中了邪的人一样,孟德尔从豌豆又转向倒挂金钟和玉米等其他植物,并得到了同样的结论。他知道他发现了遗传学方面非常重要的东西:遗传的特征不会混杂起来。在遗传里有一些结实的、不可分的、量子化的、颗粒化的东西。遗传物质没有像液体一样均匀融合起来,没有像血液一样融在一起,相反,遗传物质像很多很小的宝石颗粒,暂时地混杂在一起了。事后看起来,这个原理一直是很明显的。否则,怎么解释一个家庭里可以既有蓝眼睛的孩子又有棕眼睛的孩子?达尔文虽然把自己的理论建立在遗传特性的融合性上,但是他几次暗示过这个问题。“近期以来,我倾向于猜想,”他在1857年写信给赫胥黎道:“模模糊糊、粗略地猜想,将来我们会发现,通过受精卵而完成的繁殖,是两个独特的个体的一种混合,却不是一种真正的融合……。除此之外,我想不出其他原因去解释为什么两性繁殖的后代与它们的前辈如此之相像。”

    在这个问题上达尔文很是紧张。此前他刚刚被一个苏格兰的工程学教授猛烈地抨击过。这个教授有个奇怪的名字:弗里明•詹金(Fleeming Jenkin)。他指出了一个简单而又无懈可击的事实,那就是自然选择与遗传特性的融合性是互相矛盾的。如果遗传确是通过把遗传物质均匀融合起来而完成的,那么达尔文的学说就不太可能是正确的,因为每一个新的、有生存优势的变化都会被其他因素给稀释掉。詹金用了一个故事来阐明他的观点,一个白人想通过与一个岛上的黑人生孩子而把这个岛上的人群变白。他的白人的血很快就会被稀释到无足轻重的地步。从内心说,达尔文知道詹金是对的。连素来火暴的托马斯•亨利•赫胥黎面对詹金的观点也默不作声。但是达尔文也知道,他自己的理论也是正确的。他不知道应该怎样调和这个矛盾。如果他能读到孟德尔的学说就好了。

    事后再看,很多事情都非常明显,但是仍然需要一个天才来戳穿这层纸。孟德尔的成就在于他揭示了:大部分遗传特性看上去像是融合得很好的东西,惟一的原因,是这些遗传特性是由多种“颗粒”决定的。19世纪早期,约翰•道尔顿(John Dalton,物理学家、化学家)已经证明了水是由亿万个坚硬的、不可再分的小东西——原子——组成的,从而击败了他的对手——持有连续性理论的人们。现在,孟德尔证明了生物学里的“原子理论”。生物学里的原子在过去可能被起了很多五花八门的名字,在20世纪的第一年里用过的名字就有要素、原芽、质粒、全因子、生源体、依德、异丹。不过,流传下来的是“基因”这个名字。

    44从1866年起,在四年的时间里,孟德尔不断地把自己的论文和想法寄给慕尼黑的植物学教授卡尔一魏海姆•尼亚戈利(Karl-Wilhelm Nageli)。他越来越大胆地指出自己的发现的重要性。但是在四年的时间里尼亚戈利总是误解他的意思。他居高临下地给这位执著的修道士写去礼貌的回信,告诉他去研究山柳兰。就算一个人再努力也不可能给出比这个更捣乱的建议了。山柳兰是单性生殖的,也就是说,它虽然需要花粉才能生殖,却不接受传给它花粉的“同伴”的基因。这样,杂交实验就会得出奇怪的结果。与山柳兰斗争了一阵之后,孟德尔放弃了,转而研究蜜蜂。他在蜜蜂上做了大量实验,所要结果却从来没有被找到。他是否发现了蜜蜂特殊的单倍二倍体的遗传方式呢?(雄性蜜蜂每一条染色体只有一份,是单倍体;雌蜂则每条染色体有两份,是二倍体。)

    与此同时,尼亚戈利发表了他自己论遗传学的长篇巨著。在他的文章里,他提到的自己的一项工作是孟德尔理论的一个绝好例子,但是他仍然没有明白孟德尔的理论,也没有在文章中提到孟德尔的发现。尼亚戈利知道,如果你把安哥拉猫与另一种猫交配,安哥拉猫特有的皮毛就会在下一代里消失得干干净净,但是在再下一代里又会重新出现。很难找到比这更好的例子来说明孟德尔所说的隐性性状了。

    不过,在他的有生之年,孟德尔差点儿就得到了认可。查尔斯•达尔文通常是很惯于从别人的工作里得到灵感的。他甚至给自己的一个朋友推荐过一本福克(W.O.Focke)写的书,里面引用了14篇孟德尔的文章。可是达尔文自己却好像根本没有注意到这些。孟德尔的命运是在他与达尔文都去世多年之后,在1900年被重新发现的。这是在三个不同地点几乎同时发生的。重新发现他的人——雨果•德弗里斯(Hugo DeVries)、卡尔•克伦斯(Carl Correns)和埃里奇•冯•丘歇马克(Erichvon Tschermak),三个都是植物学家,每一个人都是辛辛苦苦地在不同物种上重复了孟德尔的工作之后,才发现了孟德尔的文章。

    对于生物学界,孟德尔理论来得太突然了。进化理论中没有任何东西要求遗传“一块一块”地发生。事实上,孟德尔的学说仿佛是在破坏达尔文费尽力气试图建立的所有理论。达尔文说过,进化就是自然选择之下微小的、随机的变化的累积。如果基因是45硬邦邦的小东西,如果遗传特性可以在隐藏了一代之后又完整地出现,那么它们如何能够逐渐地、微妙地变化呢?从很多角度来说,20世纪初人们看到的是孟德尔学说打败达尔文学说。当威廉•贝特森说,颗粒遗传学的作用起码是限制了自然选择的作用时,他说出了很多人的想法。贝特森是个脑筋糊涂文风枯燥的人。他相信进化是跳跃性的,从一种生命形式跳到另一种,没有中间的过渡。为了证明这个离奇的理论,他在1894年出版了一本书,

    阐述到遗传是颗粒性的。为此,他从那以后一直受到“真正”的达尔文主义者的强烈攻击。如此说来,他对孟德尔学说张开双臂欢迎并第一个把它译成英文,就毫不奇怪了。“在孟德尔的发现里,没有任何东西是与正统的理论——亦即物种产生于自然选择——相矛盾的”,贝特森就像一个自称是惟一能够诠释圣保罗的神学家那样写道:“无论如何,现代科学的探索毫无例外地是为了除掉我们总结出来的自然规律里那些‘超自然’的成分,虽然有些时候这些探索本身就带有‘超自然’的烙印。坦率地说,不能否认,达尔文著作中的某些章节在某种程度上鼓励了对于自然选择原理的曲解与滥用。但是我感到安慰的是,我相信,如果达尔文有幸读过孟德尔的大作,他一定会立刻修改这些章节。”

    但是,正是因为这个大家都不喜欢的贝特森如此推崇孟德尔,欧洲的进化论学者们才对孟德尔的学说很是怀疑。在英国,孟德尔学派与“生物统计”学派之间激烈的冲突持续了20年。这个冲突传到了美国,不过在美国,两派之间的争论不那么激烈。1903年,美国遗传学家沃特•萨顿(Walter Sutton)发现,染色体的行为就像是孟德尔式的遗传因子:它们是一对一对的,每一对里一条来自父方一条来自母方。托马斯•亨特•摩尔根(Thomas Hunt Morgan),美国遗传学之父,了解到这个发现之后,就及时地“皈依”了孟德尔“教派”。于是,讨厌摩尔根的贝特森就放弃了自己原本正确的立场,转而攻击这个有关染色体的理论。科学的历史就是常常被这种无聊的争吵决定的。贝特森最终变得默默无闻,而摩尔根却干成了一些大事:他创立了一个成果显赫的遗传学派,遗传学上的距离单位——厘摩尔根——也是借他的名字命名的。在英国,直到1918年,罗纳德•费希尔才用自己敏捷的数学头脑消除了孟德尔学说和达尔文学说之间的矛盾。孟德尔学说非常漂亮地证明了达尔文学说的正确性,根本没有与其抵触。“孟德尔学说,”费希尔说:“给达尔文建起来的那所建筑补上了缺失的部分。”

    但是,突变的问题还是没有解决。达尔文的学说要求遗传的多样性,孟德尔的学说却提供了稳定性。如果基因就是生物学里的“原子”,改变它们岂不是像炼金术那样成了异端邪说?在这方面的突破,来自于第一次人工诱发的突变,这是由一个跟加罗德和孟德尔非常不同的人完成的。在爱德华时代的医生与奥古斯丁教派的修道士旁边,我们还得再加上一个好斗的赫尔曼•乔•穆勒(Hermann Joe Muller)。20世纪30年代,有许多聪明的犹太科技人才跨过大西洋,到美国避难,穆勒与这些人几乎各个方面都一样,只除了一点:他是向东走的。他是土生土长的纽约人,一个小型金属铸造公司老板的儿子。在哥伦比亚大学他开始热爱遗传学,但因为跟导师摩尔根合不来,在1920年去了得克萨斯大学。在对待天才的穆勒的时候,摩尔根的态度也许是有一丝排犹主义的痕迹,但是穆勒跟人闹矛盾,却是再典型不过的事。他的一生都不断跟这个吵跟那个吵。1932年,他的婚姻触礁,他的同事窃取他的思想(他自己是这么说的),他自杀未遂之后,离开得克萨斯去了欧洲。

    使穆勒得到诺贝尔奖的重大发现是基因突变可以人工诱发。这就像是欧内斯特•卢瑟福(Ernest Rutherford)先他几年而发现的,原子是可以嬗变的。也就是说,在希腊文里意思为“不可分割”的“原子”这个词,是不合适的。1926年,穆勒问自己:“在所有生命过程中,突变是否真的有一个与其他过程都不一样的特点:它是否真的不可被人工改变和控制?它是否占有一个与物理学中最近才被发现的原子嬗变相当的位置呢?”

    第二年,他回答了这个问题。通过用大剂量的X射线去“轰炸”果蝇,穆勒使它们的基因产生了突变,它们的后代出现了新的畸形。他写道:突变,“并不是一个远不可及的上帝,站在细胞遗传物质里一座坚不可摧的堡垒里跟我们开开玩笑”。就像原子一样,孟德尔的遗传颗粒一定也有一些内在的结构。它们可以被X射线改变。突变之后它们仍然是基因,只是不再是以前的基因了。

    人工诱发突变是现代遗传学的开始。1940年,两个科学家,乔治•比德尔(George Beadle)和爱德华•塔特姆(Edward Tatum),用穆勒的X射线方法造出了红面包霉菌的突变种。然后,他们发现新的突变种无法制造一种化学物质,因为它们体内有一种酶没有正常功能。他们提出了一条生物学定律,后来被证明是基本正确的:一个基因确定一种酶。遗传学家们开始不出声地唱起来了:“一个基因,一种酶。”这其实是加罗德的旧的假说以现代的、生物化学的方式的具体表达。三年之后,莱纳斯•鲍林(Linus Pauling,化学家,因在化学键和复杂分子结构方面的工作获得1954年的诺贝尔化学奖,因在反对核武器试验、扩散方面的贡献获得1962年的诺贝尔和平奖)做出了惊人的推断:一种很严重的贫血症的病因,是制造血红蛋白的基因出了错误,这种病的病人主要是黑人,他们的红细胞变成了镰刀形。这个基因错误表现得像是一个真正的孟德尔式突变。事情慢慢地明显起来了:基因是蛋白质的配方;突变就是改变了的基因制造出来的改变了的蛋白质。

    这个时候,穆勒并不在人们的视野里。1932年,他对社会主义的狂热和同样的对于有选择地繁衍人类(即优化人种论)的狂热,使他渡过大西洋去了欧洲。他希望看到精心繁殖出来的、具有马克思或列宁的特征的儿童,不过在他的书的较晚版本里,他识时务地将这一点改成了林肯或笛卡儿。他在希特勒掌权之前的几个月到了柏林。在那里,他惊恐万状地看到了纳粹分子砸毁了他的老板奥斯卡•沃格特(Oscar Vogt)的实验室,因为沃格特没有赶走在自己手下工作的犹太人。

    穆勒又向东走了一步,到了列宁格勒尼柯莱•瓦维洛夫(Nikolay Vavilov)的实验室。刚到不久,反对孟德尔学说的特洛菲姆•李森科(Trofim Lysenko)就得到了斯大林的青睐,开始迫害相信孟德尔理论的遗传学家,以巩固他自己的疯狂理论。他的理论宣称,麦子就像俄罗斯人民的灵魂一样,不必通过繁殖,只要通过训练就可以让它们适应新的环境。对于不同意这种理论的人,不应该劝说,而应该将他们枪毙。瓦维洛夫死在监狱里了。还抱有幻想的穆勒把自己的有关优化人种论的新书送了一本给斯大林。但是,听说书并没有受斯大林赏识之后,他找了个借口及时离开了苏联。他参加了西班牙内战,在国际纵队的血库工作。后来他又去了爱丁堡,跟往常一样走霉运,刚到就赶上了第二次世界大战的爆发。他发现,在没有电力供给的苏格兰冬天,在实验室里戴着手套做科研很难。他绝望地想回到美国。但是谁也不想要一个好斗易怒的社会主义者,课讲得不好,还在苏联住过。

    最后印第安纳大学给了他一份工作。第二年,他因为发现人工诱发突变而获得了诺贝尔奖。但是,基因仍然是不可捉摸的神秘玩意。基因本身是由蛋白质制造的,这就使得它能够决定蛋白质的结构这一能力显得更让人摸不着头脑,细胞里好像没有其他东西比基因更复杂更神秘了。没错,染色体上倒是有些很神秘的玩意:那个乏味的被称为DNA的核酸。1869年在德国的图宾根(Tubingen),一个名字叫做弗雷德里克•米歇尔(Friedrick Miescher)的瑞士医生,从受伤的士兵那些充满脓血的绷带里第一次分离出了DNA。米歇尔本人猜到了DNA可能是遗传的关键。1892年他写信给他叔叔的时候表现出惊人的先见之明:DNA也许传递了遗传信息,“就像在很多语言中,24到30个字母就能组成词和概念”。但是,那时候没有人注意DNA;它被认为是一种比较单调的物质:只有四种不同的“字母”,它怎么可能带有遗传信息?

    因为穆勒的缘故,一个19岁就拿到了学士学位的早熟的年轻人去了印第安纳。他就是詹姆斯•沃森。看上去他一定不像是一个解决基因这个问题的人,但他就是解决了。在印第安纳大学,像我们可以预料的那样,他跟穆勒处不来,于是他师从了意大利移民萨尔瓦多•卢里亚(SalvadorLuria)。沃森建立了一种近乎偏执的信念:基因是由DNA而不是蛋白质组成的。为了寻找证据,他去了丹麦,之后又因为对他的那些丹麦同事不满意,在1951年10月去了剑桥。机遇把他扔到了卡文迪什(Cavendish)实验室,在那里他遇到了弗兰西斯•克里克,拥有同样天才的头脑,对于DNA的重要性也是同样坚信不疑。

    之后的事情已经载入史册。克里克是早熟的反面。当时他已经35岁,却还没有拿到他的博士学位。一颗德国的炸弹炸毁了他在伦敦大学学院的仪器,使得他无法测量热水在高压下的黏性。对他来说,这倒是一种解脱。他离开自己停滞不前的物理学生涯,往生物学方面挪了几步,但是也没有得到什么成功。开始,他被剑桥的一个实验室雇用,测量细胞在外力之下吞噬了一些颗粒之后的黏性。他从这份枯燥的工作逃了出来,在卡文迪什实验室忙着学习晶体学。但是他没有耐心整天只关注自己的研究,也没有耐心只研究小的问题。他的大笑、他的自信的智慧、他的喜欢告诉别人人家正在研究的问题的答案,使他在卡文迪什实验室开始讨人嫌了。克里克也对多数人对于蛋白质的着迷隐隐地有些不满。基因的结构是个大问题,他猜测到,DNA也许是答案的一部分。受沃森的“勾引”他放弃了自己的研究,开始沉迷于DNA这个“游戏”这样,科学史上一个伟大的合作诞生了:年轻、雄心勃勃、头脑敏捷的美国人懂一些生物学,一点不费劲就成了天才却无法专注的英国人懂一些物理学。他们的合作充满友好竞争,因此也十分高产。这简直是放热反应。

    短短几个月之内,利用别人辛苦收集来却没有分析透彻的数据,他们做出了也许是从古至今最伟大的科学发现之一:他们发现了DNA的结构。即使是阿基米德从浴缸里跳出来那次,都不如沃森和克里克更有资格炫耀。克里克是这么炫耀的——1953年2月他在“鹰”酒吧里说:“我们发现了生命的秘密。’沃森被这个说法吓坏了,他还是担心他们的研究是否有什么错误。

    但是,他们没有错。一切都突然间清楚了:DNA带有一种密码,写在一条精巧的、缠绕在一起的双螺旋阶梯上,还可以是无限长的。靠着它的字母之间的化学亲和力,这个密码能够复制自己,并且清晰地写明了制造蛋白质的配方,这是通过一本当时还没有被发现的“密码手册”在DNA与蛋白质之间建立起对应关系而完成的。DNA结构的惊人成功,在于它让一切都显得那么容易,却又非常具有美感。正如理查德•道金斯所说:“在沃森一克里克之后,分子生物学的真正革命在于它变成了数码式的,……基因的‘机器语言’不可思议地与计算机语言接近。

    沃森一克里克的DNA结构发表之后一个月,英国新女王加冕,在同一天,一个英国探险队征服了珠穆朗玛峰。除了《新闻纪事》上的一条小消息外,DNA双螺旋结构的发现都没能上报纸。而今天,大多数科学家都认为它是20世纪甚至是1000年来最重要的发现。

    DNA结构发现之后,接踵而来的是很多年的让人心烦的迷惑。那个密码本身,基因借助来表达自己的那个语言,固执地保守着它的神秘。对于沃森和克里克来说,找到密码几乎是太容易了,只需要把猜测、物理学知识和灵感结合起来。破译密码却需要真正的天才。很明显这个密码是由四个字母组成的:A、C、G和T。

    而且几乎可以肯定地说,就是这个密码被翻译成了有20个字母的氨基酸,氨基酸又组成了蛋白质。但是,这是怎样完成的?在哪里、以什么方式完成的?

    在领着我们到达了最终答案的那些思路里,大多数思路来自于克里克,包括被他称为接合分子的东西-我们今天称为转导RNA。在没有任何证据的时候,克里克就认定这样的分子肯定是存在的。最后,它老老实实地露面了。不过,克里克也有过一个如此之好的想法,被称为是历史上最伟大的错误理论。克里克的“没有逗号的密码”理论比自然母亲所用的方法要优美得多。它是这样的:假设这个密码的每一个词有三个字母(如果只有两个,那么总共只能有16个不同的词,不够用)。假设密码里没有逗号,词与词之间没有空隙。现在,假设这个密码不包括那些如果你从错误的地方51开始读就会读错的词。打个布赖恩•海斯(BrianHayes)(布赖恩•海斯:美国当代科普作家,精通计算机。一译者注)用过的比方吧,先想出所有用A、S、E和T这四个字母组成的英文词:ass、ate、eat、sat、sea、see、set、tat、tea、tee。现在,把那些从错误的起点开始读就会读错的词去掉。比如说,ateateat可以被读成ateateat,也可以被读成ateateat,还可以被读成ateateat。在密码里这三种读法只能有一种。

    CCC、GGG和TTT,然后,把剩下的60个词每三个并成一组。每一组里的三个词都含有同样的字母,字母的顺序是循环的。比如,ACT、CTA和TAC是一组,因为在每一个词里面,C都跟在A后面,T跟在C后面,A跟在T后面。ATC、TCA和CAT就是另外一组了。每一组里只有一个词是用在密码里的。这样,就整整有20个词。别忘了,蛋白质的字母表里恰好有20个由氨基酸组成的字母!一个四个字母的密码给出了一个20个字母的字母表。

    克里克想让人们不要对他的理论太过认真,但他是徒劳了。“在破译密码上,我们现在的假设和推断依据不足,从理论上说,我们不应对这些推断抱有太大的信心。我们做出这个推断,只是因为它能够从合理的物理学假设出发,以一种简洁的方式给出‘20’这个有魔力的数。”但是,DNA的双螺旋结构在一开始也没有什么证据啊。兴奋的情绪出现了。有5年的时间,人人都觉得克里克的理论是正确的。

    但是,专注于理论的日子过去了。1961年,其他人都还在琢磨理论的时候,马歇尔•尼伦伯格(Marshall Nirenberg)和约翰•马太(Johann Matthaei)(尼伦伯格是20世纪美国生物学家,因为在破译遗传密码以及对于遗传密码在蛋白质合成中的作用的研究获得1968年诺贝尔生理学和医学奖。他的工作初始阶段是与德国科学家马太共同进行的。一译者注)破译了密码中的一个词。他们的方法很简单:只用U(尿嘧啶,相当于DNA里的T)造了一条RNA链,然后把它扔进了氨基酸溶液里。在这个溶液里,核糖体把苯丙氨酸缝合在一起,造出了一个蛋白质。这样,遗传密码里的第一个词被破译了:尿嘧啶代表苯丙氨酸。“没有逗号的密码”理论到底是错误的。这个理论最美的地方就在于它不会出现读码移位突变,这种突变可以由于一个字母的丢失使得这个字母之后的所有信息都失去意义。但是,大自然却选用了另一种方法,虽然稍欠优雅,却能够经受住其他错误。它含有很高的重复性:一个意思可以用很多三个字一组的词表达。

    到了1965年,所有的遗传密码都已经知道了,现代遗传学也开始了。60年代的前沿突破,到了90年代已经成了常规实验。因此,在1995年,科学可以重新回到阿奇博尔德•加罗德的那些早已去世的尿黑症病人那里,确信地说出,是哪一个基因上的哪一个“拼写”错误导致了尿黑酸尿症。这个故事是20世纪遗传学的一个缩影。别忘了,尿黑酸尿症是一种非常少见又不太有危险的疾病,用调整饮食的方法就可以比较轻易地治好。所以有很多年,科学家都没有去碰它。在1995年,两个西班牙人被它在历史上的重要性所吸引,开始了对它的研究。他们在曲霉真菌里造出了一种突变种——在苯丙氨酸的存在下,这种突变种体内会积存大量的紫色色素:尿黑酸。与加罗德的推测一致,在这个突变种里有一种蛋白质是有功能缺陷的,它叫做尿黑酸双加氧酶。这两个人用一些特殊的酶把曲霉真菌的基因组打成碎片,找出与正常霉菌基因组不同的片段,然后把这些片段里的密码读出来。这样,他们最终抓住了出问题的基因。之后,他们搜索了人类基因的资料库,试图发现是否有一个类似的人类基因可以与曲霉真菌里这个基因结成一对。他们找到了。在三号染色体的长臂上,有一段DNA字母与那个真菌里的基因的字母序列有52%的相似。从尿黑酸尿症患者体内找到这个基因,并把它和正常人体内的同一基因相比较之后,我们发现患者的这个基因在第690个字母或第901个字母上与正常基因的不同,是致病的关键。每一个病人都是因为这两个字母中的一个出了错,而导致这个基因造出的蛋白质不能发挥正常功能。

    这个基因是那些乏味基因的一个典型:在一个没意思的身体器官里造一种没什么意思的蛋白质,一旦出了问题,会导致一种没什么意思的疾病。它没有任何一方面给人惊奇或是有什么特殊之处。它跟智商或同性恋倾向没有任何关系,它没有向我们揭示生命的起源,它不是“自私的基因”它老实地遵守孟德尔定律,它既不会致死也不会致残。不管出于什么目的要达到什么目标,你都不得不承认,它在地球上所有生命里都是一样的,连面包霉菌里都有它,而且它在那里的功能跟在我们体内的功能一样。但是,制造尿黑酸双加氧酶的这个基因无愧于它在历史上占的小小的地位,因为它的故事就是遗传学本身的故事。这个没什么意思的小小基因揭示出来的美,会让格雷戈尔•孟德尔都感到炫目,因为它是他的定律的具体表现,它讲述的故事不仅是关于那些微观的、缠在一起的、结构对称的双螺旋的,也是关于那些由四个字母组成的密码的,而且还是关于所有生命在化学上的一致性的。

    第四号染色体 命运

    先生,您告诉我们的这些,只不过是科学的加尔文主义。 ——一位姓名不详的士兵在一场通俗讲座之后对威廉·贝特森说

    打开任何一份人类基因名录,你面对的,不是人类到底有多少潜能,而是一个疾病的名单。这些疾病,大部分是以一两个名不见经传的中欧医生的名字命名的。这个基因会导致尼曼—皮克氏病,那个基因能导致伍尔夫—赫茨霍尔综合症,如此种种。你会得到这么一个印象:基因是用来导致疾病的。“新的导致精神症状的基因”,一个关于基因的网站这样宣布来自科研前沿的最新消息:“导致早发性肌无力的基因、导致肾脏癌的基因被成功分离;幼儿自闭症与血清素传输基因有关;一个新的老年痴呆症基因;偏执行为的遗传学。”

    但是,用它们可能导致什么疾病来定义基因,跟用人体器官能得什么病来定义这些器官一样,有些荒唐。好像是在说:肝脏的功能是得肝硬化,心脏的功能是得心脏病,大脑的功能是中风。基因名录之所以如此,不是因为它反映了我们对于基因的了解,而是反映了我们对于基因的无知。对于某些基因来说,我们对于它们的了解仅限于它们出故障的时候会导致什么疾病,这是事实。但这只是关于这些基因的所有知识里细微得可怜的一个信息,而且还误导性极大。它导致这样一个简单的说法:“某人有伍尔夫—赫茨霍尔综合症的基因。”错!所有人都有伍尔夫—赫茨霍尔综合症的基因,除了那些——这听起来有点滑稽——有伍尔夫—赫茨霍尔综合症的病人。他们之所以有这种病,是因为这个基因从他们身体内丢掉了。在剩下的人里,这个基因起的是积极的而不是消极的作用。病人有病不是因为他们有什么特殊基因,而是他们有正常基因的突变种。

    伍尔夫—赫茨霍尔综合症如此少见又后果严重——也就是说,它的基因的作用非常关键——病人通常很年轻就死去了。但是坐落在四号染色体上的伍尔夫—赫茨霍尔综合症基因,事实上却是“致病基因”里最著名的一个,因为它与另一种非常不同的病也是联系在一起的:亨廷顿舞蹈病。这个基因的一个突变种导致亨廷顿舞蹈病;这个基因的整个丢失导致伍尔夫—赫茨霍尔综合症。我们不太了解正常情况下这个基因每天的功能是什么,但是我们对于这个基因可以怎样出错、为什么出错、在哪里出错,以及出错之后对于我们的身体后果是什么,却有无比清晰的了解。这个基因含有一个词:CAG、CAG、CAG、CAG,……这个词被重复了很多次。有时候这个词被重复6次,有时候30次,有时候100多次。你的命运、你的神智、你的生命,就都悬在这条重复的线上。如果这个词重复35次或以下,你就没事。大多数人体内这个词是重复10~15次。如果这个词重复39次以上,到了中年之后你就会慢慢开始失去平衡能力、生活逐渐变得不能自理,最后过早死亡。能力的下降先是表现在智力开始出现轻微的问题,这之后,四肢出现震颤,最后出现深度抑郁,间或有幻觉和妄想。得了这种病是没法“上诉”的:这种病无法医治。但是,这种病的病人死之前要受15~25年的折磨。很少有什么命运比这更悲惨了。事实上,一旦家族里有人出现了这种病的早期症状,那种恐惧感对于很多自己还没有得病的人来说,也是很严重的:等待疾病袭来的时候,那种压力和紧张,简直是摧毁性的。

    致病的原因在基因里,而不是其他任何地方。你要么带有亨廷顿突变,会得病;要么没有亨廷顿突变,不会得病。这种决定论、这种事先注定的命运,是加尔文做梦也没想到的。乍看上去,这简直是基因决定论的终极证明,基因决定一切,我们对其无可奈何。你吸烟也好,补维生素也好,有锻炼习惯也好,整天窝在沙发上看电视也好,都没关系。亨廷顿舞蹈病在什么年龄发作完全是由那一个基因上CAG这个词被重复的次数决定的,一点通融余地都没有。如果一个人带有39次重复,那么,有90%的可能是他在75岁的时候已经成了痴呆,按平均值来看,他会在66岁的时候出现这个疾病的第一个症状;如果带有40次重复,那么平均是在59岁发病;41次重复,54岁发病;42次重复,37岁发病。如此类推下去。那些带有50次重复的人,平均在27岁就会因病失去正常思维。这样打个比方:如果你的染色体长得能够绕赤道一圈,那么健康与发疯之间的区别只差多长出的一英寸。

    哪一种占星术也不可能如此准确,哪一种人类活动因果关系的理论也没有这么精确,不管这理论是弗洛伊德的、是马克思的、是基督教的,还是泛灵论的。《圣经·旧约》里的先知们,古代希腊那些内视的神喻代言人,英国伯尼奥·瑞吉斯(Bognor Regis)码头上那些玩着水晶球的吉卜赛算命的,不仅没有谁能够预言一个人的生活会在哪一年被毁掉,他们根本就没有假装过自己有这个能力。我们现在在对付的是一种恐怖的、残酷的、无法改变的预言。在你的基因组里有大约10亿个3个字母的词,但是,这一个词的重复次数,就决定了你是正常还是发疯。

    在1967年,歌星伍迪?格思里(Woody Guthrie)死于亨廷顿氏病,之后这种病就变得尽人皆知、臭名昭著。在1872年,它被一位名叫乔治·亨廷顿(George Huntington)的医生在长岛(Long Island)东端首次诊断出来。他注意到这种病似乎是在家族里传播的。他之后的研究发现,长岛的那几个病例是发源自新英格兰(New England)的一个大家族的一部分。在这个家族12代的历史里,可以找到1000多个病人。所有这些病人都是两个兄弟的后代,这两个人是1630年从萨佛克(Suffolk)移民来的。他们的后代中,有几个人在1693年被当成是巫婆,在萨勒姆(Salem)(萨佛克是英国东部的一个郡,新英格兰是美国东北部几个州的总称,萨勒姆则为新英格兰地区的一个城市)被烧死了。这也许是因为她们得病的症状太吓人。但是,因为这种病的症状要在病人到了中年之后才出现,也就是说,当病人有了孩子之后,所以致病的基因突变没有被自然选择淘汰掉。事实上,有几个研究还发现,带有致病的基因突变的人比起他们的没有病的兄弟姐妹来,生孩子生得更多。

    亨廷顿氏病是我们发现的第一个完全显性的人类遗传病。这意味着它跟尿黑酸尿症不一样。要出现尿黑酸尿症的症状,你必须有两份致病突变,从你双亲那里各得一份。而对于亨廷顿氏病,一份致病突变就够了。如果这个突变是来自于父亲,病就好像更加严重。在这个父亲所生的子女里,出生得越晚的孩子,基因里重复的次数越多,突变越严重。

    20世纪70年代晚期,一个意志坚定的妇女决心要找出亨廷顿氏病的基因。伍迪?格思里因亨廷顿氏病而痛苦地死去之后,他的遗孀建立了抗亨廷顿舞蹈病委员会。一个名叫米尔顿?韦克斯勒(MiltonWexler)的医生加入了她的行列,这位医生的太太和她的三个兄弟都有这种病。韦克斯勒的女儿南希(Nancy)知道自己有50%的可能带有致病突变,她着了魔一样想找到这个基因。别人劝她:还是算了,这样一个基因可能是找不到的,找这个基因就好像是在一个跟美国一样大的草堆里找一根针,她应该等几年,等科技进步之后有可能找到这个基因的时候再说。“但是,”她写道:“如果你有亨廷顿氏病,你没有时间等。”在看到一个委内瑞拉医生阿米里柯?尼格里特(AmericoNegrette)的报告之后,她在1979年飞到委内瑞拉,访问了马拉才博湖边的三个村庄:圣路易斯、巴伦其塔和拉古尼塔(SanLuis,Barranquitas,Laguneta)。马拉才博湖(LakeMaracaibo)实际上是个巨大的被陆地环绕的海湾,位于委内瑞拉的西端,在科尔地勒拉?德米里达(CordilleradeMerida)以西。

    这个地区有一个非常大的家族,在家族里亨廷顿氏病的发病率很高。据家族成员之间流传的故事,这种病是从18世纪的一个水手那里来的。韦克斯勒成功地把他们的家族病史追溯到19世纪早期一个名叫玛利亚?康色普申(MariaConcepcion)的妇女那里。这位妇女生活在帕布罗?德阿古阿(PueblosdeAgua),那里有一些由高高地立在水上的房屋组成的村庄。她是个多产的女人,她之后的八代一共有1.1万人,在1981年的时候仍有9000人活着。在韦克斯勒去访问的时候,他们中的371人患有亨廷顿氏病,另外有3600人发病的可能性在四分之一以上,因为他们的祖父母里至少有一人患有亨廷顿氏病。

    韦克斯勒有着超人的勇气。她本人就可能带有致病突变,“看着这些欢蹦乱跳的孩子,真是让人心碎,”她写道:“尽管贫穷,尽管不识字,尽管男孩子要乘着小船在波涛翻滚的湖上打鱼,又劳累又危险,尽管那么小的女孩子就要操持家务照顾生病的父母,尽管无情的疾病夺去了他们的父母、祖父母、姑姑、叔叔、表兄表妹……,他们仍然满怀希望,快乐地尽情地生活——直到疾病袭来。”

    韦克斯勒开始在草堆里捞针了。第一步,她采集了500个人的血样。“炎热、喧嚣的采血的日子。”然后,她把血样送到了吉姆?居塞拉(JimGusella)在波士顿的实验室。他开始通过测试基因标志的办法来寻找致病基因:随机选择一些DNA片段,可能与正常DNA相同,也可能不同。好运向他微笑了。到1983年年中,他不仅分离出了一个与致病基因距离很近的标志,而且确定了它是在四号染色体短臂的顶端。他知道这个基因是在基因组里那百万分之三的序列里。完事大吉了吗?没有这么快。这个基因所在的区域有100万个字母长。草堆变小了些,但还是很大。八年之后,这个基因仍然是个谜。“这项工作是无比辛苦的,”韦克斯勒的语气像是维多利亚时代的探险者:[4]“四号染色体顶端这个地区,环境极其险恶。过去的八年,我们就像是在攀登珠穆朗玛峰。”

    持之以恒得到了回报。1993年,这个基因终于被找到了。它的内容被读出来了,致病的突变被确认了。这个基因所含的配方可以制造一种被称做亨廷顿蛋白的蛋白质:因为蛋白质是在基因之后发现的,所以蛋白质就以基因命名了。CAG这个词在这个基因中部的重复,使得蛋白质的中部有一长串谷氨酰胺(在基因语言里,CAG的意思是谷氨酰胺)。对于亨廷顿氏病来说,蛋白质这一部分的谷氨酰胺越多,发病的年龄越小。

    这个对亨廷顿氏病的解释,看上去很没有说服力。如果亨廷顿蛋白的基因有问题,那为什么它在病人生命的前30年里没有表现出异常?很明显,亨廷顿蛋白的突变型是逐渐积累起来的。与早老性痴呆和疯牛病一样,在细胞里逐渐积累起来的这些黏糊糊的蛋白质团团,最后导致了细胞的死亡。这团蛋白质可能诱导了细胞的“自杀”。在亨廷顿氏病里,这主要发生在大脑里控制运动的区域,所以后果是病人的运动越来越困难越失控。

    最让人没想到的,是CAG这个词的过度重复并不是亨廷顿氏病的专利。另外有五种神经方面的疾病,也是因为所谓的“不稳定的CAG重复”而造成的,不过是在其他基因里。小脑性运动失调就是一例。还曾经有过一个奇怪的科研报告:把一长串CAG插到老鼠体内一个任选的基因里之后,老鼠出现了一种发病较晚的神经性疾病,跟亨廷顿氏病很像。所以,不管CAG的过度重复出现在什么基因里,它也许都可能导致神经疾病。还有,其他一些因神经退化而导致的疾病,也是由于一些词的过度重复而造成的,每一个这种词都以C开始以G结尾。有六种病是因为CAG重复造成的。在X染色体上有一个基因,如果CCG或CGG在它的开头部分重复了200次以上,就会导致“脆弱X综合症”。这是一种很常见的痴呆症,病人与病人之间症状区别很大。正常人的重复在60次以下,病人体内的重复可以高达1000次。在第十九号染色体上有一个基因,如果这个基因里CTG重复次数在50~1000次之间,就会出现肌萎缩症。有一打以上的疾病都是因为三个字母的词重复过多引起的,这些病被称为多谷氨酰胺病。在所有这些疾病里,比正常长度长的蛋白质都有一种倾向,就是积累成无法被降解的蛋白质块,导致它们所在的细胞死亡。这些疾病有不同症状只是因为在身体不同部位基因的表达不太一样。

    以C开头以G结尾的这些词,除了代表谷氨酰胺之外,还有什么特殊之处?一种名叫“预期效应”的现象给了人们一些启发。人们早已知道,那些患有严重亨廷顿氏病或“脆弱X综合症”的人,他们的孩子发病时间一般会早于父母,病情也更严重。预期效应是这样一种现象:父母体内的重复越长,基因复制给下一代的时候,加长的长度就越长。我们知道,这些重复的DNA会绕圈,形成一个名叫“发夹式结构”的东西。DNA喜欢自己跟自己黏在一起,形成发夹式的结构,以C开头以G结尾的词里面的C和G在“发夹”中间连接起来。当DNA复制的时候,“发夹”被打开,复制机器可能会滑一下,多余的词就被插到DNA里了。

    有个简单的比方,也许可以帮助读者理解。如果我在这句话里重复说六个词:CAG,CAG,CAG,CAG,CAG,CAG,你会不费劲地数清楚它们。但是我如果把这个词说36次:CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,CAG,我敢打赌你很容易会数错。DNA也是这样。重复的次数越多,复制机器在复制DNA的时候就越容易再插一个进去。它指在“书”页上的手指稍微一动地方,就忘了自己数到哪儿了。另外一种解释(可能两种解释都对)是说,检查复制情况的系统,被称为错配修复系统的系统,只善于查出比较小的错误,而不是这种一个词被大量重复的错误。

    这也许可以解释为什么这些疾病都是在一定年龄之后才发病。伦敦戈爱恋医院的劳拉?曼吉亚丽尼(LauraMangiarini)造出了一些转基因老鼠,它们携带有亨廷顿基因的一个片段,里面有100次CAG重复。当这些老鼠长大了的时候,在它们的所有器官里(只有一个除外),重复的次数都增加了。最多的增加了10次。那个例外的器官是小脑,是后脑里分管运动机能的部分。小脑里的细胞自从老鼠学会了走路之后就不需要再变化了,所以它们也不再进行分裂。复制错误都是在细胞分裂的时候产生的。在人体内,重复的次数在小脑里是逐渐减少的,尽管在其他器官里重复越来越多。在那些制造精子的细胞里,CAG的重复越来越多,这就解释了为什么一个人发病的年龄跟他出生时他父亲的年龄有关:父亲年龄越大,孩子发病年龄越早,病情越严重。(顺便提一句,现在人们知道,整个基因组里的基因突变率,在男性里是女性的五倍。这是因为男性DNA在男性的一生中都在不断复制以提供新鲜的精子细胞。)

    亨廷顿基因的自发突变好像在有些家族里比在其他家族里更容易出现。原因不仅仅是在这些家族里CAG的重复次数刚刚在临界值以下(比如说,在29次与35次之间)。与其他带有同样CAG重复次数的人相比,这些家族里的人基因里CAG的重复次数更容易越过临界值——容易一倍。原因仍然很简单,完全是由序列里有些什么字母决定的。比较一下这样两个人:一个人带有35次CAG重复,后面接着的是一堆CCA和CCG。如果复制DNA的机器滑了一下,加了一个CAG上去,重复次数就加了一次。另外一个人也有35次CAG重复,后面跟着一个CAA,再后面是两个CAG。如果复制DNA的机器滑了一下,把CAA读成CAG了,结果就不是多重复了一次,而是多重复了三次,因为后面已经有两个CAG等在那儿了。

    虽然我劈头盖脑地扔给你那么多有关亨廷顿基因CAG这个词的细节,好像离题越来越远,但是想一想吧,几乎所有这些知识在五年以前(这本书第一次出版是在2000年。——译者注)还没人知道呢。亨廷顿基因还没有被发现,CAG重复还没有被查出,亨廷顿蛋白还是未知物,没有人猜到亨廷顿氏病与其他神经萎缩类疾病是相关的,亨廷顿基因的突变率和突变的原因都还很神秘,也没人能解释父亲的年龄为什么对孩子的病情和发病年龄有影响。从1872~1993年,人们几乎不掌握与亨廷顿氏病有关的任何知识,只知道它是遗传的。自从1993年,有关亨廷顿氏病的知识像蘑菇云一样一夜之间就出现了,这朵蘑菇云如此之大,需要在图书馆里泡上好几天才能把这些知识都读一遍。从1993年以来,有将近100位科学家发表过与亨廷顿基因有关的研究论文。所有这些论文都是关于一个基因的,人类基因组的6万到8万个基因之一。詹姆斯?沃森和弗兰西斯?克里克在1953年打开的那个潘多拉盒子具有无比的力量,如果你还对此不太确信,亨廷顿基因的故事怎么也应该说服你了吧?跟我们从基因组里搜罗到的知识相比,我们从生物学的其他分支得到的知识顶多算是一小勺。

    尽管如此,仍然没有一例亨廷顿氏病被治愈。我宣扬了这么久的这些知识连一个怎样治疗亨廷顿氏病的建议都没有提出来。如果说这些知识对那些正在寻找疗法的人有什么影响,也许CAG重复这个一点不带感情色彩的简单病因使现实变得更加苍白了。大脑里有1000亿个细胞。我们哪能进去把每一个细胞里的CAG重复都人为缩短一些呢?

    南希?韦克斯勒讲了一个她在马拉才博湖畔作研究时一个女人的故事。这个女人到韦克斯勒的草屋去做检查,想看看自己是否有神经疾病的征兆。她看上去很健康,但韦克斯勒知道,在病人发病之前很久,亨廷顿氏病的一些细微征兆就可以被一些医学检查测出来。这个妇女无疑是有这些征兆的。但是,跟其他很多人不一样的是,这位妇女在做完检查之后,固执地问医生,结论是什么。她到底有没有病?医生反问她:你自己认为呢?她说她觉得自己很正常。医生们最终没有告诉她检查结果,只是说在做诊断之前,他们需要更多地了解她。这个女人刚刚离开房间,她的一个好朋友就冲进来了。这个朋友近乎歇斯底里地问医生:你们跟她说什么了?医生们复述了他们的话。“谢天谢地”,这位朋友说,然后解释说,这个女人曾说过,她一定要问医生诊断结果是什么,如果医生发现她有亨廷顿氏病,她马上就去自杀。

    这个故事有几个让人不安的地方。第一,是这个虚假的欢乐结局。这个女人带有亨廷顿突变,她已经被判了死刑,死刑也许由她自己执行,也许由疾病缓慢地执行。不管那些专家对她多友善,她也逃不脱自己的命运。当然,她有全权选择怎样面对她有亨廷顿氏病这个事实。就算她愿意选择自杀,这些医生也无权不告诉她事实真相。但是,医生们没有告诉她真相,也可以说是做了“正确”的事。最敏感的话题,莫过于一个关于病人是否有某种致死疾病的检查的结果;例行公事般直截了当地告诉病人检查结果,对于病人来说也许不是最好的。只报告检查结果却不给病人提供战胜疾病的办法,是给人痛苦的一种方法。不过,在这些之上,这个故事讲述的最深刻的道理,就是如果没有疗法,诊断就是没用的。那个女人认为自己是没病的。假设她还能有五年毫不知情的高兴日子,告诉她说那之后她要面对的是精神错乱一点好处也不会有。

    如果一个人眼看着自己母亲死于亨廷顿氏病,她就会知道,她本人有50%的可能会染上这种病。但是这是不对的,是不是?没有人能够得这种病的50%。她要么100%地有病,要么100%地没病,这两种情况各有50%的可能。遗传测试所能做的,只是检测风险,然后告诉她,表面上的50%,对她来说是100%还是0%。

    南希?韦克斯勒担心科学现在站在一个像第比斯(Thebes)的盲人预言家特瑞西阿斯(Tiresias)那样的位置上。特瑞西阿斯偶然看见了雅典娜(Athena)洗澡,于是她刺瞎了他的眼睛。后来雅典娜又后悔了,但是因为没有办法恢复他的视觉,她就给了他预言未来的能力。然而,具有看到未来的能力是一种可怕的命运,因为他可以看到未来,却无法改变未来。“具有智慧,却不能从这智慧获益,那么剩下的只有悲哀”,特瑞西阿斯对俄狄浦斯(Oedipus)说。或者如南希?韦克斯勒所说:“你想知道你什么时候死吗?尤其是在你没有力量改变你的命运的时候?”从1986年以来,很多有可能患亨廷顿氏病的人可以通过检查来确定自己是否带有致病突变,但是他们选择了不去知道。只有20%左右的人选择了去做检查。有点奇怪却又可以理解的是,男人选择不去做检查的是女人的三倍。男人为自己想的多,女人为孩子想的多。

    就算是那些有得病风险的人想确知自己是否有病,这里包含的伦理也很错综复杂。如果一个家庭里有一个人去做检查,他或她实际上是在替整个家庭做检查。很多父母自己并不愿去检查,但是为了孩子的缘故还是去了。而且,就算在教科书上和医学知识小手册里,对于亨廷顿氏病的错误理解也到处都是。有一个小册子告诉带有致病突变的父母说:你们的孩子有一半会得病。这是不对的:每一个孩子得病的机会是50%。跟一半孩子得病是完全不同的两码事。检查结果如何告诉受检查者也是一个敏感度极高的问题。心理学家发现,如果告诉一个人他的孩子有四分之三的可能性是健康的,而不是有四分之一的可能性有病,他通常会觉得更宽心,虽然这两种说法是一回事。

    亨廷顿氏病是遗传的一个极端。它是纯粹的宿命论,一点不受环境因素的影响。好的生活方式、好的医疗条件、健康的饮食习惯、相亲相爱的家庭、大把的钱,都于事无补。你的命运完全在你的基因里。就像奥古斯丁教派所说的,你上天堂是因为上帝的仁慈,不是因为你做了好事。它提醒我们,基因组这部伟大的书或许会告诉我们最灰暗的一些关于我们自己的知识:关于我们的命运的知识,不是那种我们可以改变的命运,而是特瑞西阿斯那样的命运。

    但是,南希?韦克斯勒对寻找致病基因如此着迷,是因为她的愿望是在找到基因之后修复它从而治愈疾病。她现在离这个目标比起十年前无疑是近多了。“我是个乐观的人,”她写道:“尽管我知道,当我们处于一个能够诊断疾病却无法治疗它的阶段的时候,我们会很痛苦,但我仍然相信关于疾病的知识最终是值得我们为之痛苦的。”

    南希?韦克斯勒自己怎样了?80年代晚期,她和姐姐爱丽丝(Alice)与父亲米尔顿曾好几次坐下来商量是否要去做检查。这些争论气氛紧张、言辞激烈、结论不明确。米尔顿反对去做检查,理由是检查的结果并不是百分之百准确,可能会出现误诊。南希本来已经下了决心要去接受检查的,但是她患病的可能是客观存在的,在现实面前她的决心慢慢“蒸发”了。爱丽丝在日记里记录了这些争论,后来写成了一本书:《探索命运》。最终,两姐妹都没有去做检查。南希现在跟她母亲被确诊的时候是同一年龄。

    第六号染色体 智慧

    遗传论者的错误,并不在于他们认为智商在某种程度上是受遗传影响的,而是他们把“遗传”与“不可避免”等同起来了。 ——斯蒂芬·杰·古尔德

    我一直在误导你们,而且一直在破坏我自己定的规矩。作为惩罚,我应该把下面这句话写100遍:
    基因的存在不是为了致病的。

    即使一个基因在坏了的时候会致病,大多数我们体内的基因都没有坏,它们只是在不同的人体内有一些区别。蓝眼珠基因不是坏了的棕眼珠基因,红头发基因也不是坏了的棕头发基因。用术语来说,它们是等位基因——是遗传信息中同一个段落的不同形式,对于环境有同样的适应性,都是“合法”存在的。它们都是正常的,正常的基因有不止一种形式。

    好了,该停止东一下西一下地拨拉那些杂草了,到了集中精力对付那丛最枝蔓缠绕的灌木的时候了,到了对付基因森林里最粗壮、最扎人、最密不透风的那丛荆棘的时候了。这就是智力的遗传性。

    这丛荆棘最有可能存在于第六号染色体上。1997年末,一个大胆的(也可能是傻大胆)科学家向全世界宣布说,他在六号染色体上找到了一个“决定智力的基因”。这确实需要勇气,因为不管他的证据多么有力,很多人根本就不相信“决定智力的基因”这种东西有可能存在,更别说相信它们真的存在了。他们之所以怀疑,不仅仅是因为在过去的几十年里这方面的科研被政治化,任何提及智力的遗传因素的人都会被人“另眼相看”,也是因为大量的生活常识说明智力有非遗传因素。自然母亲可不放心让一个或几个基因去盲目地决定我们的智力,她给了我们父母、学习、语言、文化、教育,让我们通过这些去塑造我们的智力。

    但是,罗伯特·普洛民(Robert Plomin)宣布,他和他的实验伙伴们在智力的遗传性方面做出了一项重大发现。每年夏天,都有从全美国挑选出来的一组智力超常的孩子到爱荷华州去度夏令营。这些孩子的年龄在12~14岁之间,他们之所以被选中是因为他们在做学校作业的过程中表现出相当高的天分。在去夏令营的五年之前他们接受智商测试的时候,得到的分数比99%的人都高,他们的智商高达160以上。普洛民的研究小组认为,如果有一些基因能够对人的智力有影响,那么这些孩子一定拥有这些基因的最佳形式。他们取了所有这些孩子的血样,开始用第六号染色体DNA的片段做“鱼饵”来钓智力基因这条“鱼”。(他选择了第六号染色体,是因为他从以前的研究中得到了些启发。)渐渐地,他发现这些孩子的第六号染色体长臂上有一小段DNA序列往往跟普通人的不一样。并不是每一个聪明孩子在那个地方的DNA序列都与普通人不同,但是带有这个不同序列的孩子多得足以引起研究人员的注意。这个序列位于一个名叫IGF2R基因的中间。

    智商的历史并不让人乐观。在科学史上的所有争论里,很少有像关于智慧的争论那样充满着愚蠢意见的了。我们中的很多人,也包括我自己,是带着不信任和偏见来谈这个话题的。我不知道我的智商是多少。我上学的时候测过智商,但是从来没人告诉过我我的分数。因为我当时没意识到那个测验是有时间限制的,所以我没抓紧时间做题,结果是我只来得及答了一小部分题,分数应该高不了。当然话说回来,我没意识到测验是有时间限制的,这本身就不像是聪明人干的事。这个事件让我对用一个数字来衡量人的智力这种十分粗糙的做法失去了敬意。想在半小时之内测量出智力这么一件复杂的事,在我看来很荒唐。

    事实上,最早的智商测验出发点就带着偏见。弗兰西斯·高尔顿(Francis Galton,19世纪和20世纪初探险家、人类学家、优化人种论者)最早开创了用研究孪生子来把先天能力和后天能力分开的办法,他一点不隐瞒他这样做的原因:
    我的目的是要记录不同的人之间不同的由遗传而得到的能力,家族和种族之间的不同,以了解人类历史允许我们在多大程度上用更优良的人种去代替那些不够优秀的人种,以思考用适当方法来完成此举是否是我们的义务。惟其如此,我们才能够更快地推进进化的过程,避免因为把进化完全交到自然进程的手中而引起的焦虑。
    换句话说,他想把人当成牛那样有选择地繁殖。

    但是,智商测试在美国才变得真正丑陋起来。H.H. 戈达德(Goddard,心理学家)把法国人阿尔弗雷德·比内(Alfred Binet)发明的智商测试题搬到美国来,让美国人和未来的美国人接受测试。他满不在乎地总结道:很多从国外来到美国的移民不仅仅是“白痴”,而且训练有素的专家一眼就能把这些“白痴”辨别出来。他的智商测验主观得可笑,而且题目对中产阶级和受过西方文化熏陶的人有利。有几个来自波兰的犹太人知道网球场的正中间有一个网子?他一点都不怀疑智慧是天生的:“每一个人头脑的能力和智力水平都是由精、卵细胞融合的时候染色体的结合而形成的。这之后,任何因素都不会对其有什么影响,除非是严重事故破坏了染色体。”

    持有这种观点的戈达德明摆着是不正常。但是他在政府决策过程中施展了足够大的影响,以至于他被批准去测试那些刚刚到达爱丽丝岛(美国纽约市附近的一个小岛,过去从欧洲乘船到达美国的移民要先在该岛办理入境、检疫等手续)的移民。他之后还有些人比他还极端。第一次世界大战期间,罗伯特·亚尔克斯(Robert Yerkes,比较心理学家)说服了美国陆军让他给招募来的上百万新兵进行智商测验。尽管陆军根本没有太在乎这些测验的结果,这些测验却给亚尔克斯和其他人提供了发表意见的讲坛和数据。他们的意见是,智商测验可能有很高的商业价值,对国家也很重要,因为它能够轻易快捷地把人分类。在陆军里进行的这些测验,最终影响了国会,于1924年通过了一个限制移民法案。这个法案严格限制了来自南欧和东欧的移民人数,根据是这些地区的人比在1890年以前占了美国人口大多数的北欧移民要笨。这个法案的目的跟科学没有一点关系。它更多地反映了种族的歧视与工会的地方保护主义。但是,它在智力测验这个伪科学中找到了借口。

    优化人种论的故事我要留到后面一个章节再讲,但是一点不奇怪的是,智力测验的历史背景使得大多数学者,特别是社会学者,对任何与智商测验有关的东西都有很重的不信任感。在第二次世界大战之前,钟摆摆向了与种族歧视和优化人种论相反的方向,那时候,智慧的遗传性简直成了一个禁忌。亚尔克斯和戈达德那样的人把环境对人的能力的影响忽视得如此彻底,他们居然用英文试卷来考非英语国家的人,用需要书写的试卷考那些文盲——这些人在接受考试的时候才第一次拿起笔来。他们对于遗传性的相信是如此一厢情愿,以至于后来的批评者们都认为他们的主张一点儿根据都没有。归根结底,人类是具有学习能力的。他们的智商可以受他们所受的教育的影响,所以,心理学也许应该假设智慧没有任何遗传成分:智慧完全是学习的结果。

    科学应该是通过建立假说然后试图证伪它们而前进的。但是有时候事情并不如此。20年代的基因决定论者们总是在寻找能够证明他们观点的证据,从来不去寻找可能证伪他们观点的证据,60年代的环境决定论者们也同样总是在寻找能够证明他们观点的证据,对于相反的证据,他们本应是去积极寻找,但他们却对这些证据视而不见。与常识相违的是,在智力遗传性研究这个科学的一角里,“专家”们总是比外行犯更大的错误。普通人一直就知道教育非常重要,但他们同时也一直相信人的内在能力是有差异的。反而是“专家”们荒唐地在走极端。

    没有一个智慧的定义是被普遍接受的。智慧的标志是思考的速度,还是推理的能力?是记忆力、词汇量、心算能力?是进行智力活动时精力旺盛?还是仅仅是一个人对于智力活动的追求?聪明人在某些事上可以是惊人地笨——知识面是否广、有没有心计、是否能不撞到路灯柱上,如此等等。一个在学校里成绩很差的足球运动员也许可以在瞬息之间抓住机会作一记妙传。音乐能力、语言能力、揣测别人心理的能力都是能力,但一个人不一定同时具有这些能力。霍华德?加德纳(HowardGardner)(当代美国教育学家、心理学家。——译者注)卖力地提倡过一个理论,把智慧分成许多种,每一种天赋都是一种独立存在的能力。罗伯特?斯滕伯格(RobertSternberg)(当代美国哲学家。——译者注)则提出,智慧可以分成三类:分析能力、创造力、实践能力。需要分析能力的问题是由别人提出的,问题界定得很清楚,解决问题的所有信息都已存在,只有一个正确答案,跟日常生活经验没有关系。说简单一点,就好像是学校里的考试。实际问题则要求你把问题认识清楚、表达出来。这种问题常常没有清楚的定义,缺少一些解决问题所需的信息,不一定只有一个答案,但与日常生活直接有关。巴西街头的孩子们可能有些在学校里数学不及格,但他们在日常生活所需要的数学面前却不比谁傻。对于职业赛马手来说,用智商来预测他们的成绩,结果是很不准的。如果用需要动手的模型来测智商,一些赞比亚儿童成绩会非常好,用纸和笔来测,他们的成绩就会很糟。英国孩子则正相反。

    学校注重的是分析能力,智商测验也是如此,这几乎成了定义。智商测验不论在形式和题目上有多么大的区别,它们总是青睐具有某一种特定思维的人。不过,它们还是能测出一些东西。如果你比较人们在各种智商测验里的成绩,你会发现它们有一定的一致性。统计学家查尔斯·斯皮尔曼(Charles Spearman)在1904年首先发现了这一现象,一个孩子如果在一科测验里得到高分,在其他科目里也容易得高分,各种智力能力不是独立存在,而是互相关联的。斯皮尔曼把这称为广义智慧,或者简称为g。有一些统计学家提出,g只是统计上的一种托词,只是测量人在考试中的表现的诸多方法中的一种。另外一些人则认为g是民间流传的说法的一种直接量化:在谁聪明谁不聪明这个问题上,多数人的意见往往都是一致的。不管怎么说,g无疑是管用的。在预测一个孩子今后在学校里成绩如何方面,它比几乎其他任何测量方法都准确。在g是否客观存在方面,也确有一些真正客观的证据:人们在做需要检索和找出信息的任务时,他们完成任务的速度与智商是成正比的。广义智慧在人的不同年龄惊人地一致:在6岁到18岁之间,你的智慧当然是在快速增长,但是相对于你的同龄人来说,你的智慧却是几乎不变的。事实上,一个婴儿适应一种新的感官刺激所需的时间跟他今后的智商有很强的关联。就好像是说,如果能够对一个婴儿将来受的教育有一定估计,我们就能在一个几个月大的婴儿身上看出他将来的智商。智商分数与在学校里的考试成绩有很强的关联。高智商的孩子好像能更多地吸收学校里教的那些东西。

    所有这些不是要肯定教育无用论:学校与学校之间、国家与国家之间学生在数学和其他学科上平均成绩的巨大差异,显示出教育能够取得多大的成就。“智慧基因”不是在真空里运作的,智慧需要环境刺激才能发育。

    现在,就让我们接受这个一看就有点愚蠢的关于智慧的定义:智慧就是几种智商测验的平均得分——“g”——然后看看这个定义把我们领向何处。智商测验以前很不精确,现在也离完美很远,还谈不上真正客观,因此,各种测试的结果比较一致,就更显得不可思议了。如果智商与某些基因的联系透过被马克·菲尔波特(Mark Philpott,哲学家)称为“不完美的测试之雾”都能够显现出来,那就更说明智力有很强的遗传因素。另外,现代的测试已经有了很大改进,客观性更强,也更不会使受试人的成绩因文化背景和是否懂得某种专门知识而受到影响。

    在20年代,以优化人种为目的的智商测试达到高峰,当时关于智力的遗传性还没有任何证据,它只是对人们进行智商测试的那些专家们的假设。现在已经不同了。智商(先不说智商到底是什么)的遗传性已经在两种人里检验过了:孪生子和被领养的孩子。不管你怎么看,研究结果都叫人吃惊。在什么决定了智力这个题目上,所有研究都发现,遗传占有相当重要的地位。

    在60年代的时候有一个时尚,就是把孪生子从一出生就分开,特别是如果想让别人领养他们的时候。在很多情况下,人们这样做时并没有多想,但是有些人是故意这样做的,他们的动机是科研:去检验并希望能够证实当时占主导地位的理论——人的个性取决于孩童时期的养育方式和环境,与遗传无关。最著名的例子是纽约的两个女孩,贝丝和爱咪,她们一出生就被一个极富好奇心的弗洛伊德学派心理学家分开了。爱咪的养母是个很穷的人,很胖,没有安全感,没有爱心,所以一点不奇怪地,爱咪长大之后成了一个神经质的、内向的人。这正符合弗洛伊德理论的预言。但是,贝丝也成为了这样的人,跟爱咪一模一样,尽管她的养母富有、安详、愉快而有爱心。20年之后当贝丝和爱咪重新见面的时候,她们二人性格上的差别小得简直看不出来。对于她们二人的研究,远没有证明养育在塑造我们性格方面的重要性,相反地,它证明了天性的力量。

    研究被分离开的孪生子,最初是由环境决定论者开始的。但是后来他们的对手也开始用这一方法,代表人物之一,是明尼苏达大学的托马斯?布沙尔(ThomasBouchard)。从1979年开始,他在世界各地寻找那些被分离开了的孪生子,并利用测试他们的个性与智商的机会让他们团聚。同时,其他的研究则注重于比较被收养的人与他们的养父母、亲生父母、同胞手足之间智力的差异。把所有这些研究放在一起,把成千上万人的智商测验结果集中起来,就得到了以下这个表。每一个数字都是一个百分比,代表的是两种人的智力之间的相关性,百分之百的相关性意味着两人智力完全一样,百分之零意味着两个人的智力完全无关。

    同一个人接受两次智商测验87
    在一起长大的同卵双生子86
    从小被分离开的同卵双生子76
    在一起长大的异卵双生子55
    同胞兄弟姐妹47
    父母与子女(生活在一起)40
    父母与子女(没有在一起生活过)31
    亲生父母不同却被同一个家庭收养的孩子0
    没有血缘关系又不住在一起的人0

    毫不奇怪地,相关性最大的是在一起长大的同卵双生子。他们有共同的基因、在共同的子宫里被孕育、生活在同一个家庭里,他们智商的区别与一个人做了两次智商测验的区别一样。异卵双生子虽然是在共同的子宫里被孕育,他们的基因却并不比两个普通兄弟的基因更相似,但是他们的相关系数比两个普通兄弟更相似,说明胚胎在子宫里的经历或者是孩子最初经历的家庭生活有一点点作用。但是让人目瞪口呆的结果,是那些有不同的亲生父母却被同一个家庭收养、一起成长的孩子,他们的智商分数之间的相关性是零。住在同一个家庭里对于智商一点影响也没有。

    子宫的重要性是最近才被人们认识到的。有一项研究表明,孪生子在智力方面的相似性,有20%可以归结到子宫环境上,而对于两个非孪生的兄弟姐妹来说,子宫环境对智力的影响只占5%。区别在于,孪生子不仅是在同一个子宫里被孕育,而且是在同一时间;非孪生的孩子则不是。子宫里发生的各种事件与变化对于我们智力的影响,比我们出生之后父母对我们的教育所起的作用高两倍。所以,即便是智力中可以归结到“后天”因素而不是先天因素的那一部分,也是一种早已成为过去、不可更改的后天因素。但是另一方面,属于先天因素的那些基因,直到青少年时期都在表达。所以,是先天因素,而不是后天因素,要求我们不要在一个孩子很小的时候就对他的智力下定论。

    这真是怪异之极。它简直是对基本常识的挑战:我们孩童时期读过的书、家庭成员间的对话,肯定对我们的智力有影响吧?没错,但问题不在这里。因为遗传因素可以决定在一个家庭里父母和孩子都喜欢智力活动。除了研究孪生子和被领养的孩子之外,还没有任何其他研究试图把父母的遗传与父母的教育对智力的影响分开。对孪生子和被领养的孩子的研究,在目前清楚地有利于这样一个观点:父母与孩子智力水平之间的相似性是由遗传因素决定的。对孪生子和被领养的孩子的研究当然可能会误导,因为毕竟这些研究只局限于某一类家庭。他们主要是白人中产阶层的家庭,极少有黑人或穷人家庭被列入研究之列。在美国的白人中产家庭里,如果读书范围和谈话内容大同小异,也并不是什么奇怪的事。有一项研究的对象是那些被另一种族的家庭收养的孩子,在这项研究里,人们发现孩子的智商与他们的养父母的智商有一点相关(19%)。

    但是这仍然是很小的效应。所有这些研究得到的一致结论,是你的智商大约有一半是由遗传决定的,不到五分之一是由你和你的兄弟姐妹们共同的环境——家庭——决定的。剩下的是子宫的作用、学校的作用和其他外部影响,比如朋友的影响。即使是这个结论,也有点误导。你的智商不仅随年龄会有变化,遗传因素对它的影响也会变化。现在你长大了,积累了很多经验,遗传对你智力的影响也增加了。你会说:什么?是减小吧?不是的。在儿童时期,遗传对智商的影响占51%。在你长大的过程中,你内在的智力逐渐显露出来,其他因素对你智力的影响渐渐消失。你会选择与你的能力、喜好相符的环境,而不是调整你的能力、喜好去适应环境。这证明了两个至关重要的观点:基因的影响并不是从精子和卵子结合的时候起就固定不变了,环境的影响并不是一直不断地累积的。智力是遗传的不等于它是不变的。

    在先天还是后天这个漫长的争论刚开始的时候,弗兰西斯?高尔顿用了一个也许很贴切的比喻:“很多人都这样取乐过:把小树枝扔进溪流中,观察它们随水流走的过程,观察它们怎样停止运动。一开始遇到偶然的一个小障碍,然后又是一个;以及它们的前进又是怎样被环境里的因素加速的。我们可能会认为这些因素每一个都对小树枝起了作用,认为小树枝的最终命运就是被这些微不足道的事件左右的。但是不管怎样,所有的树枝都成功地顺水流下去了,而且速度总体来说都差不多。”所以,有证据表明,让孩子接受更好的、高强度的教育确实可以戏剧般地提高孩子的智商,但这只是暂时的。小学毕业的时候,那些曾经在“好的开端”(HeadStart,是美国运行着的一个对幼儿实行早期教育的计划。——译者注)这样的早期教育班里受过教育的孩子,与其他没有上过这些班的孩子已经没有区别了。

    对于这些研究的一种批评,是它们都只研究了社会里一个阶层的家庭,因此把遗传的作用放大了一些。如果你同意这样的批评,那随之而来的就是:在一个公平的社会里,遗传的作用比在一个不公平的社会里大。事实上,一个完美的英才社会的定义,就是一个人的成就取决于他们的基因的社会,因为所有人都有同样好的环境。在身高方面,我们已经在飞速地接近这样一个状态了:在过去,营养不良使得很多孩子在长大之后没有达到他们应该达到的“遗传”高度。今天,随着儿童期营养的普遍提高,个体之间身高的差异主要是由遗传原因决定的。所以,我猜想遗传在身高方面的决定作用是在增加的。同样的说法还不能用在智力这一方面,因为在有些社会里,环境的变量——例如学校质量、家庭习惯、财富——是在变得越来越不平等,而不是越来越平等。不过不管怎么说,在公平社会里基因的作用反而更大,这听起来像个悖论。

    对于智力遗传因素的这些估量,只适用于解释个体间的差异,却不能用来解释群体间的差异。虽然遗传对于智力的影响在不同人群和种族里不一定一样,但事实证明它的影响是一样的。但是,如果因为两个个体之间智商的差异有50%是因为遗传因素,就得出结论,认为黑人的平均智商与白人的平均智商间的差异或白人与亚洲人平均智商的差异是由遗传决定的,那就犯了逻辑错误。其实,这种结论不仅在逻辑上有错误,到目前为止,与事实依据也是不符的。这样,支撑最近的一本书《钟形曲线》(20世纪90年代在美国出版的心理学书籍,探讨种族之间智力的区别及其原因。它宣称不同人种智力上的差异是由遗传决定的,有些种族的基因比其他种族优秀,因此出版之后在美国引起轩然大波。批评者认为该书曲解科学研究的成果,宣扬种族主义。——译者注)里那些观点的“栋梁”就倒塌了。黑人与白人的平均智商确有区别,但没有任何证据表明这个差异是遗传的。事实上,跨种族领养儿童的一些例子,说明在白人家庭里长大的黑人孩子,其智力与一般白人并无区别。

    如果对于一个个体来说,智力有50%是遗传的,那么肯定有些基因对智力有影响。但是我们现在说不出有多少这样的基因。关于这些基因,我们现在所能说的只是:有一些基因是可变的,也就是说,它们在不同的个体里可以以不同形式存在。遗传性与决定论是非常不同的事情。对于智力影响最大的基因完全有可能在个体之间是不变的,在这种情况下,这些基因就不会导致个体差异。比如说,我每只手上有五个手指头,大多数人也是如此。原因是我们得到的遗传配方里写明了要有五个手指头。如果我走遍世界去找只有四个手指的人,那么我找到的人里,95%以上都是因为意外事故失去了一个手指头的。我会发现,有四个手指不是因为遗传因素,几乎在所有情况下四个手指都是因为环境原因造成的。但是这不说明基因不决定手指的数量。一个基因可以决定不同的个体拥有同样的身体特征,正如基因也可以决定不同的人有不同的身体特征。罗伯特?普洛民寻找智商基因的“大海捞针行动”,只会找到那些可以用多种形式存在的基因,却找不到那些在所有个体里都一样的基因。这样,他们可能会找不到一些决定智力的关键基因。

    普洛民的第一个基因标识,六号染色体长臂上的IGF2R基因,乍看上去可不像是一个“智力”基因。在普洛民把它和智力联系在一起之前,它出名是因为它与肝癌有关。以前,它可能被称为“肝癌基因”,这显示了用一个基因可能导致什么病来命名这个基因的错误之处。将来我们总会决定:这个基因抑制癌症的功能和它对智力的影响哪个是主要的功能哪个只是“副作用”。当然了,也可能这两种功能都是副作用。由这个基因编码的蛋白质有着如此枯燥的功能,让人真怀疑是否有什么神秘之处我们还没发现。它的功能是“细胞内的传输”:把磷酸化了的溶酶体酶从高尔基体(存在于细胞内的小体,对于蛋白质和脂类分子的加工和分类在其中进行。——译者注)运到细胞表面的溶酶体中去。它是个分子水平的送货车。在有关它的功能方面,没有一个字提到脑电波之类的事。

    IGF2R是个异常庞大的基因,总共有7473个字母,但是在编码蛋白质方面有意义的那些字母分布在基因组里由9.8万个字母所组成的一段上,中间被没有意义的字母(称为内含子)打断过48次。就好像杂志上一篇文章被广告打断了48次,怪烦人的。在这个基因的中间,有一些重复的片段,它们的长度容易变化,也许会在人与人之间智力水平的不同方面起作用。因为这个基因看起来跟胰岛素之类的蛋白质及与糖的分解隐约有些关系,所以要提一下,另外一项研究发现,智商高的人的大脑利用起葡萄糖来“效率”更高。在学着玩一个名叫“叠四块”的电脑游戏的时候,高智商的人与低智商的人相比,一旦熟练之后,大脑对葡萄糖的需要量降低得更快。但是这有点像是在抓救命稻草的样子。普洛民的这个基因如果被证明真的与智力有关,也只会成为许许多多的能够用各种不同方式影响智力的基因中的一个。

    普洛民的发现,最重要的价值在于:人们可以声明,研究孪生子和被领养的孩子是太过间接的方法,不足以证明遗传因素对于智力的影响,但是面对一个随着智力水平高低而变化的基因,人们很难提出有力的反对意见。这个基因的一种形式,在爱荷华那些智力超常的孩子体内比在一般人体内多一倍,这极不可能只是偶然现象。但是它的作用肯定很小:平均来说,这个基因的这种形式只会给你的智商加4分。这就有力地说明这个基因不是什么“天才基因”。普洛民暗示过,他对爱荷华那些智力超常孩子的研究还发现了至少10个“智力基因”。但是,遗传决定智商这一说法在重新受到科学界尊重的同时,在很多角落引起的却是疑惑和惊讶。它仿佛让人们看到了二三十年代使科学声誉受损的优化人种论的幽灵。斯蒂芬?杰?古尔德就是一个对过分强调基因作用持严厉批评的人,他说过:“一个部分由遗传因素而形成的低智商者,通过适当的教育可能会有极大的改进,也可能不会。低智商是由遗传因素造成的这一点,并不足以让我们对这个人的智力下定论。”这是对的,但同时这也是麻烦所在。并不是说人们一看到遗传的作用就不可避免地都成了宿命论者。导致“阅读困难症”(一种因为神经系统原因而出现的学习困难,患儿智力正常,但无法准确、流利地识别字词,常有拼写、阅读困难。——译者注)的基因突变被发现之后,老师们并没有认为这种病没救因而放弃有病的孩子。相反,这个发现促使老师们用特殊的教学方法去教有病的孩子。

    事实上,智商测试方面最著名的先驱者法国人阿尔弗雷德?比内强烈地提倡过:智商测试的目的不是为了奖励那些有天赋的孩子,而是为了更多关注那些没有天赋的孩子。普洛民却认为自己就是一个从智商测试中受益的最好例子。在芝加哥地区他们这个大家族里,他这一代的32个孩子中,他是惟一一个上过大学的。他的好运源自于他在一次智商测试里的高分数,正是这个分数促使他的父母送他进了一所强调学业的学校。美国对这类测试的热衷与英国对它的厌恶形成鲜明的对照。短命而名誉不好的“11岁以上”考试,是英国惟一存在过的一个所有11岁以上学生都必须参加的智商测试。它的依据是西里尔·伯特(Cyril Burt,心理学家)的研究数据(有可能还是伪造的)。在英国,“11岁以上”考试在人们的记忆里是灾难性的,它把有些智力很好的孩子打入了质量不好的学校。但是在以“英才社会”自居的美国,类似测试却是那些有天赋的穷人孩子在学术上取得成就所需的敲门砖。

    也许,智商的遗传性暗示了一些完全不同的东西,这些东西一次性地证明了,高尔顿试图把先天与后天因素区别开来的努力,从观念上就错了。想想这么一个看上去愚蠢的事实:平均来讲,智商高的人比智商低的人耳朵更对称。智商高的人整个身体都更对称:脚的宽度、膝盖的宽度、手指长度、手腕宽度以及手肘宽度都与智商有关联。

    90年代早期,对于身体对称性的兴趣又复活了。这是因为它可以揭示出发育早期的一些奥秘。身体的有些不对称性,在人群里是很有规律的。例如,在大多数人体内,心脏位于胸腔的左侧。但是,另外一些不那么明显的不对称性,却是比较随机的,哪边都可以。有些人的左耳比右耳大,另外一些人却刚好相反。这种被称为“起伏性不对称”的不对称性,它的程度,是对于身体在发育过程中受到了多少压力的很敏感的量度:感染、毒素和营养不良造成的压力。智商高的人身体更对称这一事实,说明这些人在母亲子宫里和在童年时期身体所受压力比较小。也许他们的身体有更高的抗压性。这种抗压性也许也是有遗传因素的。这样,智商的遗传性也许并不是由“智慧基因”直接决定的,而是由那些抗毒素、抗感染的基因间接决定的。也就是说,是由那些与环境相互作用的基因决定的。你遗传到的不是一个高智商,而是在某种特定环境下发展出高智商的能力。如果是这样,那么怎么能把影响智力的因素分成遗传因素和后天因素呢?明明白白地是不可能的。

    支持这个理论的证据来自于所谓的“弗林效应”。詹姆斯?弗林(JamesFlynn)是一个在新西兰工作的政治学家,在80年代,他注意到这样一个现象:在世界各国,一直以来人们的智商都是在增长的,大约每十年增长三点。原因却很难确定。也许原因与身高的增长是一样的:童年时期营养的提高。危地马拉有两个村庄在几年里一直得到由外界援助的大量蛋白质补剂,十年之后再去测试,发现儿童的智商有了显著的提高,这是弗林效应在局部地区的表现。但是,在营养充分的西方国家里,人们的智商仍然是在迅速提高的。学校跟这个现象也没有什么关系。因为中断学校教育只会给人的智商带来暂时影响,而且,分数迅速上升的那些测试项目,恰好测的是学校里不教的东西。分数上升最快的,是那些测试抽象推理能力的项目。一位科学家,乌瑞克?耐瑟(UlricNeisser),(当代美国认知心理学家。——译者注)相信弗林效应的原因是当今社会日常生活中充斥着高强度的、复杂的视觉图案:动画片、广告、电影、海报、图像和其他光学显示,而这些是以书面语言的减少为代价的。儿童的视觉环境比以前丰富得多,这有助于培养他们解决视觉方面的智力测试题,而这正是智商测试里最常见的题型。

    但是,这个环境因素乍看起来很难跟研究孪生子所得出的智商高遗传性的结论调和起来。就像弗林本人说的,50年来人们的智商平均增加了15点,要么是因为50年代的人好多是傻子,要么是因为现在的人好多是天才。因为我们并没有处在文化复兴的时期,所以他认为,智商测试并没有测到人的内在能力。但是,如果耐瑟是对的,那么当今世界环境只不过是一个有利于人们发展智力中的一种——对于视觉符号的娴熟——的环境。这对于“g”这个提法(智力是综合能力)是个打击,但并没有否定不同种类的智力是有遗传性的。在200万年的人类文化之间,我们的祖先传下来了通过学习才能掌握的各地不同的习俗,人脑也许已经通过自然选择学会了怎样发现和掌握在自己本地文化里重要的技能,以及自己能够掌握得比较好的技能。一个孩子所处的环境不仅与他的基因有关,也与外界因素有关,而一个孩子有能力找到甚至创造出适合自己的环境。一个有动手能力的孩子会学习需要动手的技能;一个书虫子会去找书。也许,基因创造的是一种欲望,而不是一种能力?不管怎么说,近视的遗传性不仅来自于眼球形状的遗传性,也来自读书习惯的遗传性。因此,智力的遗传性也许不仅仅是先天能力的遗传,也是后天因素的遗传。对于由高尔顿发起的这一世纪以来关于智力遗传性的争论,这真是个内容充实又令人满意的结局。

    第七号染色体 本能

    人类本性这张“白纸”从来就不是白的。 ——W •D •汉密尔顿(W. D. Hamilton,生物学家)

    没有人怀疑基因能够影响身体结构。基因能够影响行为这个说法,却不那么容易让人接受。但是,我希望能够说服你,在第七号染色体上有一个基因,它的一个重要作用是使人拥有一种本會巨,一种在所有人类文化里都占有中心地位的本能。

    本能是一个用在动物身上的词:三文鱼会寻找它出生的那条溪流;黄蜂会做它那早已去世的父母做过的事;燕子迁移到南方过冬。这些都是本能。人类不需要对本能有太多的依靠,他们学习,他们是有创造力的、生活在文化环境里的、有意识的生物。他们做的每一件事都是自由意志、巨大的脑子和父母教育的产物。

    在20世纪里,心理学和其他社会科学里占主导地位的说法就是这样的。如果谁不这样想,谁相信人类行为有其内在性,那就等于是掉进了决定论的陷阱,就等于在一个人出生之前就给了他一个无情的宣判:他的命运是写在他的基因里的。其实,社会科学发明了很多比基因决定论更让人心惊的决定论:弗洛伊德的父母决定论、马克思的社会经济决定论、列宁的政治决定论、弗朗兹•博厄斯(Franz Boas)与玛格丽特W•D•Hamilton:20世纪英国著名生物学家。米德(Margaret Mead)的同龄人压力文化决定论、约翰•沃森和B •F •斯金纳(Skinner)的刺激一反应决定论、爱德华•萨皮尔(Edward Sapir)和本杰明•沃夫(Benjamin Whorf)的语言决定论。(弗朗兹•博厄斯:19世纪末20世纪初著名人类学家,生于德国,后来在美国从事研究工作;玛格丽特•米德是他的学生,也是20世纪初美国著名人类学家。约翰•沃森和B•F•斯金纳都是20世纪上半叶美国著名心理学家。爱德华•萨皮尔生于德国,后移民到美国,与本杰明•沃夫同为20世纪上半叶美国著名语言学家。)在一个世纪里,社会学家们告诉具有各种不同思想的人:说生物学因素决定行为就是决定论,而说环境决定行为就不违反人有自由意志的说法;动物有本能,人类则没有。这是历史上规模最大的误导行为之一。

    从1950年到1990年,环境因素决定论这座大厦轰然倒塌了。20年的精神分析法都没有能够治好的狂郁症,用一剂锂疗法就治好了,弗洛伊德的理论也就在那一时刻衰落了。(1995年,一位妇女状告她的心理医生,因为这位医生给她进行了三年多心理治疗都没有治好的病,在她服用了三个星期的百忧解(一种治疗抑郁症的药物。)之后就痊愈了。)德里克•弗里德曼(Derek Freeman)(德里克•弗里德曼:当代澳大利亚人类学家。)发现,玛格丽特•米德的理论(少年的行为有无限的可塑性,可以被文化任意塑造)是建造在主观偏见、不充分的材料以及她的那些少年研究对象恶作剧故意撒谎的基础上的。这样,文化决定论也破灭了。行为主义的破产,则源于1950年在威斯康辛州所做的一个著名实验。在这个实验里,尽管失去了妈妈的猴子幼婴只有从一个铁丝做的猴妈妈那里才能得到食物,它们仍然建立了对布做的猴妈妈的情感依恋。这违反了这样一个理论:我们哺乳动物能够对任何给我们食物的东西都建立感情。看来,喜欢柔软的母亲的感觉也许是天生的。

    在语言学里,大厦出现第一个裂缝是在诺姆•乔姆斯基(Noam Chomsky,语言学家)发表了《句法结构》一书的时候。在这本书里他阐述说,人类语言,人类行为里最有文化特征的一种行为,与文化的关系和与本能的关系一样强。乔姆斯基重新提出了一个关于语言的旧观点,亦即达尔文描述过的“掌握一种艺术的本能倾向”早期心理学家威廉*詹姆斯(Wiiliam James),小说家亨利•詹姆斯(Henry James)的兄弟,强烈地支持这样一个观点:人类的行为表明,人类比动物有更多种的本能,而不是更少。但是他的说法在20世纪的大部分时间里被忽视了。是乔姆斯基把这些理论重新发掘出来。

    通过研究人们说话的方式,乔姆斯基得到结论,认为在所有语言之间都存在着内在的一致性,因此说明人类语言存在一种共同的语法。我们都知道怎样用这种语法,但我们对我们的这种能力并无知觉。这就意味着我们的大脑里有一部分由于基因的原因先天就有特殊的、学习语言的能力。说白一些,词汇不可能是天生的,否则我们都会说同一种没有变化的语言。但是,也许一个孩子在学习了本地社会所用的词汇之后,把它们扔进了一套天生的、内在的规则里去了。乔姆斯基的证据是语言学方面的:他发现,我们说话的时候有一种规律,既不可能是父母教的,也不可能是轻易地从日常生活中倾听别人说话的过程中学会的。例如,在英文里,把一句话变成一个问题,我们得把主要动词放到句子的最前面去。可是我们怎么知道哪个动词应该被放到最前面呢?看一看这句话:“A unicorn that is eating a flower is in the garden”(花园里有一只正在吃花的独角兽)。你可以把第二个“is”挪到最前面去,变成一个问句:“Is a unicorn that is eating a flower in the garden?”但是如果你把第一个“is”挪到最前面去,句子就不通了:“Is a unicorn that eating a flower is in the garden?”区别在于,第一个“is”是一个名词词组的一部分,这个词组在大脑里引起的意象不是随便一个独角兽,而是随便一个正在吃花的独角兽。4岁的孩子,还没有学过什么是名词词组的时候,都能够很不费力地运用这个规则。他们好像就会这个规则。他们也不需要听说过或用过“A unicorn that is eating a flower”这个词组,就知道这个规则。这就是语言的美:我们所说的每一句话都几乎是一种新的组字方法。

    乔姆斯基的推测在那之后的几十年里被漂亮地证明了,证据来自许多不同领域。所有证据都可以归结到心理学家、语言学家史蒂文•平克(Steven Pinker)做出的一个结论上:为了学习人类语言,我们需要有的是人类语言的本能。频克被人戏称为第一个写出的东西别人看得懂的语言学家。他令人信服地搜集了多种证据,证明语言能力的内在性。首先,是语言的普遍性。所有人类的成员都会一种或几种语言,不同语言的语法复杂程度都差不多,

    即使是新几内亚高地上那些从石器时代就与外界隔绝的人群所使用的语言也是如此。所有人都很小心很系统地遵守那些没有被言明的语法规则,即使是没有受过教育的人、那些说话比较“俗”、说方言的人,也是如此。大城市里黑人区的“黑人英语”,其语法规则的合理性一点不少于英国女王的英语。认为一种语言比另一种语言“高级”,完全是偏见。例如,双否定“不要有人不对我干这种事”)的用法在法语里是很适当的,在英语里就是土话。但在这两种语言里,人们都是同样遵守语法规则的。

    第二,如果这些语法规则是像学习词汇那样通过模仿得到的,那么,为什么那些4岁孩子明明说“went”说得很准确,却会忽然改口说“goed”(“went”是“go”的过去时的正确形式,“goed”是小孩根据“动词后面加‘ed’就是过去时”这一规则(并不适用于所有动词)自己造出来的词。)?真实情况是,虽然我们必须教孩子读和写——在这些能力上我们可没有任何本能——他们在很小的年龄就可以不需我们帮助地学会说话。没有一个父亲或母亲会说“goed”,

    但是几乎所有孩子在儿童期某一时刻都会这么说。没有一个父亲或母亲会给孩子解释说:“杯子”这个词可以用来指所有杯状物体,而不是单指这一个特别的杯子、不是指杯子把手、不是指造杯子所用的材料、不是指用手指杯子这一动作、不是指抽象的“杯子状”,也不是指杯子的大小和温度。一个电脑如果要学会语言,必须要有一个程序,很费劲地把这些愚蠢的错误含义给过滤掉。而儿童天生就有事先编好的“程序”——本能,天生就知道哪些用法可能合适而哪些不合适。

    但是在语言本能方面,最令人吃惊的证据来自一系列在自然条件下进行的实验:让儿童给一些没有语法规则的“语言”加规则。最著名的一个例子,是德里克•比克顿(Derek Bickerton,语言学家)所作的一项研究。19世纪一组被带到夏威夷的外国劳工发明了一种不伦不类的语言——一些字和词被他们用来在他们内部交流时使用。与其他类似的混杂语言一样,这种语言缺少系统的语法规则,在表达上特别繁复,表达能力却又特别有限。但是,所有这一切在他们的孩子那一代——这些孩子们在幼年学习了这种语言——就改变了。这种混杂语言有了转调规则、字词顺序以及语法规则,成为了一种有效又有表达力的语言——一种新方言。简而言之,正如比克顿总结的,混杂语言只有在被一代孩子学过之后才能变成新方言,因为孩子具有促成这种改变的本能。

    比克顿的假说从手语那里得到了极大的支持。有这样一个例子:在尼加拉瓜,为聋儿而设的专门学校是从80年代才开始出现的。这导致了一种全新的语言的诞生。这些学校教孩子们“读”嘴形,但很不成功,不过在操场上一起玩耍的孩子们把自己在家里所用的手势凑到一起,形成了一种粗糙的混杂语言。几年之内,当更小的孩子入了学,学了这种混杂语言之后,它就被改造成了一种真正的手势语言,与一般语言一样有语法,有其复杂性、实用性和高效性。在这个例子里,又是孩子造就了语言。这个事实好像在说,儿童进入成人期之后,语言的本能就被关闭了。这就能够解释为什么成年人想学习一种新语言,或是新的口音都很困难。因为我们不再拥有语言的本能。这也能够解释为什么即使对孩子,在课堂上学法语也比到法国旅游的时候学法语难:语言的本能是在听到的语言上起作用,而不是在记住的语法规则上起作用。一个敏感期,在它之内有什么东西可以被学会,在它之外则不行,这是动物本能的明显特征。例如,苍头燕雀必须在一定年龄之内常听它自己这个物种的歌,才能够学到标准唱法。同样原理对人类也适用,则是因了一个女孩的真实而残酷的故事而被揭示的。这个女孩名叫吉妮(Genie),在洛杉矶的一个公寓里被发现的时候13岁。她从出生开始就一直被关在一个家具极少的小房间里,几乎从来没有与其他人接触过。她只会两个词:“别这样”和“不要了”。从这样的“地狱”里被解救出来之后,她很快就拥有了很大的词汇量,但是她始终没有学会语法——她已经过了学习敏感期,语言本能已经没有了。

    但是,再坏的理论也得费好大力气才能把它“枪毙”掉。语言是一种能够改变大脑的文化形式(而不是反过来)这种说法,就是长时间不死的这么个东西。尽管有一些历史上人们最熟悉的例子是支持这个说法的,但后来发现这些例子净是假的。比如说,有一个印第安部落,语言里没有时间这个词,因此这个部落的人脑子里也没有时间观念。即便如此,语言是大脑里突触形成的原因而不是结果这个说法,却在许多社会科学分支里继续流传。其实这种说法的荒谬是显然的。比如说,只有德语里有一个词:Schadenfreude,意思是把自己的欢乐建立在别人的痛苦上,但这并不意味着其他国家的人们就不懂这是一个什么概念,尽管他们的语言里没有一个专门的词。

    关于语言本能的更多证据来自多个方面,其中一个就是对儿童在他们出生后的第二年里如何发展语言能力的详细观察。不管大人直接对这些孩子说了多少话,不管是否有人教过这些孩子怎样用词,儿童语言能力的发展都要以特定的方式经过特定的阶段。对孪生子的研究也说明,语言发育早还是晚,也是有很强遗传性的。但是对于大多数人来说,语言本能的最有说服力的证据是来自于实在的科学:神经病学和遗传学。有中风患者和真实的基因做证据,反对派也不好怎么争辩。大脑里有一部分有系统地被用来进行语言处理(在大多数人的大脑里是在左半脑),即使是用手势“说话”的聋子,也是如此,尽管手语也需要用到右半脑。

    如果这一部分大脑的其中一小部分被损坏了,结果就是我们所说的“布鲁卡失语症”,即丧失使用和理解语法的能力,除非是最简单的语法。但是这些人仍然能够理解语言的含义。比如说,布鲁卡失语症患者可以很容易地回答诸如“你能用锤子切东西吗?”这样的问题,但是对于“狮子被老虎杀了,它们俩谁死了?”这样的问题,患者就很难答上来。第二个问题要求听者知道字词97顺序方面的语法规则,而这恰好是被损坏的那一部分大脑的功能。大脑的另一区域,威尔尼克区(Wernicke),如果被损坏则会出现完全相反的症状。这样的患者能够说出一大串语法结构异常丰富却完全没有意义的话。布鲁卡区(Broca)的功能看起来像是制造话语,而威尔尼克区则是在告诉布鲁卡区应该制造什么样的话语。这不是故事的全部,因为还有一些其他大脑区域也参与了语言的加工处理,比较明显的是中间岛区(insula)(这可能是阅读困难症患者大脑里出问题的地方)。

    有两种遗传的情况可以影响语目能力。一种是威廉姆斯症(Williams Syndrome),这是由第十—’号染色体上的一个基因引起的。患这种病的儿童智力水平一般很低,但是他们说起话来却既生动又内容丰富,而且健谈成癖。他们可以一直喋喋不休,用的都是复杂的词、长句子和非常讲究的句子结构。如果让他们举一个动物的例子,他们常常会选一个奇怪的动物,比如土豚(食蚁兽的一种),然后说这是猫(或狗)。他们学习语言的能力高于常人,但是代价却是理解能力:他们是智力迟钝的人。我们中的很多人都曾经认为思考就是一’种不发声的语目,但是威廉姆斯症儿童的存在,似乎说明这种想法是不对的。

    另外一个遗传情况有着与威廉姆斯症相反的症状:它使人的语言能力降低,却不明显影响其他方面的智力,至少,它对智力的其他方面没有什么系统性的影响。它被称为语言特有损害,在一场激烈的科学争论中占有中心地位。争论双方是新兴的进化心理学与旧的社会科学,争论在于是用遗传来解释行为还是用环境解释行为。处在争论中的基因,就在七号染色体上。

    这个基因是否存在,不是争论的内容。对孪生子的仔细研究,明白无误地指出语言特有损害具有极强的遗传性。这种病与出生时的神经损害没有关系,与成长过程中接触语言比较少也没有关系,也不是由于智力低下造成的。虽然各种医学检查对于这种病的定义并不完全一致,但有一些检查发现这种病的遗传性接近百分之百。也就是说,同卵双生的两个孩子,比起异卵双生的两个孩子,都有这种病的机会要大一倍。

    这个基因是在七号染色体上,这一点大家也都没有多少怀疑。1997年,牛津大学的一组科学家发现了七号染色体长臂上的一个基因标识,这个标识的一种形式总是与语言特有损害同时出现。

    这个证据虽然只是从英国的一个大家族里得到的,却很强很明确。

    那么为什么争论呢?争论的焦点是,语言特有损害到底是什么。对有些人来说,它是大脑整体的病变,影响的是语言产生中多方面的功能,主要是影响到话语从嘴里表达出来和耳朵听话语的能力。根据这个理论,病人在语言方面遇到的困难,是从这两个方面延伸出来的。对于另外一些人来说,这个理论纯属误导。当然,在很多病人身上的确存在听力与发声方面的问题,但是另外还存在一些更能引人好奇的东西,那就是这些病人真的有理解方面的问题,而这一问题与听力与发声方面的缺陷是无关的。争论双方都能同意的一件事,就是媒体把这个基因炒成是一个“语法基因”,过于简单,太不理性,是很让媒体丢面子的事。

    故事是围绕着一个英国大家庭展开的,我们称它为K家庭吧。这个家庭现有三代。一个患有语言特有损害症的妇女与一个正常男子结了婚,生下四女一男,除了一个女儿之外,所有孩子都患有语言特有损害症。这些孩子又结婚生子,在他们的总共24个孩子里,有十个有同样症状。这个家庭里的人跟心理学家们都很熟了。其他科学家们则用一系列新的检查来“争夺”他们。是他们的血液把牛津的科学家们引到了七号染色体的基因上面。牛津的这个小组是与伦敦的儿童健康研究院合作的,这两处的科学家大都是“综合病变”论的持有者,他们认为K家庭的成员表现出来的语法能力缺陷是源于他们在听、说方面的问题。他们的主要反对者和“语法病变”理论的领头人,是加拿大语言学家默娜•高布尼克(Myrna Gopnik)。

    1990年,高布尼克第一次提出,K家庭的人以及其他有相似病症的人,在理解英文的基本语法规则方面有问题。并不是说他们无法理解语法规则,而是他们必须有意识地、专心地去学,才能学会这些规则,而不是本能地把这些规则内化。举一个例子。如果高布尼克给一个人看一幅漫画,画上是一个想象出来的生物,还有这样的字:“这是一个wug”那么如果高布尼克给这个人看一99幅画有两个这种“生物”的漫画,边上写着:“这是……”,那么大多数人都可以在眨眼的工夫里就回答说:“wugs”(wug是瞎编的一个词,加上s即成复数形式)。但是有语言特有损害症的人大多回答不出,即使能够回答上来,也是在长时间考虑之后。英文里的复数规则是在大多数词后面加S,而他们好像不知道这个规则。但这并不妨碍患语言特有损害症的人掌握大多数词的复数形式,只是一旦遇到一个他们没有见过的新词他们就被绊倒了。而且,他们还会犯这样的错误,即在那些我们正常人不会加s的词后面加s,比如说“saess”。高布尼克推断到,这些病人把每一个词的复数形式都当做一个单独的词存在记忆里,就像我们储存那些单数的词一样。他们没有储存单数变复数的规则。

    问题当然不仅仅是在复数方面。过去时、被动语态、一些字词顺序的规则、后缀、字词组合的规则,以及所有我们下意识地就知道的英文语法规则,对于患语言特有损害症的人来说,都很困难。当高布尼克研究了那个英国的家庭,把这些结果第一次发表出来的时候,她立刻就遭到了猛烈的攻击。有一个批评家说,如果把症状归结为语言处理系统的问题,而不仅仅是语法规则的问题,是远为合理的。这是因为在说英语的人里,类似复数与过去时这样的语法形式对有口语表达障碍的人特别困难。另外两个批评家说,高布尼克是在误导读者,因为她“忘记”提及一些K家庭成员有先天性的口语表达障碍,这种障碍使他们在单个的词、音素、词汇量、语义、句法方面都有问题。他们在理解其他句法结构的时候也有问题,例如可逆被动式、跟在名词后面的修饰词、比较从句、内藏形式,等等。

    这些批评颇有一丝争夺地盘的味道。K家庭不是高布尼克的发现,她怎么敢对他们做出与以往完全不同的结论?其实,在那些对她的批评之中,起码有一部分实际上是支持了她的观点,这就是K家庭的症状在所有句法规则上都表现了出来。说语法上的困难来自口语表达问题是因为口语表达问题与语法困难是同时出现的,这就是循环论证了。

    高布尼克不是一个轻言放弃的人。她把自己的研究扩展到希腊和日本,在那里做了一些设计独到的实验,目的是要找到与K家庭同样的现象。例如,在希腊,“likos”这个词是狼的意思,“likanthropos”是狼人的意思,而“lik”,狼这个词的词根,却从来不会单独出现。但是大多数说希腊语的人都很自然地就知道,如果他们想把狼这个词的词根与一个以元音开头的词(比如说,“anthropos”)组合起来,他们应该把“likos”里的“os”去掉;而如果是要把狼这个词的词根与一个以辅音开头的词组合起来,他们需要把“likos”里的“s”去掉。听起来这好像是个复杂的规则,但正如高布尼克指出的,即使是只说英语的人也能够一下就看到这个规则的熟悉之处,我们在造新词的时候都是遵守这个规则的,比如说,“technophobia”(technophobia,对于高科技有恐惧的人。这个词是把“technology”(技术)里的“logy”去掉,与“phobia”(极度恐惧)组合成的。)。

    患有语言特有损害症的希腊人,不能掌握这个规则。他们可以学习一个词,比如“likophobia”和“likanthropos”,但是他们在认识这种词的复杂结构方面很差,不能认识到这种词是由不同的词根和后缀组成的。结果,为了补救这一缺陷,他们需要有一个比正常人大得多的词汇量。高布尼克说:“你得把这些人想象成没有母语的人。”他们学习自己的母语就像我们成年人学习一门外语一样费劲,需要有意识地吸收词汇和规则。

    高布尼克承认有些语言特有损害症患者在不需要语言的测试中也表现出低智商,但是另一方面,有些患者的智商高于平均水平。有一对异卵双生的孩子,有语言特有损害症的那个在不需要语言的方面智商高于没病的那个。高布尼克也承认,多数语言特有损害症患者在听和说方面都有问题,但她强调,并不是所有患者都有这些问题,听、说方面的问题与语法规则方面问题的巧合不能说明什么。比如,语言特有损害症在学“ball”和“bell”的时候没有困难,但他们想说“fall”的时候却常常说成“fell”。这是因为“fall”和“fell”之间的区别是语法上的,不是词汇上的(fell是fall(掉下、摔倒)的过去时,而ball(球)和bell(铃铛)是两个不同的词。)。还有,他们在区别押韵的词的时候没有问题。因此,当一个高布尼克的反对者说K家庭的人说话外人都听不懂的时候,高布尼克火冒三丈。她跟K家庭的人一起度过了很多个小时,一起聊天,吃比萨饼,一起参加家庭聚会,她说他们说话她完全听得懂。为了证明听、说能力的缺陷与语言特有损害症无关,她还设计了书写测试。例如,考虑以下两句话:“他上周得了第一名,他很高兴”,“他得了第一名,他上周很高兴”。大多数人立刻就知道第一句话是对的,而第二句话语法不对。语言特有损害症患者却认为两句话都对。很难想象这个问题跟听、说能力有什么关系。

    尽管如此,听、说能力论的那些理论家并没有放弃。最近,他们证明语言特有损害症患者在“声音屏蔽”方面有问题,也就是说,当一个纯音之前或之后出现一些噪音的时候,他们无法听到这个纯音,除非这个纯音的音量比正常人所需音量高45分贝。换句话说,语言特有损害症患者无法像正常人那样“挑出”一串大声说话的声音中那些细微的声音。那么也许他们会漏掉一些轻声说的词,比如说:“去了”中的“了”。

    但是,这个证据与其说是支持了听、说问题是语言特有损害症(包括语法问题)的根源,不如说它支持的是一个更有意思的进化方面的理论:主管听、说方面能力的大脑区域与语法区域是相邻的,在语言特有损害症中两者都有损坏。语言特有损害症是由七号染色体上的一个基因的一种特殊形式造成的,在怀孕晚期这个基因造成了大脑的损伤。核磁共振成像技术已经使我们确认了大脑损伤的存在与大致位置。一点也不奇怪,损伤发生在专管语言处理与口头表达的两个区域——布鲁卡区和威尔尼克区——之一。

    猴脑中有两个区域与人脑中的这两个区域完全对应。布鲁卡对应区是用来控制猴子脸部、喉部、嘴和舌头的肌肉运动的,威尔尼克对应区是用来识别一串声音、识别其他猴子的叫声的。这些正是语言特有损害症患者常有的语言之外的问题:控制脸部肌肉、识别声音。换一句话说,当人类的祖先第一次进化出语言本能的时候,它是从发声与声音处理的区域发展出来的。发声与声音处理的区域仍然存在,与脸部肌肉和耳朵都有连接,但是语言本能的区域在这之上发展起来,形成了一种内在能力,可以把自己这个物种其他成员所用的语法规则加在由声音而产生的词汇上。这样,尽管其他灵长类动物都不能学会语言一为此,我们得感谢那么多勤奋、有时容易上当又一厢情愿的训练员,是他们试验了所有可能的办法,才终于让我们知道黑猩猩和大猩猩是学不会语言的——语言却是与发声与声音处理有密切联系的。(但是,也并不是密切得不可分。聋人脑中,语言区的输入信号与输出信号分别给了眼睛和手。)因此,大脑的那一部分因遗传而造成的损伤,就会影响语法、口语和听力三个方面。

    这是对于威廉•詹姆斯在19世纪提出的假说的最好证明。他的假说认为,人类复杂行为的形成是因为在人类祖先的本能之上加了新的本能,而不是以学习代替本能。詹姆斯的理论在80年代晚期被一伙自称为进化心理学家的人给复活了。他们当中著名的有人类学家约翰•图拜(John Tooby)、心理学家里达•科斯米兹(Leda Cosmides)和心理语言学家史蒂文•频克。大致归纳一下他们的论点,应该是这样的:20世纪社会科学的主要目的是寻找社会环境影响我们行为的途径,我们可以把这个问题大头朝下反过来,寻找我们的内在社会本能是怎样影响社会环境的。这样,

    所有人高兴的时候都笑,焦虑的时候都皱眉,所有文化背景下的男性都发现女性身上代表年轻的特征有吸引力,这些也许都是本能的表现,而不是文化的表现。或者,浪漫爱情与宗教信仰在所有文化里的普遍性也许暗示着它们是受本能的影响,而不是传统的影响。图拜和科斯米兹提出一个假说,认为文化是个人心理的产物,个人心理不是文化的产物。还有,把先天与后天对立起来也是一个巨大的错误,因为不管学习什么,都要求一个人有内在的学习能力,学到什么是由内在因素限定的。例如,教一只猴子(或人)害怕蛇比教它害怕花容易得多,但是你还是得教它才能学会怕蛇。怕蛇是一种需要学习的本能。

    进化心理学里的“进化”二字,并不是指人们对于世代延续过程中大脑变化的兴趣,也不是指对于自然选择本身的兴趣。虽然这两者都很有意思,但它们还无法用现代手段去研究——这两者都需要非常缓慢的过程。在这里,“进化”二字指的是达尔文的理论框架的第三点:“适应”的概念。复杂的生物体内器官可以被“逆向设计”,以发现它们是被设计出来做什么用的。用同样的方法我们也可以研究复杂机器的功能。史蒂文•频克特别喜欢从兜里掏出一个用来除橄榄核的小玩意,来解释逆向设计的过程。里达•科斯米兹则倾向于用一把瑞士军刀来解释同样一个过程。在这两种情况下,除非用一件物品的功能来描述它,否则它就是没有意义的,比如说,这个刀刃是干什么的?如果在描述照相机工作原理的时候不提到它是用来拍摄图像的这样一个事实,那就是没有意义的。同样地,描述人的(或动物的)眼睛却又不提它是记录图像的,那么这就是无意义的。

    频克和科斯米兹都认为同样道理也适用于对人脑的描述。它的不同区域就像是一把瑞士军刀的不同刀刃,极可能是为了特殊功能才出现的。另外一种解释则认为,人脑的复杂性是随机的现象,人脑不同区域的不同功能只是从复杂性的物理原理中掉出来的副产品,得到这些不同功能只是因为我们很幸运。这个说法到现在还被乔姆斯基欣赏,虽然它与一切证据都矛盾。很简单,没有任何证据支持这么一个假说:你把一个由许多微处理机组成的网络做得越复杂,它所能得到的功能就越多。事实上,研究神经网络时常用的“连接学派”方法,对这个假说进行了大量探讨,这是因为这个学派被“大脑是神经元和突触组成的多用途机器”这样一个说法“误导”。结果却发现这个假说站不住脚。要想解决事先存在的问题,需要事先设计好的功能。

    这里有一个历史对我们的嘲弄。“自然界的设计”这样一个概念有一度是反对进化论的最有力论据。事实上,在19世纪上半叶,就是“自然界的设计”这样的观点阻挡住了进化论的被接受。它最出色的表达者威廉•佩利(William Paley,18世纪的神父)有过一段著名的话:如果你在地上发现一块石头,你对于它是怎么到那里的很可能会毫无兴趣。但是如果你在地上发现一块表,你不想承认也得承认,在什么地方肯定有一个钟表匠。因此,生物体精巧、功能完美的结构就是上帝存在的证据。而达尔文却天才地把同一个论据拿来支持反面观点,反驳佩利。用理查德•道金斯的话说,一个名叫自然选择的瞎眼的钟表匠,从生物体上自然出现的差异出发,一步一步地下功夫,经过几百万、上千万年,经过几百万、上千万个生物体,可以与上帝一样做到让生物体用复杂的方法来适应生存环境。达尔文的假说被证据支持得如此之好,现在,用复杂的方法来适应生存环境已经成了自然选择的最强证据。

    我们所有人都有的语言本能就是这样一个适应生存环境的复杂方法,它的优美设计使得个体之间能够清楚地交流复杂的信息。很容易就可以想见,对于在非洲平原上的我们的祖先来说,能够用其他物种都不会的复杂形式共享准确、详细的信息是多么重要。“进那个峡谷,走很短的一段,然后在水塘前那棵树那里向左拐,你会发现我们刚刚杀死的那只长颈鹿的尸体。要躲开树右边正在结果的那丛灌木,我们看见一只狮子进去了。”这样的两句话,对于听者的生存具有很大的价值。这等于是在自然选择这个“抽奖”活动里的两张奖券。但是如果不懂语法,不懂很多语法,还是听不懂。

    支持“语法是内在的”这一理论的证据多种多样。也有一些证据表明,七号染色体上的一个基因在发育中的胚胎大脑构建语言本能的过程中起了作用。这些证据很可信,但是我们仍然不知道这个基因的作用有多大。不过,大多数社会科学方面的专家仍然强烈地拒绝接受这样一个想法,即有些基因的主要用途是使人在发育过程中得到语法本能。从他们关于七号染色体上这个基因的争论就可以清楚地看出,不管有多少证据,这些社会科学家们仍然争辩说,这个基因的主要作用是使得大脑有理解语言的能力,而它在语言“本能”方面的作用不过是个副作用。在一个世纪里占主导地位的学说都是本能只属于动物,人类没有本能,在这种情况下对于语言本能的拒绝就不足为奇了。其实,如果你想一想詹姆斯的观点,即有些本能是要靠学习与接受外界刺激才能建立起来,那么这个学说就要垮台了。

    我们在这一章里跟随了进化心理学的观点,即试图用逆向设计的方法去了解人类行为是为了解决什么问题。进化心理学是一门很新却很成功的学科,它给许多领域里对于人类行为的研究都带来了威力巨大的新见解。在六号染色体那一章里谈到的行为遗传学,也是想要达到相同的目的。但是,进化心理学与行为遗传学的角度如此不同,它们是冲突的。问题是这样的:行为遗传学寻找的是个体之间的差异,并希望把这些差异与基因联系起来。进化心理学寻找的是共同的人类行为——人类行为的普遍性,在我们每个人身上都能发现的特征——并且试图了解这些行为是为什么和怎样成为了部分是本能的行为。因此,它假设个体之间没有区别,起码对于特别重要的行为是如此。这是因为自然选择的任务就是磨掉个体的差异。如果一个基因的一种形式比其他形式好得多,那么,好的这种形式很快就会成为普遍的形式,而差的那些形式就被淘汰了。因此,进化心理学得出这样一个结论:如果行为遗传学家发现哪个基因有几种不同的形式,那么这个基因肯定不会很重要,只能是个起次要作用的。行为遗传学家则反驳说,到现在为止所有被研究过的人类基因都有一种以上的形式,所以,进化心理学的论断肯定有什么地方是错的。

    也许我们在实践中会发现这两个学科的矛盾是被放大了。一个是研究具有普遍性的、常见的、人类特有的特征的遗传学,另一个是研究个体差异的遗传学。两者都有一定的真理在里面。所有的人都有语言本能,所有的猴子都没有,虽然这种本能在不同的人体内不是发展得同样好的。患有语言特有损害症的人,他们的语言能力仍然比瓦殊、扣扣、尼姆(大猩猩或黑猩猩的名字)或任何久经训练的黑猩猩和大猩猩强得多。

    行为遗传学和进化心理学得出的结论对于很多不从事科学研究的人是难以消化的。这些人用一个表面上显得很有道理的说法来表达他们感到的不可思议。一个基因,一串DNA“字母”,怎么就能导致一种行为?在一个蛋白质的配方与一种能够学习英文里过去时的能力之间,有什么我们能够想象的机制把它们联系起来?我承认,初看上去这两者之间确有一条鸿沟,说这两者是有联系的好像需要的是信心而不是理性。但是,其实并不需要如此,因为行为的遗传学从本质上来说与胚胎发育的遗传学并无区别。假设大脑里每一个区域都是通过参考发育过程中在胚胎大脑里建立的一系列化学梯度才得以发育为成年的形式,也就是说,化学梯度形成了给神经元的地图。那些化学梯度本身可以是遗传机制的产物。有些基因和蛋白质能够准确地知道它们在胚胎里的位置,这虽然难以想象却无疑是存在的。到描述第十二号染色体的时候我会讲到,这样的基因是现代遗传学研究最激动人心的发现之一。行为基因的概念并不比发育基因的概念更怪异,两者都让人费思量,但是自然从来就不会因为人类对她不理解而改变自己的方式。

    X和Y染色体冲突

    Xq28——多谢你的基因,妈妈。 ——90年代中期同性恋书店里T恤衫上的字样

    往语言学拐一下,我们就会直面进化心理学所提出的骇人理论。也许它让你有了一种不安的感觉,感到有些其他的东西在控制我们的生命,感到我们自己的能力,语言能力和心理能力,都在某一程度上是由本能所决定,而不像你以前所骄傲地认为的那样,是由你自己的意志决定的。如果是这样,那么事情马上就要变得更糟了。这一章要讲的故事也许在整个遗传学史上是最出人意料的一个。我们已经习惯了把基因想象成是配方,它们在消极地等待着整个机体的“集体决策”,以确定要不要开始转录:基因是身体的仆人。这里我们要遇到另一种现实:身体是基因的受害人、玩具,是基因的载体和战场,为的是基因自己的雄心壮志。

    比七号染色体小的那些染色体中,最大的是X染色体。X染色体是个与众不同的染色体,是不合群的家伙。跟它配对的染色体,也就是说,跟它在序列上有亲和性的染色体,不是像其他染色体那样是跟它一模一样的一条,而是Y染色体,极小,而且几乎没有活性,就像是遗传上的“马后炮”。起码在雄性哺乳动物和果蝇里,以及在雌性的蝴蝶和鸟类里是这样的。在雌性哺乳动物和雄性鸟类里,则有两条X染色体,但是它们仍然有点怪。在身体内的每一个细胞里,两条X染色体不是等量地表达自己携带的遗传信息,而是有随机选择的一条把自己卷成一个小小的卷,没有活性,被称为巴尔小体(Barrbody)。

    X和Y染色体被称为性染色体,原因很明显,因为它们几乎完全准确地确定一个人的性别。每个人都从其母亲那里得到一条X染色体,但是如果你从父亲那里拿到的是一条Y染色体,那么你就是个男的;如果你从父亲那里遗传到一条X染色体,你就是女的。有个别的例外,有人虽然有一条X染色体和一条Y染色体,但是表面看上去是女的。但是这些是特殊的例子,它们的特殊正是为普遍的规则提供了证据。因为在这些人体内,Y染色体上最重要的男性化基因要么缺失要么受了损害。

    大多数人都知道这个事实,在学校里学不了多少生物学就会接触到X和Y染色体。大多数人也知道色盲、血友病以及其他一些疾病在男性里更为常见,因为这些致病基因在X染色体上。因为男性没有一条“富余”的X染色体,他们比起女性来更易罹患由隐性基因导致的疾病。正如一位生物学家说的,男性体内的X染色体是在没有副驾驶的情况下独自飞行。但是,有些关于X和Y染色体的事情是大多数人不知道的,有些事情非常奇怪,让人不安,它们动摇了整个生物学的基础。

    在所有科学研究方面的出版物中,《皇家学会哲学通讯》是最严肃最清醒的之一。在它里面,你很少会读到这样的文字:“这样,哺乳动物体内的Y染色体很可能参与的是一场被敌人在‘枪支’上占了上风的战斗。一种合乎逻辑的结果,是Y染色体应该逃跑、躲藏起来,把所有功能并非必需的序列都扔掉。”“战斗”、“在‘枪支’上占了上风”、“敌人”、“逃跑”?我们可不认为这些是DNA应当做的事。但是同样的语言,比这稍微多一点术语,在另一篇关于Y染色体的科研论文里也出现了。那篇文章的题目是《内在的敌人:基因组间的冲突,基因位点间竞争进化(ICE),以及物种内部的红色皇后》。文章的一部分是这样的:“Y染色体与其他染色体基因位点间进化中持续不断的竞争,使得Y染色体上基因的质量由于那些有一定负面作用的突变的‘搭便车’而不断下降。Y的衰落是由于遗传上的‘搭便车’现象,但是基因位点间在进化中的竞争才是持续地推动雌雄两性之间相互对抗共同进化的催化剂。”就算以上这段话对你来说就像“爪哇国”的文字一般,有些词还是能够吸引你的注意,比如“对抗”。最近还有一本教科书,也是关于同样的题材的。书的名字很简单,叫做:《进化,40亿年的战争》。这都是怎么回事呢?

    在我们的过去,有某一时刻,我们的祖先从像两栖类动物那样让环境温度决定性别,改成了用遗传决定性别。改变的原因,也许是因为这样每一种性别的个体都可以从卵子受精就开始为自己的特殊角色而接受训练。在我们人类里,决定性别的基因使我们成为男性,如果没有这些基因就是女性,在鸟类里却正好相反。这个基因很快就在它周围吸引了一些对于男性有好处的其他基因,比如说,能够使肌肉发达的基因,或者是造成暴力倾向的基因。但是,因为这些基因是女性身体不想要的——不能浪费本来可用于抚养后代所需的能量——这些次要的基因就变得对一个性别有利而对另一性别有害。这样的基因就被称做性别对抗基因。

    当另一个基因突变抑制了两条性染色体之间进行正常的遗传物质交换的时候,难题解决了。现在,性别对抗基因就可以分道扬镳了。一个基因在Y染色体上的形式可以利用钙来造出鹿角,而它在X染色体上的形式却可以用钙造出乳汁。这样,一对中等大小的染色体,本来是各种各样“正常”基因的所在地,就被性别决定这个过程给“劫持”了,最终成为了性染色体,各自“吸引”了不同的基因。在Y染色体上积累了对雄性有好处对雌性却常有坏处的基因,在X染色体上则积累了对雌性有好处而对雄性有坏处的基因。例如,有一个新近发现的基因叫做DAX,是在X染色体上的。有极少的一些人生来是有一条X染色体一条Y染色体的,但是X染色体上却有两份DAX基因。结果就是,虽然这些人从基因角度说是男性,他们却发育成为正常的女性。我们对其原因的理解,是DAX和SRY——Y染色体上让男性成为男性的基因——是互相对抗的。一份SRY会打败一份DAX,但是两份DAX就要打败一份SRY了。

    这种基因之间互相对抗的升级是很危险的事。如果打个比方,我们可以觉察到,这两条染色体不再把对方的利益放在眼里了,就更不要提整个机体的利益了。或者更确切地说,一个基因在X染色体上的传播对X染色体可以是好事,但同时对Y染色体又是坏事;反过来也有可能。

    举一个例子吧。假设有一个基因在X染色体上出现了,它携带的配方是一种致死的毒药,只杀死带有Y染色体的精子。一个带有这个基因的男性不会比其他男性有更少的子嗣,但是他只可能有女儿,不可能有儿子。他的所有女儿都携带有这个基因,而如果他有儿子,他们是不会携带有这个基因的。于是,在他的下一代里这个基因就多了一倍。这样一个基因会传播得很迅速。如果这样一个基因停止传播,惟一的原因就是它已“杀死”了太多的男性,使物种本身的存在都受到了威胁——男性变得很稀罕。

    这是异想天开吗?根本不是。在一种学名叫做Acreaencedon的蝴蝶里,这种情况就发生了。结果就是这种蝴蝶的97%都是雌性。这只是我们所知的这种形式的进化冲突中的一例,我们称为“性染色体的推动力”。大多数已知的类似事例只限于昆虫,但是这只是因为科学家们对昆虫研究得比较详细。我在前文中引用过的那个奇怪的词,“冲突”,现在开始更加有意义了。有一个简单的统计资料:因为雌性有两条X染色体,雄性有一条X一条Y,所以在所有性染色体中有四分之三是X,只有四分之一是Y。换句话说,一条X染色体三分之二的时间是在雌性体内度过的,只有三分之一的时间是在雄性体内度过。这样,X染色体进化出攻击Y染色体能力的可能性,是Y染色体进化出攻击X染色体能力的可能性的三倍。Y染色体上的任何基因都可能受到来自一个新进化出来的X基因的攻击。结果就是Y染色体扔掉了尽可能多的基因,把剩下的“关闭”,以“跑得远远地藏起来”,剑桥大学的威廉?阿莫斯(WilliamAmos)用“科技术语”这样说。

    人类的Y染色体在关掉它的大多数基因方面做得如此之有效,使得现在的Y染色体上绝大多数都是没有编码功能的DNA,什么功能也没有,但是这样它们就不给X染色体以任何可以用来瞄准的目标。有一段短短的序列看上去像是最近才从X染色体上“溜”过来的,这是所谓的“假常染色体”区域。除此之外还有一个极为重要的基因,就是我们前面提到过的SRY基因。这个基因启动一系列的事件,导致胚胎雄性化。一个单个基因能够有这样的能力是很少见的。尽管它的作用只是类似于拨一个开关,但很多事件紧随其后。生殖器官发育得像阴茎与睾丸,身体的形状与组成变得不再像女性(在我们这个物种里所有个体一开始都被当成女性对待,但在鸟类和蝴蝶就不是这样),各种激素也开始在大脑里起作用。几年以前,《科学》杂志上曾刊登过一幅搞笑Y染色体图,声称已经找到了那些典型的男性行为的基因,这些行为包括不停地拿遥控器换电视频道、记忆和复述笑话的能力、对报纸上体育版的兴趣、沉迷于包含摧毁性行为和有人死的情节的电影,以及不会通过电话表达感情。这个搞笑图之所以好笑,是因为我们认出了它提到的这些行为都是典型的男性行为。这个笑话强化了“这些行为是由基因决定的”这种说法,而远不是在嘲笑这种说法。这个图惟一错的地方在于,并不是每一种男性行为来自于一个特殊的基因,而是所有这些行为来自于因睾丸激素等引起的大脑的普遍雄性化,其结果,就是男性在现代社会里的这些表现。这样,从某种角度来说,很多男性特有的习惯都是SRY基因的产物,因为正是SRY基因启动的一系列事件导致了大脑与身体的男性化。

    SRY基因比较特别。它的序列在不同男性体内惊人地相似:在人体内,它的序列中几乎没有点突变(也就是一个字母的区别)。在这种意义上说,SRY基因是一个没有变化的基因,从大约20万年前人类的最后一个共同祖先到现在,它就没有改变过。但是,我们的SRY基因与黑猩猩的很不同,与大猩猩的也很不同:这个基因在物种与物种之间的差别比一般基因要高十倍。跟其他活跃(也就是说,被表达的)的基因相比,SRY基因是进化最快的基因之一。

    我们怎样解释这个矛盾的现象呢?据威廉?阿莫斯和约翰·哈伍德(John Harwood)说,答案隐藏在被他们称做“有选择地清扫”的那些逃跑与藏匿之中。时不时地会有一个有推动作用的基因出现在X染色体上,依靠着能够辨认出SRY制造出来的蛋白质的能力,来攻击Y染色体。这样,任何很少见的SRY基因的突变形式,如果能够造出一种不能被识别出来的蛋白质,它就立刻有了进化优势。这种突变形式就会取代其他形式在男性体内传播。有推动作用的X染色体使性别比例向女性倾斜,而SRY的突变形式又把这个比例扳回平衡点。结局就是一种新的SRY基因形式存在于所有男性体内,没有个体之间的差别。也许,这样一种突然爆发的进化发生得如此之快,进化的纪录里都没有能够留下它的痕迹。其结果,就是制造出了在物种之间差别很大而在物种之内又几乎没有差别的SRY基因。如果阿莫斯和哈伍德是正确的,那么这样的清扫至少有一次是发生在人类祖先与黑猩猩祖先分开之后(500万~1000万年前),但又是发生在所有现代人类的最后一个共同祖先之前(20万年以前)。

    你也许会觉得有些失望。我在这一章一开始讲到的暴力与冲突变成了分子进化理论的一个细节。不要担心,我还没讲完呢,而且我很快就会把这些分子与真实的、人与人之间的冲突联系起来。

    在研究性别对抗方面,领头的学者是加利福尼亚大学圣塔克鲁斯(SantaCruz)分校的威廉?赖斯(WilliamRice),他做了一系列了不起的实验来阐明自己的观点。让我们回到一个假设中的我们远古的祖先那里,他刚刚得到了一条新的Y染色体,正在关掉那上面的许多基因,以躲避有推动力的X染色体基因。用赖斯的话说,这条新的Y染色体是对男性有利的基因的温床。因为一条Y染色体永远不可能到一个女性体内,它就可以随意地获得那些对女性非常不利的基因,只要这些基因对男性有一点点好处(如果你还认为进化是为了让整个物种得益,你就别再这样想了)。在果蝇里,(在这一点上人类也一样,)雄性射出来的液体是含有精子的内容丰富的“汤”,称做精液。精液里有蛋白质,基因的产物。它们的作用还属于未知,但是赖斯有一个很厉害的想法。在果蝇交配的过程中,这些蛋白质进入雌蝇的血液里,并且转移到其他地方,包括“她”的脑。在那里,它们的功能是降低“她”对交配的兴趣,并提高“她”的排卵率。30年以前我们会把排卵率的提高说成是对物种有利的事情:母蝇到了停止寻找性伴侣的时候了,取而代之的是“她”寻找做巢的位置,公蝇的精液使得母蝇的行为发生了变化。你可以想象,国家地理节目的解说词就是这么说的。现在,这个现象却有了一层邪恶的光环。公蝇是在试图操纵母蝇不要再去与其他公蝇交配,让“她”为了自己多产些卵,“他”这样做是受了那些性别对抗基因的指使。这些基因也许是在Y染色体上,也许是被Y染色体上的基因启动的。母蝇则在自然选择的压力之下对这样的操纵越来越抵触。最后陷入僵局。

    赖斯用了一个匠心独运的实验来验证他的想法。他在29代果蝇中,制止了母蝇抵抗力的发展,这样,他就保留了一支与其他分支不同的母蝇。同时他又通过让公蝇与另一些抵抗力越来越强的母蝇交配,使公蝇制造出越来越有效的精液蛋白质。29代之后,他把公蝇与没有抵抗力的母蝇交配。结果一目了然。公蝇的精液在操纵母蝇的行为方面是如此高效,它已经变成有毒的了,它可以把母蝇杀死。

    现在赖斯相信性别对抗在各种环境之下都是存在的,所留下的线索就是那些飞速进化的基因。例如,在一种带壳的鱼——鲍鱼里面,精子需要用一种名为细胞溶素的蛋白质,在卵子细胞表面由糖蛋白组成的“网”上钻出一些洞来。这种细胞溶素是由一个变化非常快的基因制造的(在我们人体里可能也是如此)。这也许是因为细胞溶素与糖蛋白网之间进行着“军备竞赛”。精子如果能够飞快地进入卵子,这对精子有好处,对卵子则有坏处,因为其他寄生物或是第二个精子也有可能进来。再举一个与人类关系比较大的例子,胎盘是由来自父方的变化飞快的基因控制的。以戴维?黑格为首的现代进化理论家们现在相信,胎盘更有可能是由胚胎里来自父方的基因控制的、寄生在母体内的东西。不顾母体的反对,胎盘试图控制母亲体内的血糖水平以及血压,以利于胚胎的成长。在讲第十五号染色体的章节里我们还会再回到这一点。

    但是,交配行为又是怎么回事呢?传统的观念认为,雄孔雀那繁复的尾巴是用来吸引雌孔雀的设备,而且它是依照着过去的雌孔雀的欣赏标准设计出来的。赖斯的同事布雷特?霍兰(BrettHolland)却有一种不同的解释。他认为雄孔雀的尾巴的确是进化来吸引雌性的,但这是因为雌孔雀对这种吸引方式越来越抵触。雄孔雀实际上是用交配前的展示来代替用力量强迫,而雌性用对于展示的欣赏与否来自己控制交配的频率与时间。这就能够解释出现在两种蜘蛛里的让人吃惊的现象。一种的前腿上长有一束尖刺,与交配有关。在观看雄蜘蛛展示自己前腿的录像时,雌蜘蛛会用自己的行为来表示她是否被这只雄蜘蛛撩拨得动了情。如果我们把录像加工一下,把雄蜘蛛前腿上的尖刺去掉,雌蜘蛛仍然同样有可能觉得雄蜘蛛的展示很“煽情”。但是,在另一种蜘蛛里,雄蜘蛛没有这些尖刺。如果在录像里人工加上尖刺,那么雌蜘蛛“接受”雄蜘蛛要求的机会就被增加了一倍以上。换句话说,在进化过程中雌性渐渐地“反感”了雄性的展示,而不是越来越喜欢。就这样,性别之间的选择是“勾引”基因与“抵制”基因之间的对抗的表达。

    赖斯和霍兰得到了一个让人不安的结论:越是有社会性、越是个体之间交流多的物种,越会受到性别对抗基因的影响。这是因为两性之间的交流给性别对抗基因提供了一个兴盛的场所。在地球上最有社会性、最善于交流的物种,当属人类。这样,一切豁然开朗——为什么人类的两性关系像个雷区一样,为什么男性在什么是来自女性的性骚扰这个问题上有着那么多不同的标准。从进化角度来说,驱动两性关系的不是什么对男性有利或什么对女性有利,而是什么对他们或她们的染色体有利。在过去,能够吸引女性对Y染色体是有好处的,而能够拒绝一个男性的吸引则对X染色体有好处。

    像这样的基因群之间的冲突(Y染色体上的所有基因就是一个基因群),并不只是在“性”方面才有。假设有一个基因的某种形式能够让人更易说谎(这不是一个在现实中很有可能性的假设,但是也许确实有一大批基因可以间接影响一个人是否诚实),这个基因也许会靠着把它的“主人”变成一个成功的诈骗犯而更好地繁殖自己。但是,再假设也许在另一条染色体上有另一个(或一群)基因有一种形式能够提高人辨别谎言的能力,这个基因要想更好地繁殖自己,就得使得它的拥有者避免上那些骗子的当。这两个基因会互相对抗着进化,每一个基因的进化都刺激着另一个的进化,即使这两个基因是被同一个人拥有。它们之间是赖斯和霍兰所说的“位点之间的竞争进化”(ICE)。在过去的300万年间推动人类智力进步的也许正是这样一个竞争过程。以前有一种说法,即人脑的增长会帮助我们的祖先在非洲平原上制造工具和点火,这种说法早就没人感兴趣了。取代它的是大多数进化生物学家都相信的马基亚维里(Machiavelli)16世纪意大利政治家、历史学家和政治理论家。他的理论认为,道德与政治无关,狡猾与欺骗在统治者夺得与保持权力的时候是正当的。——译者注)理论——在操纵别人和抵御操纵这两者的“军备竞赛”中,体积大的脑子是很必要的。赖斯和霍兰写到:“我们称做智力的现象,也许只是基因组之间冲突的副产品。这种冲突,是用语言做武器的进攻与防守基因之间的冲突”。

    原谅我偏题偏到智力上去了,让我们回到性上面吧。遗传学上最引起轰动、最有争议、大家争论得最激烈的发现之一,是1993年迪安·哈默(Dean Hamer)(当代美国生物学家。——译者注)宣布他发现了X染色体上的一个基因对于人的性取向有很强的影响,或者如媒体所说,一个“同性恋”基因。哈默的研究是那个时候发表的几项研究之一,它们都指向同一个结论,即同性恋有其生物学原因——而并非来自环境压力或一个人自己有意识的选择。有些研究工作是由同性恋者自己完成的,例如萨奥克研究院(Salk Institute)(位于美国加利福尼亚州的生物学研究院。——译者注)的西蒙·勒威(Simon LeVay),他们中的一些人急于在公众心目中建立一个在他们自己心目中已经牢牢扎根的概念:同性恋者是生来如此的。他们相信,如果一种生活方式由与生俱来的倾向性决定,而非由人的意志决定,那么它所遭到的偏见就会小一些。这种想法有一些道理。而且,如果同性恋确由先天因素引起,那么家长们也就不会觉得同性恋那么有威胁性了,因为,除非孩子本身已有同性恋倾向,否则孩子崇拜的人物中那些同性恋者就不会使自己的孩子也成为同性恋。事实上,那些保守的、不宽容的人最近开始攻击同性恋的遗传因素方面的证据。1998年7月29日,保守的杨女士(The Conservative Lady Young)在《每日电讯报》上写道:“我们在接受‘同性恋’是天生的这一说法时一定要谨慎,不是因为它不正确,而是因为它为那些给同性恋者争取权利的组织提供了借口。”

    但是,不管有些研究人员多么希望看到某种特定的结果,他们的研究还是客观坚实的。同性恋有高度的遗传性这一点,是无可怀疑的。例如,有一项研究,研究对象中54位有异卵双生兄弟的同性恋者中,他们的兄弟有12位也是同性恋。而研究对象中56位有同卵双生兄弟的同性恋者中,他们的兄弟有29位也是同性恋。不管是同卵还是异卵双生,孪生子的生活环境是一样的,这个结果就说明,一个或一些基因是一个男性成为同性恋者的一半原因。有一打其他的研究都得到了相似的结果。

    迪安·哈默被这个结果迷住了,开始寻找可能的基因。他和他的同事访问了110个家里有男性同性恋者的家庭,并且注意到了一些不寻常的事情。同性恋似乎是在女性中传递下来的。如果一个男人是同性恋,那么最有可能的是,在他的上一代里他父亲不是同性恋,他母亲的兄弟却是。

    这个观察立刻让哈默想到这个基因也许是在X染色体上,因为一个男性只从他的母亲一方得到一套X染色体上的基因。他比较了他研究的那些家庭里中同性恋男性与“正常”男性基因标识的区别,很快发现了一个“可疑”区域:Xq28,位于X染色体长臂的顶端。同性恋的男性中,有75%的人都带有这个基因的一种形式,而“正常”男性中,有75%的人都带有这个基因的另一种形式。从统计学角度说,我们有99%的信心相信这个结果不是巧合。之后,其他结果也证明了这个结果的可靠性,而且还排除了这个区域与女性中的同性恋倾向的关系。

    对于罗伯特?特里弗斯(RobertTrivers)这样的敏感的进化生物学家,同性恋基因有可能在X染色体上这一说法立刻让他有所联想。如果一个基因能够影响性取向,那么有一个问题就是,使人成为同性恋的那种形式很快就应该灭绝掉。但是,同性恋在当代人群里占有可观的比例。或许有4%的男性毫无疑问地是同性恋,还有更少的一些人是双性恋。因为平均来讲,同性恋的男性比“正常”男性更不可能有孩子,那么同性恋的基因就应该从很久以前就在人群中逐渐减少直到消失,除非它带有其他什么好处来弥补这一弱势。特里弗斯论述说,因为一条X染色体存在于女性体内的时间是它存在于男性体内时间的两倍,一个性别对抗的基因如果能够有助于女性的生殖能力,那么它即使对男性的生殖能力有两倍的损害,也仍然能够存留下来。比如说,假设哈默发现的基因决定女性青春期开始时的年龄,甚至是乳房的大小(记住这只是一个假设啊)。这些性质每一个都能够影响女性的生殖能力。在中世纪的时候,大乳房也许意味着更充足的奶水,或是能够嫁到一个有钱的丈夫,于是生下的孩子也就更有可能避免在婴儿期就夭折。就算同一个基因的同一种形式使得儿子觉得男性才有吸引力,因此降低男性后代的生殖能力,但是因为它给女儿带来益处,所以它仍然能够存在下来。

    在过去,同性恋与两性之间冲突的联系只是一个大胆的猜想,直到哈默的基因被发现和被解码。事实上,Xq28与性别取向之间的联系仍然有可能是误导。麦克?贝利(MichaelBailey)最近对于同性恋家族遗传性的研究就没能发现同性恋由母系遗传的倾向。另外一些科学家也没能发现哈默声称的Xq28与同性恋之间的联系。现在看来这种联系也许只存在于哈默研究过的那些家族里。哈默本人也提醒大家,在同性恋基因被真正确定之前,轻易下结论是错误的。

    而且现在又有了一个让事情变得更复杂的因素:另一种完全不同的解释同性恋的理论。现在变得越来越清楚的是,性取向与出生的顺序有关。一个男人,如果有一个或几个哥哥,那么他与那些没有兄弟姐妹、只有姐姐没有哥哥或者在家里是老大的男性相比,就更容易成为同性恋。出生顺序对性取向的影响如此之强,每多一个哥哥,一个人成为同性恋的可能性就增加三分之一。(这仍然是很低的可能性,3%再增加三分之一也只是4%。)这种现象现在已经在英国、荷兰、加拿大和美国都被发现和报道过了,而且在很多研究对象里都发现了。

    对大多数人来讲,他们首先想到的是类似于弗洛伊德理论的想法:在一个兄长很多的家庭里长大,也许兄弟之间的关系使得一个人具有了同性恋的倾向。但是,就像我们常常发现的那样,用弗洛伊德理论作为对事物的第一反应往往是错的。(在旧的弗洛伊德理论中,同性恋被认为是由一个过于保护孩子的母亲和一个有隔膜的父亲造成的,这几乎肯定地是本末倒置了。其实是儿子正在形成的女人气让父亲对儿子有了隔膜,而母亲因为要补偿儿子,就变得保护过度。)回答也许又一次存在于两性之间的对抗中。

    出生顺序对于女性同性恋倾向没有影响,她们在家庭里兄弟姐妹中的排行是随机的。这给了我们一个重要线索。除此之外,一个男人有几个姐姐也与他是否是同性恋无关。在一个已经孕育过男孩子的子宫里被孕育是一件不一般的事情,它会增加一个男人成为同性恋者的可能性。最好的解释与Y染色体上的一套三个活跃的基因有关。它们编码的蛋白质被称为H-Y次要组织相容性抗原。一个与它们相似的基因编码一种名叫抗谬氏激素的蛋白质,这种蛋白质对于人体的男性化有着至关重要的作用:它使得男性胚胎体内的谬氏小管萎缩,而谬氏小管正是子宫和输卵管的前身。这三个H-Y基因的功能是什么,还不确定。它们对于生殖器官的男性化并不是不可或缺的,有睾丸激素与抗谬氏激素就够了。H-Y基因的重要性在现在才开始显现出来。

    这三个基因编码的蛋白质之所以被称为抗原,是因为它们“挑衅”母体的免疫系统产生一种反应。其结果就是母体的免疫系统在母亲孕育下一个男孩的时候更强了。(女婴不会制造H-Y抗原,也就不会引起免疫系统的反应。)雷?布兰查尔德(RayBlanchard)是研究出生顺序对同性恋的作用的人员之一,他论述说,H-Y抗原的任务是把一些器官中的一些基因激活,特别是大脑里的一些基因。事实上,在对于老鼠的研究中人们得到了一些证据说明这个说法是正确的。如果如此,那么母亲体内强壮的免疫系统就会对大脑的男性化起部分抑制作用,但却不会影响生殖器官的男性化。这样的男性就会被其他男性吸引,或者至少是对女性不太动心。有一个实验是让年幼的老鼠对H-Y抗原免疫,与对照组相比,这样的老鼠长大之后在很大程度上不能成功地交配。但急人的是研究人员们在报告里并没有说明不能正常交配的原因是什么。同样的,在果蝇发育过程中的某个关键时期,如果把一个叫做“转化器”的基因给激活,那么雄性果蝇就只会表现出雌性果蝇的性行为。这种变化一旦发生就不可逆转了。

    人不是老鼠也不是果蝇,有足够多的证据表明人脑的性别分化在出生之后还会继续进行。除了个别例子之外,同性恋的男性并不是被禁锢在男性肉体里的女性。他们的大脑至少是被激素部分男性化了的。但是仍然有可能他们在早期的某个关键的敏感时期缺少了一些激素,而这永久性地影响到了一些功能,包括性取向。

    比尔?汉密尔顿(BillHamilton)是最早形成性别对抗这一理论的人,他明白这会多么深远地影响我们对于什么是基因的认识。他后来写道:“现在有了这样一种认识,即基因组并不是为了一个项目——生存,生孩子——而存在的一个资料库再加一个实行计划的团队,就像我以前想象的那样。它开始越来越像一个公司的会议室,是自我中心的人和派系之间权力斗争的舞台”。汉密尔顿对于基因的新的理解开始影响到他对自己的头脑的理解:

    我自己这个有意识的、看上去是不可分割的自我,结果竟与我的想象差别如此之远,我一点也不必因为怜悯自己而感到羞愧。我是被一个脆弱的联盟送到外面去的大使,带着一个四分五裂的帝国里那些心情紧张的统治者们给我的互相矛盾的命令。……当我写下这些字的时候,为了能够写下这些字,我就得装着自己是一个统一体,而在内心深处我知道这样一个统一体是不存在的。我是一个混合体,男性与女性、父辈与子辈、相互争斗的染色体片段,它们之间的冲突是在胡斯曼(Housman,诗人;塞汶河,英国最长的河流)诗中说的塞汶河(River Severn)看到凯尔特人与萨克逊人之前几百万年就形成了。

    基因之间有冲突,基因组是父辈基因与孩子的基因、男性基因与女性基因之间的战场,这样一种说法,是除了少数进化生物学家之外鲜为人知的故事。但它却深深地动摇了生物学的哲学基础。

    第八号染色体 自身利益

    我们是生存机器——糊里糊涂的、被事先编好程序的自动化机器,用来保存那些名叫基因的自私的分子。这是一个仍然让我感到目瞪口呆的事。  ——理查德•道金斯:《自私的基因》

    随着新电器而来的使用手册总是很使人恼火。它们好像永远没有你最需要的那一条信息,弄得你团团转,让你气急败坏,而且它们在从中文被翻译过来(因为在西方国家销售的电器很多是中国制造的)的过程中肯定有些内容被丢掉了。但是它们倒不会添什么东西进去,不会在你正读到要紧之处的时候忽然加五份席勒的《欢乐颂》或是一份半份套马指南。一般来说,它们也不会把一份怎样安装机器的说明重复五次,或把使用说明分成27段,每两段之间再插上好几页不相关的文字,让你连找自己想要的段落都很困难。但是这却描述了人类的视网膜细胞瘤基因。而且,就我们所知,这个基因是一个很典型的人类基因。它有27段有意义的段落,却被26页其他玩意给打断。

    自然母亲在基因组里藏了一个卑污的小秘密。每一个基因都比它所必要的更繁复,它被打断成很多不同的“段落”也叫外显子;在它们之间是长长的随机、无意义的序列(叫做内含子),有些跟这个基因完全无关的有意义的片段在内含子里大量重复。这些重复片段有时候是另外一个完全不同的(不吉利)基因。

    之所以出现这种“文字结构”上的混乱,是因为基因组是自己写自己,而且不断地加减、更改了40亿年。自己写自己的文件有着不同寻常的特性。尤其是它们很容易被别的东西当成寄生地。在这个时候打比方是有点不太沾边,但是,试想一个写使用手册的作家,他每天早晨到了自己的电脑前都会发现他文章里的各个段落都吵闹着要吸引他的注意。那些声音最大的逼着他把自己又重复了五遍,放在下一页里。结果就是,使用手册还是存在的,否则机器就永远没法组装起来了,但是其中充满了那些贪婪的像寄生虫一般的段落,它们因为作家的顺从得到了好处。

    实际上,随着电子邮件的发展,这个比喻已经不再像以前那样不着边际了。假设我发给你一份电子邮件,读起来是这样的:“注意,有一个很厉害的电脑病毒出现了;如果你打开一个标题里有‘橘子酱’的邮件,它会洗掉你的硬盘!请把这个警告转发给所有你能想到的人。”这是我编的。就我所知,到目前为止还没有名叫“橘子酱”的电子邮件在游走。但是我却有效地夺走了你的早晨,让你发出我的这个警告。我的电子邮件就是病毒。

    至此,这本书里的每一章都集中讲述了一个或一组基因,这背后的假设基因是基因组里最重要的东西。别忘了,基因是DNA的片段,是用来编码蛋白质的。但是我们的基因组里97%都不是真正的基因。它们是一大群各种各样的怪东西:有的叫伪基因,有的叫逆转录伪基因,有的叫卫星体、小卫星体、微卫星体、转座子、逆转录转座子。所有这些放在一起被统称“垃圾DNA”,有些时候也被叫做“自私DNA”,这种叫法比较准确。这些东西里有些是一种特殊的基因,但大多数就是一段一段的永远也不会被转录成蛋白质语言的DNA。因为它们的故事很自然地是接在上一章讲过的性别冲突的故事后面,所以,这一章我们就专门讲垃圾DNA。

    碰巧这是一个适合讲述垃圾DNA的地方,因为关于八号染色体我没有什么特别可说的。这可不是在暗示这是一条枯燥乏味的染色体,也不是说它没有几个基因。这只是因为我们在这条染色体上发现的基因中没有一个引起了我这个没有耐心的人的注意。

    (从它的大小来讲,八号染色体比较而言是被忽略了,它是基因图谱中被绘制得最不详细的染色体之一。)在每一条染色体上都有垃圾DNA。好笑的是垃圾DNA是人们在人类基因组里发现的第一个有真正实际的用途、在日常生活里能够用到的东西。它导致了DNA“指纹”检验。

    基因是蛋白质的配方。但是并不是所有蛋白质的配方都是受身体欢迎的。在整个人类基因组里最常见的蛋白质配方是编码一个名叫逆转录酶的蛋白质的基因。逆转录酶基因对于人体来说一点用处也没有。如果在一个受精卵刚刚形成的时候把基因组里逆转录酶基因的每一个拷贝都小心地、魔术般地去掉,这个人有可能更健康、更长寿、更快乐,而不是相反。逆转录酶基因对于一种“寄生虫”来说才是至关重要的。它是艾滋病毒的基因组里非常有用——虽然不是必不可少——的组成部分,它在艾滋病毒侵入并杀死其他生命体的能力中起着重要的作用。相反,对于人体来说,这个基因是个讨厌的、有威胁的东西。但是它却是整个基因组里最常见的基因之一。在人类的染色体上散布着几百甚至上千个拷贝。这是个让人吃惊不小的事实,就像是我们突然发现了汽车的最常见用途是逃离犯罪现场。那么这个基因为什么存在呢?

    从逆转录酶的功能里我们得到了一个线索。它把一个基因的125RNA拷贝翻录成DNA,又把这段DNA“缝”回基因组里去。它是一个基因的回程车票。利用这种方法,艾滋病毒可以把自己基因组的一部分整合到人体的基因组里去,以便更好地把自己隐蔽起来,更好地保存自己和更有效地复制自己。人类基因组里很多逆转录酶基因的拷贝之所以在那里,是因为我们能认得出来的一些

    “逆转录病毒”把它们放在了那里,在遥远的过去或是最近的时期。人类的基因组里含有几千种病毒的几乎完整的基因组,大多数现在已经不再活跃,或者最关键的基因已经缺失了。这些“人体内在的逆转录病毒”占了人类基因组的1.3%。这听起来好像不算多。但是那些“合用的”基因也只占了3%。你要是觉得你是猿猴的后代这一事实打击了你的自信,那你就试着习惯于你也是病毒的后代这个想法吧。

    但是,何不甩掉逆转录酶这个中间人呢?一个病毒的基因组完全可以去掉大部分基因,而只留下逆转录酶基因。这样,一个轻装过的病毒可以用不着那么辛苦地试图通过唾液或趁人性交的时候从一个人跑到另一个人,它就可以留在一个人的体内并且搭他的便车一代一代传下去。这是一个真正的寄生病毒。这种“逆转录转座子”比逆转录病毒还更普遍。最常见的是一串被称做LINE-1的“字母”这是一段DNA,大约1000〜6000字长,在靠近中间的地方有一份逆转录酶的完全编码。LINE-1不仅仅是多——在每一个人类基因组里面大约有10万份拷贝——而且还总集中在一起,也就是说在一条染色体上往往有好几段LINE-1紧紧挨在一起。它们占了整个基因组的14.6%,一个让人吃惊的数字。也就是说,它们比“真正”的基因多四倍。这个现象的含义很吓人。LINE-1有它们自己的回程车票。一个LINE-1可以让它自己被转录,然后造出它自己的逆转录酶,再用那个酶造出一份自己的DNA的拷贝并把它插回到基因组中随便一个位置上去。这也许就是为什么在基因组里有那么多份LINE-1。换句话说,这个重复性那么强的段126落之所以有那么多,就是因为它善于复制自己,没有其他原因。

    “一个跳蚤身上还有更小的跳蚤,它又会挨比它更小的跳蚤的咬。”如果LINE-1存在于人类的基因组里,那么又会有其他的序列寄生在它中间,把自己的逆转录酶丢掉而用LINE-1的。比LINE-1还常见的,是一种很短的段落,叫做Alu。Alu有180〜280个字母,看上去好像非常擅长用别人的逆转录酶来复制自己。在人类的基因组里,Alu也许被重复了100万次——加起来大约占整个基因组的10%。

    因为一些我们还不知道的原因,Alu的序列与一个真正基因的序列很相似,这个基因编码的是核糖体一制造蛋白质的机器——的一部分。这个基因与众不同的地方是它有一个内部启动子,也就是说,“读我”这个信号是写在基因中间的一段序列里的。这样,它就成了一个进行大量繁殖的最佳选择,因为它带有自己转录所需的信号,而没有必要把自己放在另外一个转录信号附近。结果就是每一个Alu基因可能都是一个“伪基因”。用一个通俗的比喻,伪基因就是那些生锈的基因残体,被一个很厉害的突变给扯到了水线以下,沉没了。它们现在歇息在基因海洋的底部,逐渐地长了越来越多的锈(也就是说,积累了越来越多的突变),直到它们与它们过去的样子再也不像了。举一个例子。在九号染色体上有一个很难描述的基因,如果你拿一份它的拷贝,在整个基因组里寻找与它相似的序列,你会发现有14个拷贝分布在11条染色体上:14条沉没的船体的鬼魂。它们是多余的拷贝,一个挨一个地,有了突变,不再被使用了。对于大多数基因来说可能都是如此。每一个正常的基因,都在基因组里的其他地方有一批坏了的拷贝。对于这14个拷贝来说,有意思的是它们不但在人类基因组里被找到,人们还试图在猴子基因组里寻找它们。人类体内的14份伪基因中,有3份是在旧世界猴子和新世界猴子(旧世界猴子指非洲和亚洲的猴子,新世界猴子指南美洲的猴子)分开之后才“沉没”的。科学家们激动地上气不接下气地说:这就意味着,它们从自己的编码功能上“下岗”,“只是”3500万年前的事。

    Alu疯狂地复制了自己,但是它们也是在相对较近的时期才这样做的。Alu只在灵长类动物里才被发现过,被分成五个不同的家族,有些家族只是在猩猩和人分离之后才出现(也就是说,过去的500万年之内)。其他动物有其他的大量重复的短片段,在老鼠里有一种叫B1。

    所有这些有关LINE-1和Alu的信息加在一起,是一个重要的却又在意料之外的发现。基因组里到处都是被乱丢的垃圾,甚至可以说基因组被电脑病毒那样的东西、自私的寄生序列给堵上了。它们存在的原因很简单很单纯,就是因为它们善于复制自己。我们的基因组里满是连环信件和关于橘子酱的警告。大约35%的人类基因组是各种形式的自私DNA,也就是说,要想复制我们自己的基因需要多花费35%的能量。我们的基因组太需要除虫了。

    没有人猜到这一点,没有人预见到,当我们读出生命密码的时候,我们会发现它被自私的DNA这么没有限制地利用。但是我们其实应该预见到,因为生命的所有其他层次都充满了寄生现象。动物的肠道里有虫子,血液里有细菌,细胞里有病毒,为什么在基因里不能有逆转录转座子?再说,到了70年代中期的时候,很多进化生物学家,尤其是那些对行为感兴趣的,已经意识到了自然选择的进化方式主要不是关于物种之间的竞争、不是关于群落之间的竞争,甚至也不是关于个体之间的竞争,而是关于基因之间的竞争。这些基因用个体,也有个别时候用一个群体,作为它们暂时的载体。例如,如果让一个个体要么选择一个安全、舒适、长寿的生活,要么选择有风险、辛苦、危险地繁殖后代,几乎所有动物(事实上植物也如此)都选择后者。它们为拥有后代而选择增加自己死亡的几率。实际上,它们的身体被有计划地设计了废弃的过程,叫做衰老,它使动物在达到了生育年龄之后就逐渐开始机能的衰退,或者像枪乌贼或太平洋大马哈鱼那样,马上死亡。除非你把动物的身体看成是基因的载体,看成是基因在让它们自己长生不死的竞赛中的工具,否则这些便无法解释。与给下一代以生命这个目标相比,一个个体在生育之后是否继续存活是次要的。如果基因是自私的复制机器,而身体是可以丢弃的载体(用理查德•道金斯的颇有争议的术语来说),那么当我们发现有些基因可以用不着建立自己的身体就能够复制自己的时候,我们就不会那么惊讶了。当我们发现基因组也像身体一样,充满了它们独特的生存竞争与合作,我们也就不必惊讶了。在70年代,进化第一次成了遗传学概念。

    为了解释基因组里充满了的大块大块的没有基因的区域,两组科学家在80年代提出,这些区域充满了自私的序列,它们的惟一功能就是在基因组里生存下来。“寻找其他解释的努力,’他们写道:“也许会证明不仅在学术上没有创意,最终也会是徒劳的。因为做了这么一个大胆的预言,他们在当时受到了不少嘲弄。遗传学家们当时仍然被这么一个思维上的框框束缚着:如果人的基因组里有一个什么东西,那么它肯定是为了人的目的而存在,而不是为了它自己的自私的目的。基因不过是蛋白质的配方。把它们想象成是有自己的目标与梦想的东西,没有任何道理。但是,那两组科学家的预言被精彩地验证了。基因的行为确实像是它们有自己的自私的目标,不是它们有意识地如此,而是我们回过头来研究它们的时候发现如此:看上去像有自己目标的基因繁衍下去了,而其他的基因则没有。

    一段自私的DNA并不仅仅是个过客,它的存在不仅仅是把基因组加长了一些,使得复制基因组的时候需要更多的能量。这样一段DNA对于基因的完整性是个威胁。因为自私的DNA有从一处跳到另一处的习惯,要么就把自己的一个拷贝送到新的地点去,所以它很有可能跳到一个正常工作的基因的正中间,把这个基因搞得面目全非,然后又跳到一个新的地方去,突变也就又消失了。在40年代晚期,转座子就是这么被有远见而又被人忽视的巴巴拉•麦克林托克(Barbara Mc Clintock)(巴巴拉•麦克林托克:20世纪美国遗传学家。)发现的(她最后终于在1983年得到了诺贝尔奖)。她注意到,玉米种子颜色的突变只能够用这样一种理论解释,即有些突变是在色素基因里跳进跳出的。

    在人体里,LINE-1和Alu通过跳到各种各样基因的中间而制造出了很多突变。例如,它们通过跳到凝血因子基因的中间而导致了血友病。但是,因为一些我们还没有很好理解的原因,作为一个物种,我们没有像有些其他物种那样被寄生的DNA困扰得那么厉害。大约每700个人类基因的突变里有一个是由“跳跃”的基因造成的,但是在老鼠里大约有10%的突变是由“跳跃”基因造成。跳跃基因潜在的危害有多大,被50年代一些很自然的实验在果蝇身上揭示出来了。果蝇是遗传学家心爱的实验动物。他们研究的这种果蝇学名为Drosophilamelanogaster,已经被运到全世界各地,在实验室里繁殖。它们常常会逃出来,从而遇到自然环境中其他种类的果蝇。有一种果蝇学名为Drosophilawillistoni,带有一种跳跃的基因名叫P因子。大约在1950年的时候,在南美某地,不知怎么一来(也许是通过一种吸血的尘螨),Drosophila willistoni的P因子进入了Drosophilamelanogaster。(人们对于所谓“异源器官移植”的一大担心,就是把猪或狒狒的器官移植给人的时候会不会把一种新的跳跃基因也引入到人体中去,就像果蝇中的P因子一样。)P因子从那时起就像野火一样蔓延开来,现在大多数果蝇都有P因子了,只除了1950年之前从自然界采集来又一直被与其他果蝇分开的那些。P因子是个自私的DNA,它通过破坏那些它跳上去的基因来表现出它自己的存在。逐渐地,果蝇基因组里的其他基因开始反攻了,它们发明了抑制P因子到处乱跳的手段。现在,P因子逐渐安顿下来,成了基因组里的旅客。

    人体中没有像P因子这样邪恶的东西,起码现在没有。但是,一种类似的因子在大马哈鱼中被发现了,它叫做“睡美人”。当在实验室里被引入到人类细胞里之后它呈现出蓬勃生机,充分表现出剪贴DNA的能力。类似P因子的传播那样的事,也许在人类体内的九种Alu因子那里都发生过。每一个传遍整个物种,破坏其他基因,直到其他基因确定了它们的共同利益并合力抑制了这样一个跳跃因子,这样,这个跳跃因子就安顿下来,进入了它现在的这个比较沉寂的状态。我们在人类基因组里看到的不是什么飞速发展的寄生DNA感染,而是沉睡着的许多过去的寄生DNA,每一个都曾经传播得飞快,直到基因组抑制了它们。但是基因组却没有能够把它们清理出去。

    从这个角度来说(从很多角度来说),我们比果蝇要幸运。如果你相信一种新的理论,那么我们好像有一种可以被普遍运用的功能,来抑制自私的DNA。这个抑制机能被称做胞嘧啶甲基化。胞嘧啶是遗传密码里面的那个C。把它甲基化(真的就是在它上面接一个由碳原子和氢原子组成的甲基)就使它不再被阅读和转录出来。基因组的大部分区域在大部分时间里都处于甲基化——被挡住——的状态,或者起码大部分的启动子(就是位于基因前面、转录开始的部分)是这样的。大家普遍假设甲基化的作用是把一种组织里面用不着的基因关闭,这样就使得大脑与肝脏不同,肝脏与皮肤不同,如此等等。但是另一个与之抗衡的理论正在越来越有影响力。甲基化也许与基因在不同组织里的不同表达形式一点没有关系,而与抑制转座子和基因组内部的奇生DNA有很大关系。大多数甲基化的部位都是在LINE-1和Alu这样的转座子中间。这个新的理论称,在胚胎发育早期,所有的基因都短暂地失去了甲基的保护,全都被“打开”了。接下来的,是由一些特殊的分子对整个基因组进行审查。这些分子的工作是发现那些高度重复的片段,并用甲基化来把它们关闭。在癌组织中所发生的第一件事就是基因的去甲基化。结果就是自私的DNA从它们的镣铸里被解脱出来,在癌组织里大量地表达。因为它们在破坏其他基因方面很在行,这些转座子就使得癌症变得更加厉害。根据这个理论,甲基化的作用就是抑制自私的DNA的影响。

    LINE-1的长度一般是1400个字母。Alu则一般起码是180个字母。但是,有一些序列比Alu还要短,它们也大量地积累起来,像口吃的人说话那样地不断重复。也许把这些序列也称做是寄生DNA有些不着边际,但是它们的繁殖也是通过很类似的方法进行的——也就是说,它们之所以存在是因为它们自己带有一小段序列,能够把它们自己很好地复制出来。这些短序列中的一种,在法医学和其他学科里有很实际的用处。见一见“超可变微卫星体”吧。这个小小的序列在所有染色体上都找得到。在整个基因组里它占有1000多处位置。在每一个位置上它的序列都只含有一个“词组”大约20个字母长,重复很多次。这个词组可以因位置不同而有差别,也可以在不同的人体内有不同,但是它通常含有这些核心字母:GGGCAGGAXG(X可以是任何字母)。这个序列的重要性在于它与细菌中的一段序列非常相似,而细菌中的这段序列是用来与同一物种的其他细菌交换基因的。在人体内,它似乎也是参与了促进染色体之间基因交换的过程。就好像每一个这种序列都在它的正中间写有“把我换到别处去”的字样。

    这是一个多次重复的微卫星体的例子:
    hxckswapmeaboutlopl hxckswapmeaboutloplhxckswapmeaboutlopl hxckswapmeaboutlopl hxckswapmeaboutlopl hxckswapmeaboutloplhxckswapmeaboutlopl hxckswapmeaboutlopl hxckswapmeaboutlopl hxckswapmeaboutlopl。
    在这个例子里一个序列有10次重复。在其他地方,那1000个位置上的每一处可能有一个词组的五次重复,也可能有50次重复。根据词组里的指令,细胞开始把这些词组与另一条相同染色体上同样位置的词组进行交换。但是在这个过程中细胞经常出错,以至于会增加或减少几次重复。这样,每一个序列的长度都在逐渐变化,变化的速率之快使得它们的长度在每个人体内都不一样,但是又慢得使得一个人体内这些重复的长度大多数都与他父母体内的一样。因为存在着上千个这种重复序列,结果就是,每个人都有一套独特的数字。

    1984年,亚列克•杰弗里斯(Alec Jeffreys)(生物学家)与他的实验员维基•威尔逊(Vicky Wilson)偶然地发现了微卫星体。他们当时正在研究基因的进化,方法是比较编码人类肌肉里的蛋白质——肌球蛋白——的基因与海豹肌球蛋白的区别。他们发现在这个基因的中间有一段重复序列。因为每一个微卫星体都有相同的12个“核心”字母,但是重复的次数却变化很大,把这些序列找出来并比较它们在不同个体里长度上的区别是一件相对容易的事情。结果,它们在每一个个体里的重复次数变化如此之大,每一个人都有自己独特的“基因指纹”:一串黑色的条带,就像商品上的条带码一样。杰弗里斯立刻意识到了他这个发现的重要性。他放下了一开始研究的肌球蛋白,开始探索独特的基因指纹可以有些什么用处。因为陌生人之间的基因指纹区别非常大,移民局的官员立刻就对它有了兴趣,他们可以用这个办法来判断那些申请移民的人与他们声称的自己在美国的近亲是否真的有血缘关系。基因指纹测试显示,大多数人说的都是真话,这减轻了很多人的忧虑。但是,基因指纹的另一个更戏剧性的应用还在后头呢。

    1986年8月2日,在英国莱切斯特(Leicestershire)郡一个名叫纳尔伯罗(Narborough)的小村子附近的灌木丛中发现了一个女学生的尸体。15岁的唐•阿什沃思(Dawn Ashworth)是被人强暴之后勒死的。一个星期之后,警方逮捕了一名医院的搬运工,这个年轻人名叫理查德•巴克兰(Richard Buckland),他对犯罪行为供认不讳。事情到此似乎就终止了。巴克兰理应被判有谋杀罪,然后去坐牢。但是,警方当时还急于侦破另外一粧悬案一三年之前,一个名叫琳达•曼(Lynda Mann)的女孩的命案。琳达死时也是15岁,同时她也是纳尔伯罗村人,另外,她也是遭强暴后被勒死并被弃尸荒野的。这两起谋杀是如此相似,很难想象它们不是同一个人干的。但是巴克兰却坚决不承认曼也是他杀的。

    亚列克•杰弗里斯在基因指纹方面取得重大突破的消息,通过报纸传到了警察那里。而且因为杰弗里斯就在莱切斯特工作,离纳尔伯罗只有不到10英里路程,当地的警察就与他取得了联系,询问他是否能够证明巴克兰在曼的谋杀案中也是有罪的。他同意一试。警方给他提供了从两个少女身体内取到的精液和巴克兰的血样。

    杰弗里斯没费任何力气就在三份样品里都找到了各种各样的微卫星体。一个多星期的工作之后,基因指纹就准备好了。两个少女体内的精液完全一样,肯定是来自同一个男人。就此结案?但是杰弗里斯在下一份样品里看到的事情让他非常震惊。巴克兰的血样与那两份精液的基因指纹完全不同:巴克兰不是杀人者。

    莱切斯特郡警方对此表示了强烈的抗议,他们认为杰弗里斯肯定是什么地方搞错了,才得出这么一个荒谬的结论。杰弗里斯重新分析了样品,警局法医实验室也对样品进行了分析。他们得到了同样的结论。被搞糊涂了的警察很不情愿地撤销了对巴克兰的指控。在历史上,这是第一次以DNA序列为依据宣告一个人无罪。

    但是让人揪心的疑点仍然存在。不管怎么说,巴克兰交待了犯罪的行为。如果基因指纹能够替无辜者昭雪又能抓住真凶,那才能让警察们信服呢。于是,阿什沃思死了5个月之后,警方鉴定了纳尔伯罗一带5500个男人的血液,以寻找一个与那个强奸杀人犯的精液相符的基因指纹。没有任何血样与精液的“指纹”相符。

    之后不久,一个在莱切斯特的一个糕饼店里工作的伙计,伊恩•凯利(Ian Kelly),碰巧向他的同事提到这么一件事:他虽然住得离纳尔伯罗很远,但却参加了血样鉴定,他是应糕饼店的另外一个伙计的请求,才这样做的。另外一个伙计叫科林•皮切弗克(Colin Pitchfork),住在纳尔伯罗。皮切弗克告诉凯利说,警察是想陷害自己。凯利的同事把这件事对警察又复述了一遍,于是警察就逮捕了皮切弗克。皮切弗克很快就供认,自己杀了那两个少女,但是这一次,他的口供被证明是真的:他的血样的DNA“指纹”与两具尸体上找到的精液吻合。1988年1月23日,他被判终生基因指纹检测立刻就成了法医学最可靠与最有力的武器之一。皮切弗克一案是这项技术的一次精彩过人的演示,此后数年中,它给基因指纹检测定了基调:即使是面对着似乎占压倒优势的罪证,基因指纹鉴定仍然可以为清白的人洗清罪责;仅仅是用它来威胁罪犯就可以使人招供;它惊人地准确与可靠——如果使用正确;它依靠很少的身体组织,甚至鼻涕、唾液、毛发或死去很长时间的人的尸骨,就可以完成检测。

    在皮切弗克案件之后的年代里,基因指纹鉴定走过了很长的路。仅仅在英国,截止到1998年年中,法医科学局就通过32万个DNA样品查出了2.8万名与犯罪现场的痕迹有关的人,还几乎有两倍多的样品被用来开释了无罪的人。这项技术被简化了,使得人们不再需要检查多个微卫星体,一个就可以了。这项技术也被发展得更灵敏了,极小的微卫星体或甚至“超微”卫星体都可以被用来提取出独特的“条形码”。不仅仅是微卫星体的长度,它们的序列也可以被测出来,使得DNA鉴定更加成熟。这样的DNA鉴定也在法庭上被滥用和不信任过,你可以想象在有律师掺和进来的时候就会如此。(大多数时候,对DNA鉴定的错误使用反映的是公众对于统计学的不了解,而与DNA没什么关系:如果你告诉一个陪审团,一个DNA样品与犯罪现场DNA吻合的随机概率是0.1%,而不是对他们说每1000个人里面有一个人的DNA会与犯罪现场的吻合,那么他们判被告有罪的可能性就高了三倍,而其实这两种说法是一回事。)

    DNA指纹鉴定并不仅仅是给法医学带来了革命,对其他很多学科也是如此。在1990年,它被用来鉴定从墓中挖出来的约瑟夫•门格尔的尸体的真实性。它被用来鉴定莫尼卡•莱温斯基(Monica Lewinsky)裙子上的精液到底是否是克林顿总统的。它被用来鉴定那些自称是托马斯•杰斐逊(Thomas Jefferson)私生子(美国第三任总统)的后代的那些人到底是否说了真话。在亲子鉴定这个领域它是如此被发扬光大(不管是被政府部门公开地做还是被父母亲在私下做)。在1998年,一个名叫“基因身份”的公司在全美国的高速公路旁边都树起了广告牌,上面写着:“孩子的爸爸到底是谁?请拨打1-800-DNA-TYPE。”他们每天接到300个电话,咨询他们那600美元一次的鉴定。这些电话要么是那些正在要求孩子的父亲拿抚养费的单身妈妈打的,要么是那些心存怀疑的“父亲”打的,因为他们不知道女方生的孩子究竟是不是他们的。在三分之二以上的案例里,DNA证据显示母亲是说了真话的。DNA鉴定使有些“父亲”因为发现配偶不忠而受到伤害,它却又能够使其他父亲确知自己的怀疑完全没有根据。好处是否能抵偿坏处,还是一个可以争论的话题。可以想见,当第一个DNA鉴定方面的私人公司挂牌营业的时候,在英国出现了一场媒体上的激烈争论:在英国,这样的医学技术被认为是应该由国家而不是个人所掌握。

    从一个更浪漫的角度来说,基因指纹检测在亲子测试方面的应用使我们对鸟类的歌唱有了更好的了解。你有没有注意过,鸫、知更鸟等鸣禽在春天与异性配对之后要持续地唱很长时间?对于那种鸟鸣的主要功能是吸引配偶的传统说法,这个现象简直就是当头一棒。生物学家从80年代末期开始对鸟进行DNA检测,以决定在每一个鸟巢里,哪只雄鸟是哪只幼鸟的父亲。他们很惊讶地发现,在那些“一夫一妻”制的鸟类里面,虽然一只雄鸟与一只雌鸟很忠实地互相扶助以抚养后代,雌鸟却不顾自己已有配偶这个明显的事实,还常常与邻居的雄鸟交配。不忠实、给“丈夫”戴绿帽子的现象比任何人想象得都多(这些都是非常隐秘地进行的)。DNA指纹鉴定将人们引入了一个爆炸性的研究阶段,最后产生了一个回报颇丰的理论:精子竞争。这个理论可以解释一些有趣的现象,比如说,虽然黑猩猩的身体只有大猩猩的四分之一大小,黑猩猩的睾丸却是大猩猩的四倍大。雄性大猩猩对它们的配偶是完全占有的,所以它们的精子没有竞争对手。雄性黑猩猩是与其他雄性“共有”配偶的,所以它们需要制造大量的精子、频繁交配,来增加自己做父亲的机会。这也能够解释为什么雄性的鸟在“结婚”之后叫得那么起劲,它们是在寻找“婚外恋”的机会。

    第九号染色体 疾病

    一种令人绝望的疾病需要危险的疗法。 ——盖伊·福克斯

    在第九号染色体上有一个知名度很高的基因:决定你的ABO血型的基因。在DNA指纹测试之前很久,血型测试就在法庭上出现了,因为警察有些时候会偶尔能够把犯罪嫌疑人的血液与犯罪现场的血液对上号。血液的对照是以假设犯罪嫌疑人无罪为前提的。也就是说,如果血样没对上,就证明你肯定不是杀人犯,但是如果对上了,却只能说明你有可能是杀人犯。

    这个逻辑对于加利福尼亚州的最高法院倒并没有什么影响。在1946年,它判决查理·卓别林(Charlie Chaplin)毫无疑问地是某个孩子的父亲,虽然血型鉴定表明卓别林与那个孩子的血液根本不相配,不可能是孩子父亲。不过呢,法官们从来就不太懂科学。在关于谁是父亲的官司里,与在谋杀案里一样,血液对照就像DNA鉴定和手指指纹鉴定一样,是无辜者的朋友。在有了DNA鉴定之后,血样鉴定就是多余的了。血型在输血的时候是极为重要的,但也是以一种负面形式出现:被输入了错误的血的人是会死的。血型可以给我们提供一些人类迁移方面的见识,但是它们在这一方面的作用也被其他基因取代了。所以,你现在可能觉得血型这件事很没意思,那你就错了。从1990年开始,它们的一个新的用处被发现了:它们有望让我们了解我们的基因为什么有很多种形式,以及这么多的形式是如何产生的。它们掌握着人类多样性之谜的钥匙。

    在血型方面第一个被发现也是我们了解最多的,是ABO系统。它们在1900年被首次发现,这个系统一开始有三套不同的名字,所以把人搅糊涂了:在莫斯(Moss)的术语里的Ⅰ型血与詹斯基(Jansky)的术语里的Ⅳ型血是一样的。理智逐渐占了上风,由血型的维也纳发现者卡尔·兰德斯坦纳(Karl Landsteiner)所发明的一套术语成了统一的术语:A,B,AB和O型。兰德斯坦纳形象地描述了输错血可以造成的灾难:“红血球都粘在一起了。”但是血型之间的关系不是那么简单的。A型血的人可以很安全地给A型或AB型的人献血;B型血的人可以给B型和AB型的人献血;AB型血的人只能给AB型的人献血;O型血的人可以给任何人献血——所以O型血的人被称为是万能献血者。在不同的血型背后也没有地域或种族的原因。欧洲人有大约40%是O型血,40%是A型血,15%的B型血和5%的AB型。在其他大陆上,比例也跟这个差不多,只除了在美洲有明显的不同。美洲的印第安人几乎全是O型血,只除了住在加拿大的一些部落和爱斯基摩人是例外,在加拿大的这些部落里有很多A型血的人。另外,爱斯基摩人也有些AB型和B型的人。

    直到1920年,ABO血型的遗传性才被搞清楚,到了1990年,与这些血型有关的基因才见了天日。A和B是同一个基因“共同显性”的两种形式,O是这个基因的隐性形式。这个基因在第九号染色体上,靠近长臂的顶端。它的“正文”有1062个字母长,被分成六个短的和一个长的外显子(“段落”),分散在染色体的几页——总共有1.8万个字母——上面。它是一个中等大小的基因,被五个比较长的内含子打断。这个基因编码的蛋白质是半乳糖基转移酶,也就是说,是一个能够催化化学反应的酶。

    A基因与B基因之间的区别只在1062个字母里的七个上面,这七个里面还有三个是相同意义的字母或是“不出声”的,也就是说,它们对于哪个氨基酸被选中加到蛋白质链上没有任何作用。那四个有作用的字母是第523、700、793和800个字母。在A型血的人体内这四个字母是C、G、C、G。在B型血的人体内则是G、A、A、C。另外还有其他一些极少见的区别。个别人会有几个A型的字母也有几个B型的字母,有一种极少见的A型血人是在基因末尾处丢了一个字母。但是,这四个字母的区别就足以使蛋白质上的区别大到在输错了血的时候可以引起免疫反应的程度了。

    O型血的基因与A型只有一个字母的区别,但是,这并不是一个字母被另一个字母代替,而是一个字母的被删除。在O型血的人体内,第258号字母不见了,那里本来应该有个G的。它的影响却很深远,因为它所造成的是所谓的“阅读错位”或称“移码突变”,后果很严重。(还记得吗?弗兰西斯·克里克在1957年提出的那个巧妙的“没有逗号”的密码如果是正确的,那么移码突变就不会存在了)遗传密码是三个字为一个词被念出来的,中间没有标点符号。由三个字母的词组成的一句英文也许是这样的:thefatcatsattopmatandbigdogranbitcat(胖猫坐在垫子上,大狗跑过去咬了猫)。我承认,这句话不怎么优美,但是你能理解它的意思。如果换一个字母,它仍然可以理解:thefatxatsattopmatandbigdogranbitcat。但是你如果把这个字母去掉,然后把剩下的仍然三个字母一组地念出来,那就一点意义也没有了:thefatatsattopmatandbigdogranbitcat。在那些O型血的人体内,他们的ABO基因就出了这种事。因为他们的基因在比较靠近开头的地方就缺了一个字母,那之后的信息就成了完全不同的东西。结果是一个具有不同性质的蛋白质被造了出来,它无法催化正常的化学反应。

    这听起来好像很严重,但实际上对人并没有什么影响。O型血的人在生活的各个方面都没有什么看得出来的缺陷。他们也并不会更容易得癌症,也不会在体育上不如人,也不会缺少音乐才能,等等。在优化人种论最盛行的时候,也不曾有政治家呼吁给O型血的人做绝育手术。实际上,关于血型的最可叹之处,也是它们之所以有用和在政治上又“中立”的原因,就是它们却是彻头彻尾的“隐者”,它们与人的任何事情都没有关系。

    但是,这也是事情变得有趣了的时候。如果血型既是看不见的又是中性的,它们是怎样进化到现在这种状态的呢?美洲的印第安人都是O型血,是纯粹巧合吗?乍一看上去,血型好像是中性进化论——由木村资生(Motoo Kimura)(进化生物学家)在1968年提出的理论——的一个例子:这个理论认为大多数遗传多样性的存在不是因为它们在自然选择的过程中出于某种目的被选中,而是因为它们的存在对任何事情都没有妨碍。木村的理论说,突变就像水流一样源源不断地被注入基因组之中,然后又逐渐地被基因漂移——随机变化——而去掉。也就是说,变化是随时都存在的,也并没有什么适应环境方面的重要性。100万年之后如果回到地球上来看看,人类基因组的大部分都会与现在的不同了,而且纯粹是由于中性的原因。

    “中性学派”与“选择学派”在有一段时间内都对自己的学说有忧虑。尘埃落定之后木村倒确实有了一批为数不少的跟随者。很多基因变异的后果的确是中性的。特别是当科学家们观察基因变异是如何影响蛋白质的时候,他们观察得越仔细,越发现大多数蛋白质的变化都不影响它的“活跃位点”,也就是蛋白质发挥自己功能的地方。有一种蛋白质,在两种生物体里面从寒武纪到现在积累了250个不同之处,但是只有6个对其功能有影响。

    但是我们现在知道了,血型不是像它们看起来的那样中性。在它们的背后是有一个原因的。从60年代早期到现在,逐渐变得明显起来的是血型与腹泻之间有着某种联系。A型血的孩子常常会在婴儿期得某些类型的腹泻,而其他孩子却不会;B型血的孩子则会得其他一些类型的腹泻;如此这般。80年代晚期的时候,人们发现O型血的人更容易感染霍乱。在完成了十几项研究之后,细节变得更加清晰了。除了O型血的人更易感染霍乱之外,A、B和AB型血的人在霍乱易感性上面也有区别。抵抗力最强的是AB型血的人,其次是A型血的人,再次是B型血的人。但是所有这些人都比O型血的人抵抗力强得多。AB型血的人抵抗力如此之强,他们对霍乱几乎是有免疫力的。但如果因此就说AB型血的人能够喝加尔各答(Calcutta)下水道里的水也不会有病,那就是不负责任了——他们也许会得另一种什么病——但是千真万确的是,即使导致霍乱的细菌进入这些人体内并在肠道里安顿下来,这些人都不会有腹泻。

    目前还没有人知道AB基因型是怎样给人体提供了保护以对抗人类疾病里最恶性最能致命的一种。但是它给自然选择提出了一个迷人而又直接的问题。别忘了,每一条染色体在我们的细胞里都有两份,所以,A型血的人实际上是AA,也就是说他们的两条九号染色体上各有一个A基因,而B型血的人实际上是BB。现在想象一个人群,只有这三种血型;AA、AB和BB。在抵抗霍乱方面A基因比B基因强。那么,AA的人就比BB的人可能有更多的孩子能够幸存下来。那是否B基因要从基因组里消失了呢?——这就是自然选择啊。但是这并没有发生,这是因为AB的人存活下来的可能性最高。所以,最健康的孩子是AA人和BB人的孩子。他们的所有孩子都是AB型,最抗霍乱的类型。但是如果一个AB型的人与另一个AB型的人生育后代,他们的后代里只有一半会是AB型;其他的孩子要么是AA要么是BB,后一种是最容易染上霍乱的。这是一个运气起伏不定的世界。在你这一代里最有利的组合,保证会给你一些容易染病的孩子。

    现在想象一下,如果一个镇上所有的人都是AA,只有一个新来的女人是BB,那么事情会怎么样。如果这个女人能够抵挡住霍乱,达到生育年龄,那么她会有AB型的孩子,对霍乱有免疫力。换句话说,优势总是在较少的基因型那边,所以,A和B都不会消失,因为它们中的任何一个如果少了,它就会变成“时髦”的东西,又“流行”起来。在生物学上这叫做由频率决定的选择,而这是我们的基因为什么如此多样的最常见原因之一。

    这解释了A与B之间的平衡。但是,如果O型血让你更容易感染霍乱,为什么自然选择没有让O型消失呢?答案与另一种疾病——疟疾——有关。O型血的人似乎比其他血型的人对疟疾更有抵抗力,他们好像也更不容易得一些类型的癌症。这一点生存优势也许就足以使O型基因免于灭绝了,尽管它与易得霍乱有关。一个大致的平衡就这样在血型基因的三种形式之间建立起来了。

    疾病与突变之间的联系是在40年代晚期被一个肯尼亚血统的牛津研究生安东尼·阿利森(Anthony Allison)第一个注意到的。他怀疑一种在非洲流行的名叫镰刀型贫血症的疾病发病频率也许和疟疾是否普遍有关。镰刀型贫血症的突变导致血红细胞在无氧的时候缩成一个扁镰刀形,这对于那些带有两份拷贝的人是致命的,但是对于那些只有一份的人危害并不太重。但是,那些有一份突变的人对疟疾的抵抗力很强。阿利森检验了住在疟疾高发区的非洲人的血样,发现那些带有镰刀型贫血症突变的人带有疟原虫的可能性比其他人小得多。镰刀型贫血症突变在西非一些疟疾肆虐的地方尤其普遍,在非洲裔美国人里也很普遍,这些非洲裔美国人的祖先有些是坐着贩卖奴隶的船从非洲西部来到美国的。镰刀型贫血症是现在的人类为了过去的疟疾抵抗力而付出的代价。其他形式的贫血症,例如在地中海与东南亚一些地区比较普遍的地中海贫血症,看上去对疟疾也有同样的抵抗作用,这就能解释为什么它在曾经的疟疾高发区比较普遍了。

    在这一点上,血红蛋白基因——镰刀型贫血症的突变就是这个基因上一个字母的改变——并不特殊。有一位科学家说它只是疟疾的基因防线的冰山一角,这样的基因可能多达12个,不同形式对疟疾有不同的抵抗力。在这一点上,疟疾也没有什么特殊的。起码有两个基因的不同形式对肺结核有不同的抵抗力,包括编码维生素D受体的基因,这个基因与人们对于骨质疏松症的不同的抵抗力也有关系。牛津大学的阿德里安·希尔(Adrian Hill)写道:“很自然地,我们忍不住要说,在很近的过去,自然对于肺结核抵抗力的选择,也许增加了对于骨质疏松症缺少抵抗能力的基因。”

    在那同时,人们新发现了一个类似的关系,就是囊性纤维增生这个遗传病与传染性伤寒这个遗传病之间的关系。七号染色体上CFTR基因的一种形式会引起囊性纤维增生,这是一种很危险的肺与肠道的病变。但是同时CFTR基因的这种形式又能够保护人体免受伤寒——一种由沙门氏菌引起的肠道疾病——的危害。带有一份CFTR基因的这种形式的人会得囊性纤维增生,但是他们对伤寒带来的高烧和让人虚弱的痢疾几乎是免疫的。伤寒需要CFTR基因的正常形式,才能够侵入它瞄准了的细胞;被改变了的形式缺了三个DNA字母,伤寒就达不到目的了。因为伤寒杀掉了那些带有其他形式的CFTR基因的人,它就给这种有了改变的形式施加了压力,促使了它们的蔓延。但是,因为带有两份这种改变了的CFTR基因的人能活下来就不错了,这种形式也就从来不会太普遍。就这样,一个基因的少见又恶毒的形式,因为另外一个疾病的原因,被保留下来了。

    大约每五个人里就有一个由遗传因素决定不能把ABO血型蛋白质的水溶形式释放到唾液与其他体液中去。这些“不分泌者”更容易得一些疾病,包括脑膜炎、酵母菌感染和重复发生的尿道感染。但是他们得流感或是受呼吸道合体细胞病毒影响的可能性又比一般人低。不管你往哪儿看,基因多样性背后的原因好像都与传染病有关。

    对这个话题我们只是蜻蜓点水。在过去给我们的祖先带来过极大痛苦的那些大规模的传染病——瘟疫、麻疹、天花、斑疹伤寒、流感、梅毒、伤寒、水痘,等等——把它们的痕迹留在了我们的基因里。赋予了我们抗病能力的突变繁盛起来,但是抗病能力常常是要付出代价才能换来的,代价有的很高昂(镰刀型贫血症),有的只在理论上存在(不能接受错误血型的输血)。

    实际上,直到最近,医生们仍然习惯于低估传染病的重要性。很多被普遍认为是由环境因素、职业因素、饮食习惯及偶然因素而造成的疾病,现在开始被认为是由一些人们不太了解的细菌和病毒的长期感染而造成的。胃溃疡是最精彩的一个例子。好几个医药公司因为发明了旨在对抗胃溃疡症状的新药而发了大财,但实际上只有抗生素才是惟一需要的药物。胃溃疡不是因为油腻食物、心理压力或是运气不好而造成的,它是由名叫Helicobactorpylori的螺旋菌引起的,这种细菌通常是在儿童时期就进入了人体。与此类似,现在有数据明显显示在心脏病与疱疹病毒之间可能有某种联系,各种形式的关节炎与各种病毒有关,甚至在精神分裂症或者抑郁症与一种少见的主要感染马和猫的脑病毒(称为伯尔诺脑病)之间也有关系。这些联系里,有一些或许会被发现是错的,有些可能是有病之后才引来的病毒与细菌,而不是病毒与细菌引来了病。但是,一个已经被证明了的事实是,人们在对诸如心脏病的各种疾病的遗传而来的抵抗能力上差异很大。也许基因上的不同也与对于细菌、病毒感染的抵抗力有关。

    从某种意义上说,基因组就是一份我们过去的病史的书面记录,是每一个民族每一个种族的医学圣经。美洲印第安人中O型血那么多,也许反映的是这样一个事实:霍乱与其他形式的腹泻通常是在人口密集和卫生状况差的地方出现的,而在西半球新近才有人居住的新大陆上这些疾病还没有蔓延起来。不过,霍乱本来就是一种少见的疾病,在1830年以前也许只限于恒河三角洲地带,在1830年左右才突然扩展到欧洲、美洲和非洲。我们需要一种更好的解释来说明美洲印第安人中O型血非常普遍这一让人迷惑的现象,特别是从印第安人的干尸中找到的证据表明,在哥伦布到达美洲之前,印第安人里有不少是A型或B型血的。这看上去几乎像是有一种西半球特有的生存压力使得A型和B型从人群里很快消失了。有些迹象表明原因也许是梅毒,这似乎是一种在美洲一直存在的病(在医学史的圈子里这仍然是被激烈争论的一个观点,但事实是,在1492年以前的北美人骨骼里就可以发现梅毒的损害,而在1492年以前的欧洲人骨骼里则没有)。O型血的人与其他血型的人相比,似乎对梅毒的抵抗力更强。

    现在我们来考虑一下一个很奇怪的发现,在血型与对霍乱的抵抗力之间的关系被揭示之前,这个发现是很令人不解的。假设你是一个教授,如果你让四个男人和两个女人都穿棉质的T恤衫,不许用香水和除味剂,在两个晚上之后还必须把T恤衫脱下来交给你,可能有人要嘲笑你有那么一点点性变态。如果你还请121位男人和女人来闻这些脏T恤衫的腋窝处并把它们按照对自己多么有吸引力来排个顺序,那么说婉转一点也是你这人太古怪。但是真正的科学家是不会感到尴尬的。克劳斯·维得坎德(Claus Wedekind)和桑德拉·菲里(Sandra Füri)(生物学家)就做了这么一个实验,结果他们发现,男人和女人都最喜欢(或最不讨厌)另外一个性别里与自己在基因组成上区别最大的那个成员的体味。维得坎德和菲里研究了六号染色体上的MHC基因群,它们是免疫系统用来定义什么是自我和用来识别寄生物和入侵者的。它们是可变性非常大的基因。如果其他条件都一样,那么,一只母老鼠会喜欢MHC基因与她自己区别最大的公老鼠,这是她通过闻他的尿来确定的。就是在老鼠身上的这个发现点醒了维得坎德和菲里,让他们想到,我们自己可能也仍然保有这样的能力,根据对方的基因来选择配偶。只有正在服用避孕药片的妇女才没有能够在闻T恤衫的实验中表现出对与己不同的MHC基因型的兴趣。但是我们知道避孕药片能够影响人的嗅觉。就像维得坎德和菲里说的:“没有一个人让所有的人都觉得好闻,关键在于是谁在闻谁。”

    在老鼠身上的实验被一直用远系繁殖来解释:母老鼠是在试图找到一个基因很不同的公老鼠,这样她才能生下基因变化较多的孩子,因而不会有近亲繁殖所造成的疾病。但是,也许她——还有那些闻T恤衫的人——是在做一件知道了血型的故事之后才能理解的事。记住,在霍乱期间寻找性伴侣的时候,一个AA型的人找到一个BB型的人才是最理想的,这样他们的所有孩子都会是对霍乱有抵抗力的AB型。如果同样的机制在其他基因与其他疾病那里也有作用——而且,MHC基因群又似乎是抵抗疾病的最主要地点——那么,被基因组成与自己正相反的人所吸引,就是有显见优势的事情了。

    人类基因组计划是建立在一个谬见上的。根本就没有一个“人类基因组”,在时间和空间上,都无法定义这么一个东西。在遍布于23条染色体上的几百个位点上,有着一些在每个人之间都不一样的基因。没有人可以说A型血是“正常”的而O型、B型和AB型是“不正常”。所以,当人类基因组计划发表了一个“典型”的人类基因组的时候,在九号染色体上的这个血型基因的序列应该是什么样子呢?这个计划公布的目标是发表平均的或具有“共性”的299个人的基因组。但是这在ABO基因这里就失去意义了,因为它的功能很重要的一条就是它不能够在每个人体内都一样。变化是人类基因组内在与不可分割的一部分,其实,对于任何其他基因组也是一样。

    在1999年这一个特定的时刻,给基因组拍一张快照,并且相信这代表了一幅稳定和永久的图像,这也是不对的。基因组是在变化的。基因的不同形式被疾病的起伏驱动着,它们在人群里的普遍性也在起伏。人类有一个很值得遗憾的倾向,就是夸大稳定性的作用,太过相信平衡。实际上,基因组是不断变化的动态图景。过去,生态学家相信过所谓的“高峰”植被——英国的橡树、挪威的枞树。他们现在已经学乖了。生态学与遗传学一样不是关于平衡态的学科,它们是关于变化的学科。变化,再变化,没有任何事是永远不变的。

    第一个瞥见这个道理的人可能是J.B.S. 霍尔丹(Haldane)(遗传学家),他曾试图找出人类基因如此多样的原因。早在1949年,他就推测到基因的多样性也许与寄生因素对其施加的压力有很大关系。但是,霍尔丹的印度同事苏莱士·贾亚卡尔(Suresh Jayakar),在1970年才真的把船摇动了。他认为稳定性根本就没有必要,那些寄生的因素会导致基因频率永远周而复始地变化。到了80年代,火炬传到了澳大利亚罗伯特?梅(Robert May)那里。他证实即使在一个最简单的寄生物与宿主系统里,也可能没有一个平衡状态:在一个因决定果的系统里也会永远有混沌的潮流在涌动。梅就这样成了混沌学说的奠基人之一。接力棒又传到英国人威廉·汉密尔顿(William Hamilton)那里,他发展了一些数学模型来解释有性生殖的进化,这些模型依靠的是寄生因素与宿主之间的“军备竞赛”,这种竞赛最终就会导致汉密尔顿所说的“很多基因永不安宁”。

    在70年代的某个时候,就像在那之前半个世纪发生在物理学方面的事情一样,生物学的确定性、稳定性、决定论这个旧世界坍塌了。取而代之的是,我们需要建立一个起伏不定的、变化的、不可预测的世界。我们这一代人破解的基因组密码只不过是一份不断变化的文件的一张照片。这个文件没有一个权威性的版本。

    第十号染色体 压力

    这真是一个这世界上绝对愚蠢的做法:当我们遇到败运的时候——常常由我们自己行为的过度造成——我们把我们的灾祸归罪到太阳、月亮和星星上,就仿佛我们必须是坏蛋,是天国的力量才让我们成为蠢货。……这是嫖客逃避责任的一个壮举:把自己那好色的性子说成是星星的命令。 ——威廉·莎士比亚,《李尔王》

    基因组是记载着过去的瘟疫史的圣经。我们的祖先对疟疾和痢疾的长期抗争被记录在人类基因的多样性中。你有多大机会能够避免死于疟疾,是在你的基因里与疟疾病原体的基因里事先编排好了的。你把你的队伍送出去参加竞赛,疟原虫也把它的队伍送出来。如果它们的进攻队员比你的防守队员棒,它们就赢了。抱怨你的差运气吧,你没有替补队员可换。

    但是,应该不是这样的吧?基因对疾病的抵抗能力应该是我们的最后一道防线,有各种各样比这简单的办法来打败疾病的。睡在蚊帐里面,把臭水沟抽干,吃药,在村子里撒DDT。吃好,睡好,避免精神压力,让你的免疫系统保持健康状态和在多数时候保持愉快的情绪。所有这些都与你是否会染上疾病有关。基因组可不是惟一战场。在前面几章里我进入了简化论的习惯。我把生物体拆开,把基因分离开,去辨别它们每一个有什么兴趣。但是没有一个基因是孤岛。每一个都存在于一个巨大的联盟之内,也就是身体。现在是把生物体的各部分放回到一起的时候了,现在是去探访一个“社交很广”的基因的时候了。这个基因的惟一功能就是把身体里一些不同的功能组织到一起。这个基因的存在昭示出我们有关肉体—精神的二重性是个谎言,它侵蚀着我们对人的认识。大脑、身体和基因组是被捆在一起的三个舞伴。基因组与另两者相互控制。这多少说明了为什么基因决定论是一个神秘的东西。人类基因的激活与关闭可以被有意识的与下意识的外界活动所影响。

    胆固醇是一个充满危险的词。它是心脏病的病因,是个坏东西,是红肉,你吃了就要死的。其实,把胆固醇与毒药等同起来的做法是错得不能再错了。胆固醇是身体的一个基本成分,它在一个微妙的将身体各部分组织到一起的生物化学与遗传系统里占有中心位置。胆固醇是一类很小的有机物,能溶解在脂肪里,不能溶解在水里。身体利用来自饮食的糖类合成它所需要的大部分胆固醇,没有它,人就活不下去。起码有五种至关重要的激素是由胆固醇出发制成的,每一个都有独特的功能:孕酮、醛固酮、皮质醇、睾酮和雌二醇。它们总称类固醇。这些激素与身体中的基因的关系既亲密又迷人,却也让人不安。

    类固醇激素被生命体使用了很长时间,也许比植物、动物和真菌的分道扬镳还要早。促使昆虫蜕皮的激素就是一种类固醇。在人类医学里那个被人们称为维生素D的谜一般的物质也是类固醇。有些人工合成的(或说是合成代谢)类固醇可以骗身体去抑制炎症,另外一些则可以用来强化运动员的肌肉。但是还有一些类固醇,虽然是从植物中提取出来的,却与人类的激素足够相似,可以用做口服避孕药。还有另外一些是化学公司的产品,也许它们要为被污染的水流中雄鱼雌化以及现代男人精子数目的减少负责。

    在第十号染色体上有一个基因名叫CYP17。它制造一种酶,使得身体能够把胆固醇转化成皮质醇、睾酮和雌二酮。如果没有这个酶,这个转化途径就被堵上了,那个时候,从胆固醇就只能造出孕酮和皮质酮。没有这个基因的正常形式的人无法制造出其他的性激素,所以他们就无法进入青春期之后的阶段。如果他在基因上是男性,他也会长得像个少女。

    但是先把性激素往旁边放一放,来考虑一下用CYP17造出的另一种激素:皮质醇。人体内的几乎每一个系统都用得上皮质醇,它名副其实地是一个把身体和精神结合起来的激素,因为它可以改变大脑的结构。皮质醇干预免疫系统,改变耳朵、鼻子和眼睛的灵敏度,改变各种身体机能。当你的血管里流动着很多皮质醇的时候,你就处于压力之下,这是压力的定义。皮质醇与压力几乎就是同义词。

    压力是由外部世界造成的,一个将要来临的考试、最近一个亲人的死亡、报纸上的什么吓人的消息或者因为照顾一个早老性痴呆症病人而感觉到的无休止的劳累。造成短暂压力的因素会导致肾上腺素与去甲肾上腺素的迅速上升,这两种激素使心跳加快,双脚冰凉。这两种激素在紧急情况下让身体做好“打还是跑”的准备。造成长期压力的因素激活一条不同的路径,结果是皮质醇缓慢而持续地增加。皮质醇最惊人的效应之一是它能够抑制免疫系统的工作。那些准备一个重要考试并出现了受到心理压力之后特有的生理特点的人更容易得感冒或受到其他感染,这是一个很重要的事实,因为皮质醇的效应之一就是减少淋巴细胞—白细胞的活性、数量和寿命。

    皮质醇靠激活基因来做到这一点。它只激活内含皮质醇受体的细胞里的基因,皮质醇受体则是由其他某些开关来控制的。它激活的那些基因的主要功能,是激活其他一些基因,有些时候,再激活的基因又去激活其他的基因,如此下去。皮质醇的间接影响可以多至几十甚至几百个基因。但是这个过程的开端——皮质醇的产生则是因为肾上腺皮质里有一系列的基因被激活了,它们制造出了生产皮质醇所需的酶,CYP17蛋白质就是其中之一。这是一个让人头昏眼花的复杂系统:如果我只是试着列出最基本的化学反应链,就能让你闷得要哭。所以,也许这样说就足够了:你需要几百个基因来生产和调节皮质醇并对皮质醇做出适当反应,而几乎所有这些基因的作用都是把其他基因激活或关上。这是很适时的一课,因为人类基因组里大部分基因的功能就是调节其他基因的表达。

    我说过我不会让你觉得闷,但还是让我们瞟一眼皮质醇的一个效应吧。在白细胞里,皮质醇几乎肯定参与了激活一个名叫TCF的基因,也在十号染色体上,这样,TCF就可以制造自己的蛋白质,然后用它去抑制一个名叫白介素二号的蛋白质的表达。白介素二号是一种使白细胞高度警惕、提防微生物的袭击的化学物质。所以,皮质醇会抑制你的免疫白细胞的警惕性,从而使你更容易得病。

    我想放在你面前的问题是:到底谁是管事儿的呢?是谁在一开始就把这些开关都放在了合适的位置上?又是谁决定什么时候把皮质醇释放出来?你可以说基因是管事儿的,因为身体的分化——身体内形成不同的细胞类型,在每一类型内活跃着的基因都不同——归根结底是个遗传的过程。但是这是不确切的,因为基因并不会引起生理和心理压力。一个所爱的人的死亡或是一个即将来临的考试并不与基因直接对话。它们只是经过大脑处理的信息。

    那么,大脑是管事儿的了?脑子里的下丘脑会发出一个信号,让脑垂体释放一种激素,它会告诉肾上腺皮质去制造和分泌皮质醇。下丘脑则是从大脑里有意识的那些区域接受指令,而这些区域又是从外部世界中得到信息。

    但是这也不能算是个答案,因为大脑也是身体的一部分。下丘脑之所以刺激脑垂体,脑垂体之所以刺激肾上腺皮质,并不是因为大脑认识到了这是一个很好的办法。大脑并没有设立这样一套系统,让你在要考试的时候就容易得感冒。是自然选择设立的这样一个系统(原因我稍后会解释)。而且,无论如何,这样一个系统都是非自主、无意识的举动,也就是说,是考试,而不是大脑,在主导这一切事件。如果考试才是罪魁祸首,那么我们就应该怪社会了,但是社会又是什么?也不过是很多个体的集合,于是我们就又回到身体上来了。另外,对抗压力的能力也因人而异。有些人觉得即将来临的考试非常恐怖,其他人却一路顺利。区别在什么地方?在制造、控制皮质醇与对皮质醇做出反应这一系列事件的链条上,易受压力的人与那些对压力没有什么反应的人相比,肯定有一个地方在基因上有细微的差别。但是,又是谁、是什么控制着这个基因上的差别呢?

    真正的情形是,谁也不是管事儿的。让人们习惯于这样一个事实是太难了,但是,世界充满了错综复杂的系统,它们设计巧妙,部件之间相互紧密地联系着,但是却没有一个控制中心。经济就是这样一个系统。有一个幻觉是如果有人去控制经济——决定什么产品应该由什么人在什么地方生产——它就会运转得更好。这个想法给全世界人民的健康和富裕都带来了巨大灾难,不仅是前苏联,在西方世界也是如此。从罗马帝国到欧洲国家联盟的高清晰度电视计划,由一个中心做出的应该在哪个方面投资的决定比无中心的市场调节而成的“混乱”差远了。所以,经济系统不应有控制中心,它们是由分散的因素来控制的市场。

    人体也是这样。你不是一个通过释放激素来控制身体的大脑,你也不是一个通过激活激素受体来控制基因组的身体,你也不是一个通过激活基因来激活激素来控制大脑的基因组。你同时又是以上所有这些。

    心理学里很多最古老的说法可以概括成此类错误概念。支持与反对“遗传决定论”的理论都事先假设基因组的位置是在身体之上的。但是就像我们看到的那样,是身体在需要基因的时候把它们激活,身体之所以这样做,常常是因为它是在对大脑(有时还是有意识的)对外部事件的反应做出回应。你可以只靠想象那些给人压力的场景——甚至是虚构的——就可以提高你体内的皮质醇水平。与此相似,争论一个人所受到的某种痛苦纯粹是精神上的原因还是也有部分是生理上的原因——例如ME,或叫慢性疲劳综合症,是完全不对的事情。大脑与身体是同一个系统的两个部分。如果大脑在回应心理上的压力时刺激了皮质醇的释放,而皮质醇抑制了免疫系统的活性,从而一个潜伏在体内的病毒感染得以发作起来,或是一个新的病毒得以进入身体,那么症状虽然是生理上的,原因却是心理上的。如果一种疾病影响到大脑,从而改变人的心情,那么原因虽是生理上的,症状却是心理上的。

    这个题目被称做心理神经免疫学,它正在慢慢地成为时尚。抵制它的多是医生,而把它吹得很神的是各种给人实施信心疗法的人。但是,证据却是足够真实的。长期心情不好的护士更容易得冻疮,虽然其他护士可能也带有同样的病毒。焦虑的人比起心情好的乐天派,更容易得阵发性的生殖系统疱疹。在西点军校,最容易得单核细胞增多症和得了这种病之后最容易出现严重症状的,是那些被功课压力搞得焦虑不安的学生。那些照顾早老性痴呆症患者的人(这是个压力很大的工作)的抗病T淋巴细胞要比估计的少。在三厘岛(Three Mile Island)核设施事故(1979年在美国东部宾夕法尼亚州附近三厘岛核电站发生的核泄漏事故)发生时居住在那附近的人,事故发生三年之后得癌症的比估计的多,并不是因为他们受到了放射线的伤害(他们并没有),而是因为他们的皮质醇大量增加,降低了免疫系统对癌细胞的反应。那些受到配偶死亡之痛的人,之后几个星期之内免疫力都比较低。父母如果在前一个星期里吵过架,那么他们的孩子就更容易得病毒感染。在过去的生活中有过心理压力的人,比起那些一直生活愉快的人来更容易患感冒。如果你发现这些研究有点让人难以置信,那么我告诉你,这些研究中的大多数在老鼠身上也能够得到相似结果。

    可怜的老勒内·笛卡儿(René Descartes)(17世纪数学家、科学家、哲学家),人们通常说是他发明了主宰了西方世界的身心二元论,使得我们拒绝接受精神可以影响肉体、肉体也可以影响精神这样一个观点。把这个归罪于他可不公平,这是我们大家都犯的错误。而且,不管怎样,并不都是二元论的错——这个理论本来是说有一个存在于组成大脑的物质之外的精神。我们都犯过一个比这更大的错误,犯这个错误如此容易,我们自己都没有察觉。我们直觉地假设身体里的生物化学反应是因,行为是果,我们还在思考基因对我们生活的影响的时候把这个假设推到可笑的极致。如果基因对行为有影响,那么基因就是因,就是不可变的。这个错误不仅遗传决定论者会犯,他们那些吵闹的反对者也犯,这些反对者认为行为“不是在基因里”,他们说行为遗传学所暗示的宿命论和先决论让人反感。他们给了遗传决定论者太多余地,没有对“基因是因”这个假设提出疑问,因为他们自己也做了同样的假设:如果基因是与行为有关的,那么基因肯定是在金字塔的顶端。他们忘记了,基因是需要被激活的,而外界事件——或者说,由自由意志控制的行为——可以激活基因。我们可远不是缩在我们那无所不能的基因脚下,受它们的恩赐,我们的基因经常是受我们的恩赐。如果你去玩“蹦极”,或者找一份压力很大的工作,或者持续地想象一个可怕的事情,你会提升你体内的皮质醇水平,而皮质醇就会在你的身体内跑来跑去地激活各种基因。(还有一个无可置疑的事实,就是你可以用故意而为的微笑来刺激你大脑里的“高兴中心”,就像你可以用一个愉快的想法来使你微笑一样。微笑真的会让你觉得愉快一些。生理变化可以被行为调动。)

    关于行为怎样改变基因表达,有些最好的研究是用猴子做的。很幸运的,对于那些相信进化论的人来说,自然选择是个俭省得可笑的设计师,她一旦想出了一个基因与激素的系统用来显示和对付身体所受的压力,她就很不情愿修改了。(我们的98%是黑猩猩,94%是狒狒,还记得吧?)所以,在我们体内与在猴子体内,有同样的激素用同样的方法激活同样的基因。在非洲东部有一群狒狒,它们血液中的皮质醇水平被人们仔细地研究过。雄狒狒到了一个特定年龄都惯于加入一个狒狒群。当一只年轻的雄狒狒刚刚加入一个狒狒群的时候,他变得极赋进攻性,因为他要通过打架来建立他在自己选择的这个“集体”里的地位。他的这一行为使得他这位“客人”的血液里的皮质醇浓度大幅上升,他的那些不情愿要他的“主人”们血液皮质醇浓度也上升了。随着他的皮质醇(以及睾丸酮)浓度上升,他的淋巴细胞的数量减少了,他的免疫系统直接受到了他的行为所造成的冲击。与此同时,在他的血液里,与高浓度脂蛋白(HDL)结合在一起的胆固醇越来越少。这样的现象是冠状动脉堵塞的一个经典的前兆。这个雄狒狒通过自己的自由意志在改变自己的激素水平,于是也就改变了自己体内的基因表达,这样,他便增加了自己受微生物的感染与得冠状动脉疾病的机会。

    在动物园里生活的那些得冠状动脉疾病的猴子都是在尊卑顺序里最下层的。它们被那些地位更高的同伴欺负,持续地感受到压力,血液里皮质醇浓度高,大脑里缺乏5-羟色胺,免疫系统永久性地被抑制着,它们的冠状动脉壁上积满了伤疤。到底这是为什么,仍然是一个谜。很多科学家现在相信冠状动脉疾病至少部分是由于微生物感染而引起的,例如一种球状的革兰氏阴性细菌和疱疹病毒。压力带来的是降低免疫系统对这些潜伏的感染的监视,使得它们得以繁荣起来。在这个意义上,也许在猴子那里心脏病是一种传染病,虽然压力也会有一定作用。

    人和猴子很像。在尊卑次序里靠底层的猴子容易得心脏病这一发现,是紧跟着另一个更让人吃惊的发现之后做出的。另外一个发现是:英国的公务员得心脏病的可能性是与他们在这个官僚机构里的地位有多低成正比的。一个大型、长期的研究调查了1.7万名在伦敦警察局工作的公务员,一个几乎令人无法置信的结果出现了:一个人在工作中的地位比他是否肥胖、是否吸烟和是否血压高更能准确地预示这个人是否有心脏病。一个做低级工作的人,比如清洁工,比起一个在人堆儿上面地位稳固的秘书,得心脏病的可能高几乎三倍。实际上,即使这个秘书很胖、有高血压,或者吸烟,在每一年龄段他得心脏病的可能性仍然小于一个很瘦、血压正常且不吸烟的清洁工。在60年代对100万名贝尔电话公司雇员的一个类似调查中也得到了同样的结果。

    把这个结论考虑一分钟。它把别人告诉过你的所有关于心脏病的知识都给削弱了,它把胆固醇推到了故事的角落(胆固醇高是一个危险因素,但是只在那些因为遗传原因而容易高胆固醇的人那里才是如此,而且即使对于这些人,少吃含脂肪食物的收益也很小)。它把饮食习惯、吸烟和血压——医学界喜欢把这三者说成是心脏病的生理原因——变成了间接的致病因素。它把一个陈旧和已经不太为人所信的说法变成了一个脚注,这个说法认为压力和心脏病来自于繁忙的职务高的工作,来自于喜欢快节奏生活的个性。这个说法有一丝真理在里面,但不多。科学研究把这些因素的作用都降低了,取而代之的是与生理状况无关的纯粹环境的因素:你在工作中的地位。你的心脏是否健康要看你拿的薪水怎么样。这到底是怎么回事呢?

    猴子那里有些线索。它们在尊卑次序里越低,它们就越无法控制自己的生活。公务员也如此。皮质醇浓度的提高不是看你面对的工作数量多还是少,而是看你被多少人呼来喝去。实际上你可以通过实验来演示这个效果:给两组人同样多的工作,但是命令一组人用一种规定的方法去做这个工作,必须遵守某个事先规定的进度。这一组被外界控制的人比起另外一个组来,体内因压力而释放的激素浓度更高,血压升高,心率加快。

    在对伦敦警察局雇员进行的研究开始20年之后,同一项研究在一个已经开始私有化的公众服务部门里被重复了一次。在研究一开始,公务员们都不知道失业意味着什么。事实上,当研究者们为这项研究设计问卷的时候,被调查对象对问卷中的一道题提出了异议,这道题是问他们是否害怕失去自己的工作。他们解释说,在公众服务这个行业,这个问题根本没有意义,他们最多会被转到另外一个部门去。到了1995年,他们就清楚地知道失去工作意味着什么了,三分之一以上的人已经尝过失业的滋味了。私有化的结果,是给了每个人这样一种感觉:他们的生活是受外部因素控制的。一点也不令人吃惊地,心理压力增加了,健康情况随之下降了,健康情况恶化的人数之多,无法用饮食、吸烟、喝酒方面习惯的改变来解释。

    心脏病是自己无法控制自己的生活时出现的症状,这样一个事实解释了它的出现为什么是分散的。它也能够解释为什么那么多有高级职务的人退休“享受悠闲生活”之后不久就会得心脏病。他们常常是从指挥一个办公室“沦落”到在由老伴做主的家庭环境里干一些“低级”的需要动手的活儿(洗碗、遛狗之类)。它能够解释为什么人们可以把某一种疾病甚至是心脏病的发生推迟到一个家庭成员的婚礼或是一个重大庆典之后——直到他们操持、忙碌、做出决定之后。(学生也是更容易在紧张的考试之后生病,而不是在考试期间。)它能够解释为什么失业和靠救济金生活是如此有效的让人生病的办法。在猴群里面,没有一只雄性首领是像政府的社会福利署控制那些领救济金的人那样来铁面无私地控制它属下的猴子的。它甚至有可能解释为什么那些窗户不能被打开的现代化大楼会让人容易生病,因为在老式楼房里面人们能够对自己的环境有更多的控制。

    我要再强调一遍我已经说过的话:行为远不是受我们的生物特性所控制,我们的生物特性常常是受我们的行为控制的。

    我们发现的皮质醇的特点对于其他类固醇激素也适用。睾丸酮在体内的水平与进攻性成比例。但这是因为这种激素导致进攻性,还是因为进攻性导致这种激素的释放?我们的唯物主义思维使得我们发现第一种说法比较可信。但是事实上,对于狒狒的研究表明,第二种说法却更接近于真理。心理变化先于生理变化而出现。精神驱动身体,身体驱动基因组。

    睾丸酮和皮质醇一样可以抑制免疫系统。这就解释了为什么在很多物种里雄性比雌性容易染病,染病之后的死亡率也比雌性高。免疫机制的抑制不仅仅只反映在身体对于微生物的抵抗力方面,也反映在对于大的寄生虫的抵抗力方面。牛蝇在鹿和牛的皮肤上产卵,孵出来的蛆虫先要爬进这些动物的肉里去,然后才返回到皮肤上去做一个小“窝”在里面变成蝇。挪威北部的驯鹿就特别为这种寄生虫所困扰,但在雄鹿身上又明显地比雌鹿身上更严重。平均来说,到了两岁的时候,一只雄鹿身上牛蝇的“窝”比雌鹿身上要多两倍。但是,被阉割了的雄鹿身上牛蝇的“窝”又与雌鹿差不多了。类似的模式在观察很多寄生虫的时候都会发现。例如,包括引起南美锥虫病的原生动物,人们普遍认为这种病就是达尔文长期不适的原因。在智利旅行的时候,达尔文曾被传播南美锥虫病的虫子叮咬过,他后来的一些症状也与这种病相吻合。如果达尔文是个女人,他也许就用不着花那么多时间替自己委屈了。

    但是在这里,我们从达尔文那里得到启发。睾丸酮抑制免疫系统的功能这一事实被自然选择的表弟——性别选择——给抓住并且很充分地利用了。在达尔文论进化的第二部著作《人类的由来》里,他提出了这样一个想法:就像育鸽子的人能够培养良种鸽子一样,女人也可以培养“良种”男人。如果雌性动物在连续多代里用固定的标准来选择与谁交配,她们就可以改变她们这个物种里雄性的身体形状、大小、颜色或歌声。事实上,就像我在关于X和Y染色体的那一章里讲过的,达尔文提出过,这样的事在孔雀里就发生过了。在他之后一个世纪,一系列的实验与理论研究在70年代和80年代证明了达尔文是正确的。雄性动物的尾巴、羽毛、角、歌声和身体大小都是由于一代一代的雌性动物在择偶时条件一致而逐渐形成的。

    但是为什么呢?一个雌性动物选了一个长尾巴或是大声唱歌的雄性动物,她能得到什么可以想见的好处呢?在人们的争论中,有两个受人欢迎的理论占了主要位置。一个是说,雌性动物需要迎合时尚,否则她们生的儿子可能就不会被那些迎合时尚的雌性动物选中。另一种理论是我想在这里让读者考虑的,那就是雄性体表那些“装饰物”的质量以某种方式反映了他的基因的质量,尤其是反映了他对流行疾病的抵抗力。他是在对所有愿意倾听的人说:看我是多么强壮啊,我能够长一条长长的尾巴,能够唱这么动听的歌,是因为我没有得疟疾,也没有生寄生虫。睾丸酮能够抑制免疫系统这一事实其实是帮助了雄性,使他的“话”更加真实可信。这是因为他那些“装饰物”的质量取决于他血液里睾丸酮的浓度:他体内的睾丸酮越多,他的外表就越五颜六色,身体就越大,越会唱歌,也越有进攻性。如果他能够在免疫机能被睾丸酮降低了的情况下不仅不生病,还能长一条大尾巴,那么他的基因肯定很了不起。这几乎像是免疫系统把他的基因“真相”掩盖住了,睾丸酮则把帷幕掀开,让雌性直接看看他的基因到底怎么样。

    这个理论被称做免疫竞争力缺陷,它是否正确,取决于睾丸酮对免疫系统的抑制作用是否真的不可避免。一个雄性动物无法既提高睾丸酮的浓度又使免疫系统不受影响。如果这样一个雄性动物存在,他无疑是一个巨大的成功,会留下许多后代。因为他既能长一条长尾巴又能有免疫力。因此,这个理论暗示着类固醇与免疫能力之间的联系是固定不变、不可避免的,也是非常重要的。

    但是这就更让人迷惑了。没有人能够解释为什么这个联系一开始会存在,更别说它为什么是不可避免的了。为什么身体被设计成这样,它的免疫系统要被类固醇激素抑制?这个设计意味着每当生活中的事件使你感到压力的时候,你就更容易受微生物感染,更容易得癌症和心脏病。这简直是在你倒地的时候上去踢你一脚。它意味着每当一个动物提升自己的睾丸酮浓度以与对手争夺配偶或是向异性展示自己的时候,他就更容易受微生物感染,更容易得癌症和心脏病。为什么?

    不少科学家都为这个谜绞过脑汁,但是收获甚微。保罗·马丁(Paul Martin)在他关于心理神经免疫学的书《患病的意识》中,讨论并否定了两种解释。第一种解释是说,这一切只是一个错误,免疫系统与对压力的反应之间的联系只是另外某些系统的副产品。就像马丁指出的,对于人体免疫系统这样一个有着复杂的神经与化学联系的系统来说,这是一个相当不令人满意的解释。身体里很少有哪个部分是偶然形成的、多余的或是没有用处的,复杂的系统更是如此。自然选择会无情地把那些抑制免疫系统的东西砍掉,如果它们确实没有用处。

    第二种解释是说,现代生活方式制造出的压力很多是不自然的、过久的,在以前的环境里这样的压力通常都是短暂的。这个解释同样令人失望。狒狒和孔雀是生活在很自然的环境里,可是它们——以及地球上几乎所有的鸟类和哺乳动物——也因类固醇而遭到免疫抑制。

    马丁承认这是令人不解的事。他不能解释压力不可避免地抑制免疫系统这一事实。我也不能。也许,就像迈克尔·戴维斯(Michael Davies)提出的那样,免疫系统功能的降低是在半饥饿的时候——在现代社会之前这是一种很常见的生存状态——保存能量的办法。也或许,对皮质醇的反应是对睾丸酮反应的副产物(这两种物质在化学成分上非常相似),而免疫系统对睾丸酮的反应则可能是雌性动物的基因故意安排在雄性动物体内的一个机制,用来把那些对疾病的抵抗力更强的雄性与其他的区别开来。换句话说,类固醇与免疫系统的联系也许是某种性别对抗的产物,就像在X和Y染色体那一章里讨论过的一样。我觉得这种解释也不太可信,不过,你要是有本事你想一个出来。

    第十五号染色体 性别

    所有的女人都会变得和她们的母亲一样,这是她们的悲剧;没有一个男人会变得和他们的母亲一样,这是他们的悲剧。  ——奥斯卡·王尔德:《不可儿戏》

    在马德里的普拉多博物馆,挂着两副17世纪宫廷画家胡安·卡瑞尼奥·德·米兰达(Juan Carreo de Miranda)的作品,叫做“穿衣服的恶魔”与“不穿衣服的恶魔”。它们描绘的是一个过于肥胖却一点没有魔相的五岁女孩,她的名字是尤金尼亚·马蒂拉兹·维耶候(Eugenia Martinez Vallejo)。很明显地她有些什么地方不对劲:她很肥胖,对于她的年龄来说是个巨大的人,有着非常小的手和脚和形状怪异的眼睛和嘴。她活着的时候或许是在马戏团被当成畸形人展出的。现在看起来,她很明显地有着一种罕见疾病——普拉德?威利(Prader-Willi)综合症——的所有典型症状。有这种症状的儿童,出生时身体软绵绵的、皮肤苍白,不肯吸母亲的乳头,在后来的生活中却吃饭吃得要把自己撑爆,从来就不觉得饱,所以就变得肥胖。在一个例子中,一个普拉德?威利综合症患儿的父母发现自己的孩子在从商店到家的途中,坐在汽车后座上吃完了一整磅生的熏猪肉。有这种病的人长着小手小脚和欠发育的性器官,智力也有轻微迟钝。他们时不时地会大发脾气,尤其是当他们想要食物而被拒绝的时候,但是他们也有一种能力,被一位医生称为“超群的拼图(jigsaw puzzle,一种游戏,从几百、上千块碎片拼出一副完整的图)能力”。

    普拉德?威利综合症是在1956年由瑞士医生首先确诊的。有一种可能是,它只是另外一种罕见的遗传病,是我在这本书里一再保证不写的那种病,因为基因的存在不是为了致病的。但是,关于这个基因有一些十分奇怪的东西。在80年代,医生注意到,普拉德?威利综合症有时会在一个家庭里以另外一种完全不同的疾病形式出现,不同之处如此之大,可以被称为是普拉德?威利综合症的反面。这种病叫做安吉尔曼综合症。

    当哈里·安吉尔曼(Harry Angelman)在兰开郡沃灵顿(Warrington,Lanca shire)做医生的时候,他第一次意识到,他所说的那些受着罕见疾病折磨的“玩偶孩子”是患有一种遗传疾病。与普拉德?威利综合症的患儿相反,他们身体并不软,反而绷得很紧。他们很瘦、异常地活跃、失眠、头很小、下巴很长,常把他们的大舌头伸出来。他们的动作一顿一顿的,像木偶一样,但是他们有着愉快的性格,总是微笑着,并时不时爆发出一阵大笑。但是他们永远学不会说话,智力严重迟钝。安吉尔曼症患儿要比普拉德?威利症患儿少得多,但是有些时候他们在同一个家族里出现。

    很快弄清楚了,十五号染色体上的同一个区域在普拉德?威利综合症和安吉尔曼综合症患者体内都丢失了。区别则在于,在普拉德·威利综合症患者里,丢失的部分来自父亲的染色体,而在安吉尔曼综合症患者里,丢失的部分来自母亲的染色体。同一种疾病,如果是通过一个男性传到下一代,就是普拉德?威利综合症;如果通过女性传到下一代,就是安吉尔曼综合症。

    这些事实对于我们从格雷戈尔·孟德尔以来了解到的有关基因的一切真是迎头一击。它们与基因组的数码特点似乎不太相符,这就意味着一个基因不仅仅是一个基因,它还带有一些它的出身的隐秘历史。一个基因“记着”它是从父母哪一方来的,因为在卵子受精的时候它得到了一个父方或母方的印记——就像是来自某一方的基因是用斜体字写的。在这个基因呈活跃状态的每一个细胞内,带有印记的那个基因拷贝是活跃的,另一个拷贝则不表达。这样,身体就只表达来自父方的那个基因(在普拉德·威利综合症的情况下),或只表达来自母方的那个基因(在安吉尔曼综合症的情况下)。这具体是怎么发生的我们全然不知,但是我们已经开始在了解它了。它的成因,将是一个不同寻常而又大胆的进化理论所要解释的。

    80年代晚期,在费城和剑桥的两组科学家有了一个出人意料的发现。他们试图制造出只有父亲或只有母亲的老鼠。由于那时从体细胞中直接克隆老鼠还不可能(在多莉(世界上第一只克隆成功的哺乳动物)之后,情况急转直下),费城的那组便把两个受精卵的“前核”做了交换。当一个卵细胞受精的时候,带有染色体的精子细胞核进入卵细胞,却并不马上就与卵细胞核融合在一起:这两个细胞核被称为“前核”。一个灵巧的科学家可以用他的移液管“潜入”受精卵,把精子的细胞核吸出来,把另外一个卵细胞的细胞核放进去;他也可以把卵细胞核取出来,放进另外一个精子细胞核。结果是他得到了两个受精卵,从遗传角度说,一个受精卵有两个父亲,没有母亲,另一个则有两个母亲,没有父亲。剑桥的那一组用了略为不同的技术,但得到的是同样的结果。但是,这两组得到的胚胎都没有能够正常发育,很快就死在子宫里了。

    在有两个母亲的那种情况里,胚胎本身有正常的结构,但它却无法制造一个胎盘来给自己获取营养。在有两个父亲的那种情况里,胚胎制造出了一个又大又正常的胎盘,也基本上有围绕着胎儿的膜。但是,在膜里面胚胎应该在的位置上,只有一小团没有结构的细胞,看不出头在哪里。

    这些实验结果引向了一个不寻常的结论。遗传自父方的基因负责制造胎盘;遗传自母方的基因负责胚胎大部分的发育,特别是头部和大脑。为什么会是这样的?五年之后,当时在牛津的大卫?黑格认为他得到了答案。他开始重新诠释哺乳动物的胎盘,不把它当成是一个用来维持胎儿生命的母体器官,却更把它看做是胎儿的一个器官,目的是寄生于母体的血液循环,在这个过程中又不服从于任何阻挡。他注意到,胎盘实实在在地是钻进母体的血管里去,迫使血管扩张,进而又产生一些激素提高母体的血压和血糖浓度。母体的反应是通过提高胰岛素的浓度来抵御这种“入侵”。但是,如果因为什么原因,胎儿的激素没有分泌出来,母体就不需要提高胰岛素的浓度,仍然有一个正常的怀孕期。换句话说,尽管母体和胎儿有共同的目标,两者却在细节上激烈地争吵,关于胎儿可以使用母体资源的多大一部分——同以后在婴儿断奶时的冲突一模一样。

    但是,胎儿的一部分是由来自母体的基因造成的,所以它们如果发现自己有些相互冲突的利益,也不足为奇。胎儿体内来自父体的基因就没有这样的问题。它们心里没有母亲的利益,她只是为它们提供了一个家。暂时用个拟人的说法,父亲的基因不太信任母亲的基因能够造就一个侵入性足够强的胎盘,所以它们要自己来完成这项工作。因此,我们才得以在有两个父亲的胚胎里发现胎盘基因上有父方的印记。

    黑格的理论做出了一些预测,很多在短时间内就被证实了。具体地说,它预测了给基因加印记这个过程在下蛋的动物里不存在,因为一个在蛋里的基因无法影响母亲在蛋白有多大这个问题上所做的投资:在它可以影响母亲之前,它就已经离开母体了。与此相似的是,袋鼠之类的有袋动物以口袋代替胎盘,从黑格的理论出发,也不会有带有印记的基因。到现在为止,看起来黑格是对的。基因标记是有胎盘的哺乳动物与种子依靠母体才能存活的植物所特有的。

    还有,黑格很快就带着胜利感注意到,一对新近发现的带印记的老鼠基因与他预测的功能一致:控制胚胎的发育。IGF2是由一个基因造出的非常小的蛋白质,与胰岛素类似。它在发育中的胎儿体内很充足,在成人体内却不被表达。IGF2R是另外一个蛋白质,IGF2与之连接起来,但是目的是什么,还不清楚。IGF2R的存在可能只是为了除掉IGF2。现在听好,IGF2和IGF2R基因都带有标记:前者只从来自父方的染色体表达,而后者只从来自母方的染色体表达。它看起来非常像是一场小小的竞赛:来自父方的基因鼓励胚胎的发育,来自母方的基因使其发育不要过度。

    黑格的理论预测,带有标记的基因通常会在这样相互作对的基因对里被发现。在有些情况下,甚至在人体内,他的预测是正确的。人体的IGF2基因位于第十一号染色体上,带有父方的印记。如果有人偶然遗传到了两个父方的版本,他就会受拜克维斯·魏德曼(Beckwith Wiedermann)综合症的折磨,心脏和肝脏会发育得过大,肿瘤在胚胎组织里也会比较常见。尽管人体内的IGF2R基因没有印记,倒确有一个带有母方印记的基因,H19,是与IGF2作对的。

    如果带有印记的基因之所以存在,只是为了跟对方作对,那么你就应该能够把两者的表达都停掉,对胚胎的发育应该没有任何影响。你能够这样做。把所有的印记都去掉,仍然能够得到正常的老鼠。我们又回到我们所熟悉的第八号染色体了,在那里基因是自私的,它们做对自己有利,而不是对整个身体有利的事情。基因标记几乎没有任何内在的目的性(尽管很多科学家曾做过相反的猜测);它只是基因自私的理论和两性冲突的一个具体事例。

    当你开始用基因都是自私的这个方式来思考的时候,一些真正奇怪的想法就进入了你的头脑。试一试这个吧。受父方基因影响的胚胎如果与其他拥有同一个父亲的胚胎一起分享子宫环境,它们的行为会和与其他拥有另一个父亲的胚胎一起分享子宫环境时不太一样。在后一种情况下它们或许有更多的自私的父方基因。这个想法一旦被想到,做一个自然的实验来验证这个预测就是相对容易的事了。不是所有的老鼠都是一样的。在一些种类的老鼠里面,例如Peromyscusmaniculatus,母鼠与多个公鼠交配,每一窝老鼠通常都有几个不同父亲的后代。在其他种类的老鼠里,例如Peromyscuspolionatus,母鼠只与一只公鼠交配,每一窝老鼠都有同一个父亲和同一个母亲。

    那么,当你让P.maniculatus与P.polionatus交配时,会发生什么呢?这取决于哪一种是父亲哪一种是母亲。如果多配偶的P.maniculatus是父亲,幼鼠生下来的时候就有巨大的个头。如果单配偶的P.polionatus是父亲,幼鼠生下来时个头就会很小。你看出来是怎么回事了吗?maniculatus的父方基因,因为估计着自己会与跟自己不同父的其他胚胎合住在子宫里,已经被自然选择培养出了与其他胚胎争夺母体资源的能力。maniculatus的母方基因,估计着子宫里的胚胎们会为了她的资源争斗不停,被自然选择培养出了反击的能力。在事态比较温和的polionatus的子宫里,气势汹汹的maniculatus的父方基因只遇到了一点象征性的抵抗,所以,它们赢了这场竞争:如果幼鼠有多配偶的父亲,它的个头就大;如果有多配偶的母亲,个头就小。这是基因标记理论的一个很清楚的演示。

    这个故事虽然很流畅,但并不是一丝漏洞都没有。就像很多吸引人的理论一样,它好得都不像真的。具体来说,它的一个预测没有实现:带有印记的基因应该是进化得比较快的。这是因为两性之间的冲突会成为分子之间“军备”竞赛的动力,每一种分子通过暂时获得先手而获益。通过一个物种一个物种地比较带有标记的基因,没发现有这种现象。相反地,带有标记的基因似乎进化得很慢。事情看上去越来越像是这样一种情况,即黑格的理论可以解释基因标记的一部分现象,却并非全部。

    基因标记有一个很有意思的后果。在一个男人体内,来自母体的第十五号染色体带有一个记号,说明自己来自母方。但是,当他把这条染色体传给自己的儿子或女儿的时候,它必须用某种方法得到一个记号表明自己是从他体内来的,亦即父方。它必须从一个母方染色体变成父方染色体。在母亲体内有相反的工作需要进行。我们知道,这样的一个转换肯定是发生了的,因为在一小部分安吉尔曼综合症患者体内,两条染色体都没有什么不正常的地方,只除了两者的行为好像它们都来自父方似的。这些是转换没有成功的例子。它们的原因可以被追回到上一代体内的某些突变,这些突变影响一个名叫基因标记中心的东西,它是一小段离有关基因很近的DNA,通过某种方法把父方的标记放到基因上去。这个标记就是一个基因的甲基化,就是我们在第八章里谈过的那种。

    你还记得吧?字母C的甲基化是使基因变得“沉默”的方法,它被用来把那些自私的基因“软禁”起来。但是,在胚胎发育的早期,所谓的胚囊形成的时候,甲基化被去掉了,然后在发育的下一个阶段,原肠胚形成的时候,又被重新加回来。不知为什么,带有标记的基因逃过了这一过程。它们顶住了去甲基的过程。关于它们是怎样做到这一点的,有一些很有意思的线索,但是还没有任何确定的答案。

    我们现在知道,带有标记的基因躲得过去甲基这个过程,是多年以来科学家试图克隆哺乳动物时的惟一障碍。蟾蜍可以很容易地被克隆,只需要把体细胞里的基因放进一个受精卵里即可。但是在哺乳动物那里这一招就是行不通,因为一个女性体细胞内的基因组带有一些被甲基化因而不再活跃的重要基因,男性体细胞里又有另外一些不活跃的基因,这些就是带有标记的基因。所以,在基因标记被发现之后,科学家们曾很自信地宣布,克隆哺乳动物是不可能的。一只克隆出来的哺乳动物,它的有标记的基因在它出生时要么在两条染色体上都表达,要么都不表达,如此就破坏了动物细胞所需要的合适的量,也就导致了发育的失败。发现了基因标记的科学家写到:“用体细胞的核来成功克隆哺乳动物之不可能性,是顺理成章的。”

    之后,突然之间,在1997年上半年,出现了多莉,克隆的苏格兰母羊。她与后来的那些克隆是怎样避开了基因标记这个问题,还是个谜,甚至对她的创造者来说也是如此。但是看上去,在克隆过程中给她的细胞施加的处理方法中,肯定有某一部分把基因的所有标记都抹掉了。

    第十五号染色体带有标记的那一段区域带有大概八个基因。其中的一个一旦被破坏,就会造成安吉尔曼综合症,这个基因叫做UBE3A。在这个基因的旁边是两个一旦被破坏就可能造成普拉德?威利综合症的基因,一个叫SNRPN,另一个叫IPW。可能还有其他的,不过现在让我们先假设SNRPN就是罪魁。

    这两种病并不总是因为这些基因的突变而发生,它们也可以产生于另外一种“事故”。当一个卵细胞在一个妇女的卵巢里形成的时候,它通常是每一条染色体都得到一份。但是在很少见的情况里,一对来自母方的染色体没有能够分离开来,那么,卵细胞就把同一条染色体得到了两份。在精子与卵子结合之后,胚胎就有了三条同样的染色体,两条来自母亲,一条来自父亲。这种情形在高龄孕妇那里更有可能,这对受精卵来说常常是致死的。只有在这三条染色体都是第二十一号染色体——最小的染色体——的时候,胚胎才能够发育成一个可以存活的胎儿,出生之后能够存活几天以上,结果就是唐氏综合症。在其他情况下,多余出来的染色体把细胞内的生物化学反应搅得乱七八糟,使胚胎发育无法成功。

    但是,在大多数情况下,在还没有到这一步的时候,身体就已经有办法来对付这个“三倍体”问题了。它干脆扔掉一条染色体,只留下两条,就像本来应该的那样。困难在于,它这样做的时候很盲目。它无法确定自己扔掉的是两条来自母方的染色体之一,还是惟一那条来自父方的。这样盲目地扔,有66%的机会把来自母方的多余染色体扔掉,不过事故也经常发生。如果它错误地扔掉了惟一那条来自父方的染色体,那么胚胎就高高兴兴地带着两条母方染色体继续发育。在大多数情况下这没有任何关系,但是,如果那“三倍体”是第十五号染色体,你就立刻会看出来将要发生什么。两份带有母方标记的UBE3A要被表达,带有父方标记的SNRPN却一份都没有。结果,就是普拉德?威利综合症。

    表面上看来,UBE3A不是什么有趣的基因。它制造的蛋白质是一种“E3泛蛋白连接酶”,这是一类存在于某些皮肤和淋巴细胞里的、不起眼的从事“中层管理”工作的蛋白质。然后,在1997年年中,三组科学家忽然同时发现,在老鼠和人类里,UBE3A在大脑里也表达。这无异于是炸药。普拉德?威利综合症与安吉尔曼综合症的症状都表明病人的大脑有些不同寻常。更让人惊讶的是,有很强的证据表明,其他一些带有标记的基因在大脑里也很活跃。具体地说,在老鼠里,大部分的前脑看起来都是由带有母方标记的基因造出来的,而大部分的下丘脑(位于脑子的基座处),则是由带有父方标记的基因造出来的。

    这种不平衡是通过一件构想巧妙的科学工作而发现的:老鼠“镶嵌体”的创造。镶嵌体是两个具有不同基因的个体身体的融合。它们在自然条件下就会出现——你可能见过这样的人,你可能自己就是这样的人,但是除非对染色体做细致的检查,你不会意识到。两个带有不同基因的胚胎可以融合起来,然后就像它们原本是一个那样地发育。可以把它们想成是同卵双生子的反面:一个身体里有两个不同的基因组,而不是两个不同的身体带有同样的基因组。

    比较而言,在实验室里制造老鼠的镶嵌体还是很容易的,小心地把两个早期胚胎的细胞融合起来即可。但是在这里,剑桥科研小组的独创性在于,他们把一个正常的老鼠胚胎与另外一个特殊胚胎融合起来了。这个特殊的胚胎,是由一个卵细胞核给另一个卵细胞“受精”而造出来的。这样,它就只带有母亲的基因,没有一点来自父亲的贡献。结果,生出了一只脑子奇大的老鼠。当这些科学家把一个正常胚胎与一个只来自父方的胚胎(从一个卵细胞产生,但是卵细胞的细胞核被两个精子细胞的细胞核所取代了)融合起来之后,结果刚好相反:一只身子大脑袋小的老鼠。通过给母方细胞加上一个类似于无线电信号的生化“信号”,用来报告它们所在的位置,科学家们得以做出了这样一个重大发现:老鼠大脑里大部分的纹状体、脑皮质、海马区都是由这些母方细胞组成的,但是这些细胞被排斥在下丘脑之外。脑皮质是加工来自感官的信息、产生行为的地方。相比之下,父方的细胞在脑子里比较少,在肌肉里则比较多。当它们出现在脑子里的时候,它们为下丘脑、杏仁体、视前区的发育出了些力。这些区域组成了“皮质下感情系统”的一部分,负责控制感情。一位科学家罗伯特·特利沃斯(Robert Trivers)的意见是:这样的区别所反映的是脑皮质需要完成与母方的亲友好好相处这样一项任务,而下丘脑是个自大的器官。

    换句话说,如果我们相信父方的基因不放心让母方基因去造一个胎盘,那么,大脑就是母方基因不放心让父方基因去造的。如果我们像老鼠一样,我们可能就会带着母亲的思想和父亲的感情在这世界上生活(如果思想和感情可以遗传)。在1998年,另外一个带有标记的基因在老鼠体内被发现了,它有个了不起的特点,就是它能够决定一只雌老鼠的母性行为。带有这个Mest基因的正常形式的老鼠是认真照料幼鼠的好妈妈。没有这个基因的正常形式的雌老鼠仍然是正常的老鼠,只是,她们是很差劲的妈妈。她们造不出像样的窝,幼鼠出去闲逛的时候这些妈妈也不把它们招回来,幼鼠身上脏了她们也不管,总的说来,她们好像无所谓。她们的幼鼠通常会死去。无法解释的是,这个基因是从父系遗传来的。只有来自父方的拷贝才有功能,来自母方的拷贝是不活跃的。

    黑格关于胚胎发育冲突的理论无法很轻易地解释这些现象,但是,日本生物学家岩佐庸(YohIwasa)有一个理论却可以。他提出,因为父亲的性染色体决定了后代的性别——如果他传下去一条X染色体,而不是Y染色体,后代就是女性——父方的X染色体就只有在女性体内才有。因此,女性特有的行为就应该只从来自父方的X染色体上表达。如果它们也在来自母方的X染色体上表达,它们就可能也会出现在男性身上,或者它们在女性体内会被表达得太多了。这样,母性行为带有父方遗传的标记就是很合理的事了。

    对这个想法的最好证明来自于伦敦儿童健康研究院的戴维·斯库斯(David Skuse)与同事们做的一项不寻常的观察。斯库斯找到了80位患有特纳综合症的妇女与小女孩,年龄在6~25岁之间。特纳综合症是由于X染色体的部分或全部缺失而引起的。男性只有一条X染色体,女性把她们所有细胞里的一条X染色体都保持在关闭的状态。从原则上说,特纳综合症就应该在发育上没有什么作用。实际也是如此,患有特纳综合症的女孩具有正常的智力和外表。但是,她们常常在“社交适应”方面有问题。斯库斯和他的同事们决定比较两种不同的患特纳综合症的女孩:一种是丢失了来自父方的X染色体,一种则丢失了来自母方的X染色体。25名丢失了母方X染色体的女孩,明显地比55名丢失了父方X染色体的女孩要适应得更好,有着“卓越的语言和高级控制能力,这些能力调节人际间的交往”。斯库斯与同事们是通过让孩子们做标准化的认知测试和给父母调查问卷的方式来估量社交适应能力的。在问卷中,他们询问父母孩子是否有如下的表现:意识不到别人的感受,意识不到别人的烦躁和怒气,对自己行为对家人的影响毫无察觉,总是要求别人陪伴,烦躁的时候很难与之讲道理,自己的行为伤害了别人自己却意识不到,不听命令,如此等等。父母必须回答0(一点都没有)、1(有些时候有)或2(经常如此)。然后,所有12个问题的回答被加起来。所有患特纳综合症的女孩都比正常的男孩女孩的总分高,但是,丢失了父方X染色体的女孩,比起丢失了母方X染色体的女孩,分数要高出一倍多。

    从这里引出的结论是,在X染色体上某个地方有一个带有标记的基因,它在正常情况下只从父方的X染色体上表达,而这个基因通过某种方式促进社交的适应能力——例如,理解别人感受的能力。斯库斯与同事通过观察只丢失了部分X染色体的孩子,又为这种理论提供了进一步的证据。

    这项研究有两个深远的影响。第一,儿童自闭症、阅读困难症、语言障碍以及其他与人相处方面的问题都是在男孩中比在女孩中更普遍,这项研究为这样的现象提出了解释。一个男孩只从他母亲那里收到一条X染色体,也就是说,他收到了带有母方标记的一条,那么促进社交能力的这个基因就是不被表达的。在我写下这句话的时候,这个基因还没有被定位,不过,我们知道有些X染色体上的基因确是带有标记的。

    但是,第二个影响更有普遍意义。在20世纪后半期一直持续的一个有点可笑的争论是两性之间的差异,它把先天因素与环境因素对立起来了。而我们现在开始看到了这个争论结束的可能性。那些喜欢环境因素的人曾经试图否认先天因素的任何作用,而那些喜欢先天因素的人却很少否认环境因素也有作用。问题不在于环境因素是否有作用,因为没有任何一个头脑清醒的人会否认它的作用。问题在于,先天因素是否有作用。当我写这本书的时候,我的一岁女儿有一天在一个玩具童车里发现了一个塑料娃娃,她发出的那种兴奋的尖叫,是我儿子在同样年龄的时候看到过路的拖拉机时会发出的。像很多家长一样,我很难相信这只是因为我们下意识地加给了他们一些“社会规范”。在最早开始的自主活动里,男孩和女孩就有系统的差异。男孩有更强的竞争性,对机器、武器和动手做事更感兴趣,而女孩则对人、衣服和语言更感兴趣。说得更大胆一些,男人喜欢地图、女人喜欢小说可不仅仅是后天培养的结果。

    不管怎么说,一个完美的(虽然人们没有意识到它的残酷)的实验已经被那些只相信环境因素的人做了。60年代,在温尼佩格(Winnipeg,加拿大的一个城市),一个失败的包皮切除手术给一个小男孩留下了一个严重损坏了的阴茎,后来医生决定把它切掉。他们决定,通过阉割、手术和激素治疗等方法把这个男孩变成女孩。约翰变成了琼,她穿了裙子,玩了布娃娃。她长大成了一个年轻女子。1973年,一个弗洛伊德派的心理学家,约翰·莫尼(John Money),突然对公众宣布,琼是一个适应得很好的少年人,她的例子也就结束了一切猜测:性别角色是通过社会环境建立的。

    一直到了1997年,才有人去核对事实。当米尔顿·戴蒙德(Milton Diamond,性别研究专家)和济茨·西格孟德森(Keith Sigmundson,心理学家)找到了琼的下落的时候,他们找到的是一个娶了一位女子、生活幸福的男人。他的故事与莫尼的说法非常不同。在他还是孩子的时候,他就总是深深地为什么事情感到非常不快乐,他总是想穿裤子,想跟男孩子混在一起,想站着撒尿。在14岁的时候,他的父母告诉了他发生过的事情,这让他松了一口气。他停止了激素治疗,把名字又改成了约翰,恢复了一个男性的生活,通过手术切除了乳房,在25岁的时候,他与一个女子结婚,并成了她的孩子的继父。他曾经被当成是性别由社会环境决定的证明,他却证明了这个理论的反面:先天因素在性别的决定上是有作用的。动物学的证据一直是指向这个方向的:在大多数物种里,雄性行为与雌性行为有着很系统的差异,这些差异有着先天成分。大脑是有先天性别的器官。从基因组、有标记的基因、与性别相关的行为诸方面得来的证据,现在都指向同一个结论。

    第十六号染色体 记忆

    遗传为修改它自己提供了方法。 ——詹姆斯•马克•鲍德温,1896

    人类基因组是一部书。一个有经验的技术员通过通读并认真对待不寻常的地方,比如基因标记,就可以造出一个完整的人体。如果有正确的阅读与诠释的方法,一个有能力的现代弗兰肯斯坦(小说《弗兰肯斯坦》中的主人公,是一个医学院的学生;玛丽•雪莱是该书的作者)也可以做到这一点。但是,之后又怎样呢?它可以造出一个人体,并注之以生命之泉,但是,如果“他”要真正地生活,“他”就不仅仅需要存在,还需要做到其他一些什么。“他”需要对外界因素适应、变化、做出反应。“他”需要获得自己的独立性。“他”需要摆脱弗兰肯斯坦的控制。有一种感觉就是,基因们必然失去对它们所创造出来的生命的控制,就像玛丽•雪莱(Mary Shelley)的小说里那个不幸的医学院学生那样。它们必须给“他”自由,让“他”找到“他”自己的生活之路。基因组并不告诉心脏应该什么时候跳动,也不告诉眼睛什么时候应该眨,也不告诉思维什么时候应该思想。即使基因确实为人的性格、智力和人性规定一些变量,并且是以惊人的准确性规定了这些变量,它们知道什么时候应该把权力下放。这里,在第十六号染色体上,存在着一些重要的放220权者:允许学习和记忆的基因。

    也许在很惊人的程度上,我们人类是由我们的基因的“意志”决定的,但是,在更大的程度上我们是由我们一生中所学到的东西决定的。基因组是处理信息的计算机,它通过自然选择从周围世界吸收有用的信息,然后把这些信息汇入它自己的设计图中。进化在信息处理方面慢得要命,常常需要好几代才能够产生一点变化。因此,基因组发现,发明一种快得多的机器对它很有帮助,这就一点也不奇怪了。这个机器的工作是在几分钟甚至几秒钟之间从周围世界里提取信息,并把它整合到自己的行为里去。这个机器就是大脑。你的基因组给你提供了神经,告诉你什么时候你的手被烫到了。你的大脑则给你提供把手从炉台上拿开的动作。

    “学习”是神经科学和心理学的范畴,它是本能的反面。本能是遗传决定的行为,学习则是由经验来调节的行为。心理学中的行为学派在20世纪的大部分时间里都希望我们相信这两者没有什么共同之处。但是,为什么有些事情是通过学习得到的,有些却来自于本能?詹姆斯•马克•鲍德温(James Mark Boldwin)——这一章里的英雄人物——是19世纪一个很不起眼的美国进化理论家。他在1896年写了一篇文章,总结了一场哲学争论。他的文章在当时没有什么影响,事实上,在那之后的91年里也没有什么影响。但是,幸运的是,在80年代晚期,一组计算机科学家把它从默默无闻之中翻了出来,他们认为,他的理论对他们面临的如何教计算机学习的问题有很大的相关性。

    鲍德温试图解释的问题是:为什么有些事情是一个人在他的一生里学习到的,而不是事先设计好的本能。有一个被广泛认同的信念,那就是:学习是好的,本能是坏的,或者说,学习是更先进的,而本能是更原始的。因此,人类的一个标志就是:所有那些对于动物来说是很自然的事情,我们人类都需要学习。人工智能的研究者们遵循着这个传统,很快就把学习放到了最重要的位置上:他们的目的是要造出有多种用途、能够学习的机器。但是这不过是一个事实上的错误。人类通过本能得到的,与动物通过本能得到的一样多。我们爬行、站立、行走、哭泣、眨眼时那种下意识的方式,与一只鸡的方式也差不多。只是在我们移植到动物本能之上的那些事情上,我们才使用学习这一方法:诸如阅读、开车、去银行存款、购物等事情。“意识的主要作用”鲍德温写道:“是使儿童学习遗传没有给他们的东西。

    而且,通过迫使我们学习什么事情,我们把自己放在了一个有选择性的环境里,这个环境很看重一个人把学到的东西变成直觉从而在将来能够用本能来解决问题。这样,学习就慢慢为本能让路。我在讲述第十三号染色体时谈到过,产奶动物的养殖给身体出了一个难题:消化不了的乳糖。第一个解决办法是文化上的:制造奶酪,但是后来身体进化出了一个内在的解决方法,即把乳糖酶的制造持续到成年。如果不识字的人在足够长的时期内在繁殖后代方面处于劣势,也许最终识字都会变成一种内在的特性。实际上,因为自然选择的过程就是从环境中提取有用的信息并把它在基因里储存起来,你也许可以把人类基因组看成是40亿年以来积累起来的学习成果。

    但是,把学到的事情变成本能的优势是有限度的。在口头语言这个例子里,我们有很强的语言本能,但这是一个可塑性很强的本能。如果自然选择一路干到底,甚至把词汇也搞成是本能的东西,那就明显地是发疯了。如果那样,语言就会成为一个太没有可变性的工具:没有一个词用来指代计算机,我们就必须把它描绘成“当你与它交流时它能够思考的东西”同样地,自然选择想到了(原谅我这种目的论的说法)要给迁徙的鸟类一个用星座导航的系统,但是这个系统不是完全装配好的。因为岁差的缘故,正北的方向是在逐渐变化的。鸟类的每一代都能够通过学习来校正自己的星座罗盘,这是生死攸关的事。

    鲍德温效应是文化进化与遗传进化之间微妙的平衡。它们不是事物的相反两面,而是伙伴的关系,它们互相影响,以求达到最好的效果。一只鹰可以从父母那里学到生存本领,从而更好地适应自己的生存环境。一只布谷鸟则必须把所有本事都建立在本能之中,因为它永远见不到自己的父母。(布谷鸟自己不孵卵,而是把卵产在别的鸟的巢里)它必须在从蛋里出生之后的几小时之内就把所寄居的鸟窝里养父母的孩子赶走;必须在幼年时期就迁徙到非洲适合它生活的地方,并且要在没有父母带领的情况下完成;它必须发现怎样找到毛毛虫并以它们为食;必须在第二年春天返回自己的出生地;必须给自己找到一个配偶;必须为自己的孩子找到一个合适的有主儿的巢。这些都靠的是一系列本能的行为,再加上一次次谨慎的从经历里的学习。

    就像我们小看了人类大脑对本能的依靠程度,我们也小看了其他动物学习的能力。例如,人们已经揭示出野蜂从自己的经历中学到很多如何从不同种类的花里采集花蜜的本事;如果只练习过在一种花上采蜜,它们见到另一种花时就会不知所措,直到练习过一阵。但是,只要它们知道怎样对付一种花,它们对付起形状相似的花来就更容易。这就证明了它们不仅仅只是记住了每一种花的特性,而是总结出了一些抽象的原理。

    另外一个从与野蜂一样的低等动物那里得到的动物学习的著名例子是海参。很难想象有比它更卑微更简单的动物了。它既不怎么动又小、又简单、又不出声。它有个极小的脑,它的一生中就是进食与交配,从来不精神紧张。让人羡慕。它既不会迁移也不会交流,不会飞也不会思考。它只是存在着。与布谷鸟甚至是野蜂比起来,它的生活太容易了。如果简单动物运用本能、复杂动物学习这一理论是正确的,那么,海参什么也用不着学。

    但是,它能够学习。如果一股水流射到它的鳃上,它会把鳃收回去。但是如果一股股水持续地喷到它的鳃上,这个收回的举动就逐渐停止了。海参对它认定的“假情报”不再做出反应。它“习惯”了。这当然不是学什么微积分,但是它同样也是学习。反过来,如果在水喷到鳃上之前先给它一次电击,海参会学着把自己的鳃收回得更多——一个叫做“敏化”的现象。它还可以像巴甫洛夫那些著名的狗一样形成条件反射:它可以在感到一股非常轻微的水流时就收回自己的鳃,如果这轻微的水流与一次电击总是同时出现。之后,这轻轻的水流本身就导致使海参飞快地把自己的鳃收回去,虽然在通常情况下这样轻微的水流不足以使海参收鳃。换句话说,海参有能力像狗或人那样学习:习惯、敏化、“联想”学习。但是它们甚至不用它们的脑。这些反射与能够修改它们的学习过程发生在腹部神经节,在这些黏糊糊的动物肚子上的一个小小的神经系统“中转站”在这些实验背后的人,埃里克•坎德尔(Eric Kandel,生物学家,因为在学习的细胞机制方面的工作,与另外两位科学家分享了2000年诺贝尔生理学和医学奖),动机并不是要跟海参过不去。他想要理解学习的最基本机制。学习是什么?当大脑(或腹部神经节)形成了一种新的习惯或改变了它的一种行为的时候,神经细胞里发生了什么?中枢神经系统里有很多神经细胞,电信号在每一个细胞里游走,另外,还有很多突触,它们是神经细胞之间的“桥梁”当神经系统里的一个电信号到达一个突触的时候,它必须要先变成一个化学信号,然后才能以电信号的形式继续旅行,就像火车上的旅客需要搭渡轮过海峡一样。坎德尔的注意力很快就集中在神经细胞之间的这些突触上了。学习似乎是在改变它们的特性。这样,当海参习惯于一个假情报的时候,接受感官信息的神经细胞与移动鳃的神经细胞之间的突触被以某种方式弱化了。反过来,当海参对某种刺激敏化了的时候,这个突触就被加强了。慢慢地,坎德尔与同事们巧妙地逼近了海参脑子里的一个分子,它位于突触弱化或强化的中心。这个分子叫做环化腺苷酸(cyclic AMP)。

    坎德尔与他的同事们发现了一串围绕着环化腺苷酸的化学反应。我们先忽略它们的正式名字,先想象有一串化学物质名字就叫A、B、C……:
    A造出B,
    B造出C,
    C打开一个叫做D的通道,
    这样就使得更多的E进入了细胞内部,
    E延长了释放F的时间,
    F就是把信号送过突触以到达下一个神经细胞的神经递质。
    现在,凑巧的是C也激活一个名叫CREB的蛋白质,激活的方式是改变它的形状。动物如果缺少这种被激活的CREB,仍然可以学习,但是学到的东西大约一小时之后就不再记得了。这是因为CREB—旦被激活就使其他基因开始表达,从而改变突触的形状和功能。以这种方式被改变的基因叫做CRE基因,意思是环化腺苷酸反应因子。如果我讲得比这还细,我会把你闷得扔下这本书直奔离你最近的惊险小说,不过再忍受一下,事情马上又会变得简单起来了。

    事情会变得如此简单,现在是跟“笨伯”见面的时候了。笨伯是一种带有突变的果蝇,它学不会这么一件事:某种气味出现之后总会出现电击。它是在70年代被发现的,是一连串“学习突变”中的第一个,这些“学习突变”的发现,是通过用射线照射果蝇然后让它们完成一些简单的学习任务,然后繁殖那些完成不了这些任务的果蝇而得到的。其他的突变种果蝇随着“笨伯”之后陆续被发现了,它们叫做“白菜”“健忘”“萝卜”“小萝卜”“大萝卜”等等。(这又一次说明,果蝇遗传学家在给基因起名字方面所享有的自由,比人类遗传学家的要大得多。)现在总共有17个“学习突变”在果蝇中被发现了。受到坎德尔研究海参成果的提醒,冷泉港实验室(美国生物学实验室,由发现了DNA结构的詹姆斯•沃森指导)的梯姆•塔利(Tim Tully)开始研究这些突变的果蝇到底是什么地方不对劲。让塔利和坎德尔高兴的是,在这些突变种果蝇体内被损坏了的基因都与制造或响应环化腺苷酸有关。

    塔利接着提出,如果他能够彻底毁掉果蝇的学习能力,那么他也应该可以改变或加强它的学习能力。通过去掉制造CREB蛋白质的基因,他造出了一种可以学习却记不住自己学了什么的果蝇——学到的东西很快就从记忆里消失了。很快地,他又得到了另外一种果蝇,它们学习得如此之快,某种气味之后会有电击这样一个信息,它们只要学一遍就会了,而其他果蝇通常要学十遍才会。塔利描述这些果蝇说它们有照相机一般的记忆。但是,这些果蝇远远算不上聪明,它们在总结规律方面很差劲。它们就像这样一个人:因为他骑自行车在晴天摔了一跤,以后他就拒绝在有太阳的时候骑自行车。

    记忆出众的人,比如著名的俄国人谢拉什维斯基(Sherashevsky),就经历过这样的问题。他们的脑子里充满了那么多的小知识,使得他们常常只见树木不见森林。智慧要求的是把什么该记住什么该忘掉恰当地结合起来。我常常遇到这样一个现象:我能容易地记起——也就是说,能够认出——我读过某一段文章或听过某一段广播节目,可是我背不出它们的内容。它们的记忆是用某种方式藏在我的意识够不着的地方。也许,对于那些记忆超群的人来说它们没有藏得这么好。

    塔利相信CREB处于学习与记忆机制的中心地位,是一种有控制权的基因,它使其他基因开始表达。这样,为理解学习而进行的探索最终变成了对基因的探索。动物有学习的能力而并不是只依靠本能,这一发现并没有让我们逃脱基因的“暴政”,我们只不过发现了,最好的理解学习的方法是了解基因和它们的产物是怎样使得学习能够进行的。

    到现在,如果你得知CREB不仅是在海参和果蝇里才有,就应该不是什么让你吃惊的事了。在老鼠体内有一个几乎是一模一样的基因,失掉CREB基因的突变种老鼠也已经被造出来了。就像预测的那样,这些老鼠学不会简单的东西,比如说,记住眼睛看不见的水下平台在游泳池里的什么地方(这是老鼠学习实验中很标准的“折磨”它们的方法),或者记住什么食物是安全的。通过把反义的CREB基因——它可以在短期内抑制CREB基因——注射到老鼠的大脑里去,老鼠可以变得有短暂的失忆。相反的是,如果它们的CREB基因异常活跃,它们就会是超级的学习能手。

    老鼠与人的距离,从进化角度说也仅仅是毫发之间。我们人类也有CREB基因。人类的CREB基因本身是在第二号染色体上,但是帮助它正常工作的一个重要同盟——CREBBP——却就在这里——第十六号染色体上。与第十六号染色体上另外一个名叫a-整合蛋白的学习基因一起,CREBBP给了我一个(不怎么充分)的理由,把学习这个题目单列成一章。

    在果蝇里,环化腺苷酸系统似乎在一个叫做蘑菇体的大脑区域里异常活跃,它是果蝇大脑里突出来的一堆神经细胞,它们组成了一个伞菌形状的结构。如果一只果蝇的脑子里没有蘑菇体,那么这样的果蝇通常学不会气味与电击之间的联系。CREB和环化腺苷酸似乎就是在蘑菇体里工作。它们具体是怎样工作的直到现在才开始变得清楚起来。通过系统地寻找其他没有学习能力和没有记忆的突变种果蝇,休斯顿的罗纳德*戴维斯(Ronald Davis)、麦克尔•格洛特维尔(Michael Grotewiel)与他们的同事找到了另外一种突变种果蝇,他们给它取名叫“沃拉多”(对于“沃拉多”,他们给了一个很有用的解释。在智利语里它是一种俗语,意思跟“心不在焉”和“健忘”相近,一般用来形容教授)。就像“笨伯”、“白菜”和“萝卜”一样,沃拉多果蝇学习起来很困难。但是,与其他基因不同的是,沃拉多好像跟CREB和环化腺苷酸都没有关系。它是a-整合蛋白中一个部分的配方,这个蛋白质存在于蘑菇体里,似乎在把细胞聚集在一起这个方面有一些作用。

    为了检验这是不是一个“筷子基因”(请看第十一号染色体那一章),除了改变记忆之外是否还有很多其他功能,休斯顿的科学家们做了一件很巧妙的事。他们拿一些自身的“沃拉多”基因被除掉的果蝇,插进去一个与“热激”基因——这个基因在突然受热的时候就开始表达——连在一起的“沃拉多”基因。他们小心地把这两个基因进行了排列,使得“沃拉多”基因只在热激基因表达之后才能够有功能。在温度低的情况下,这样的果蝇没有学习能力。但是,在给了它们一个热刺激三小时之后,它们忽然变成了学习能手。再过几个小时之后,在热刺激已经成为过去的时候,它们又失去了学习能力。这意味着在学习发生的那一瞬间需要“沃拉多”基因,它不是一个仅仅是在建造学习所需的构制时才需要的基因。

    沃拉多基因的任务是把细胞聚集在一起,这个事实提出了一个吸引人的假设,那就是记忆也许真的就是把细胞之间的连接变得更加紧密。当你学什么东西的时候,你改变了你的大脑里的网路,在以前没有连接或只有很弱连接的地方产生新的或更强的连接。我当然可以接受这种有关学习和记忆的说法,但是我很难想象我的关于“沃拉多”一词词义的记忆就是几个神经细胞之间突触连接更加紧密。这真让人百思不得其解。我感觉到,科学家们把学习与记忆的问题“简化”到了分子层次上之后,不仅远远没有把这个问题的神秘性消除,而且在我面前打开了一种新的吸引人的神秘:这个神秘就是,试图想象神经细胞之间的连接不仅给记忆提供了一种机制,而且它们本身就是记忆。它与量子物理是同样给人刺激的神秘,比欧异家板(从神灵世界里获取信息的装置)和飞碟刺激得多了。

    让我们往这个神秘性里再走得更深一些。沃拉多的发现暗示了这样一个假设:整合蛋白对于学习和记忆是至关重要的,但是,这样的暗示以前就有过了。到了1990年的时候,我们已经知道有一种抑制整合蛋白的药会影响记忆力。具体地说,这个药对一种名叫长效强化的过程起干扰作用,而长效强化似乎在记忆的产生中有着重要作用。在大脑基部的深处有一个结构叫做海马区(hippocampus,在希腊语里是海马的意思),海马区的一部分叫做阿蒙角(这个名字来源于埃及与羊相关的神。亚历山大大帝在神秘地造访了利比亚的斯瓦赫(Siwah)绿洲之后,称阿蒙是自己的父亲)。在阿蒙角里有数量众多的“金字塔”细胞(注意这持续不断的埃及风格),它们把其他感觉神经细胞的信息收集到一起。一个“金字塔”细胞很难“开火”(“开火”在这里指神经细胞送出一个电信号),但是如果有两个独立的信息同时输入,它们共同的努力就会使“金字塔”细胞产生电信号。一旦产生过一次电信号,它就容易再次产生了,但是这只是当它接到最初使它开火的那两个信息的时候,其他的输入信号没有用。这样,眼睛里看到金字塔和耳朵里听到“埃及”这个词能够结合起来使一个“金字塔”细胞产生电信号,在这两者之间产生一种联系记忆。但是,关于海马的念头虽然可能也与同一个“金字塔”细胞是连接在一起的,却没有用同一种方式被“加强”,因为它与另外两种信息没有同时到达。这是一个长效强化的例子。如果你用过于简单化的方式把这个“金字塔”细胞想象成是埃及的记忆,那么它现在就可以被金字塔的画面或“埃及”这个词,但不是海马这个词,诱发而产生电信号。

    长效强化,例如海参的学习,绝对需要突触性质的改变,在“埃及”这个例子里,就需要输入信号的细胞和金字塔细胞之间突触的改变。这个改变几乎肯定要跟整合蛋白有关。奇怪的是,抑制整合蛋白并不干扰长效强化的形成,但是的确影响它形成之后的保存。整合蛋白可能真的是把突触“绑”在一起。

    不久之前我曾经很随意地暗示过,“金字塔”细胞可能就是记忆。这是瞎说。你童年时期的记忆甚至都不在海马区里,而是在新皮质里。存在于海马区内部和附近的是形成新的长期记忆所需的机制。“金字塔”细胞大概是以某种方式把新形成的记忆送到它最终存在的那个区域里去。我们之所以如此认为,是因为两个出色却偏偏倒霉的年轻人,他们在50年代遇到了古怪的事故。第一个年轻人在科学文献里以他名字的简称H.M.而被人所知,为了避免因为一次自行车事故而引起的癫痫发作,他的大脑的一部分被切除了。第二个人被称做N.A.,是空军里的雷达技师。有一天他在做一个模型的时候,忽然转过身来,而他的一个同事正在玩一把假剑,碰巧就在那个时刻把剑往前一伸,剑从N.A.的鼻孔穿进去,进了他的脑子。

    这两个人直到今天仍然受健忘症的折磨。他们可以很清楚地记起从他们小时候到出事之前几年的事情。他们可以很短期地记住眼前发生的事——如果在他们记住这些事和回忆这些事之间不再用其他事来干扰他们。但是,他们无法形成新的长期记忆。他们认不出一个每天都见的人的面孔,也学不会记住回家的路。在N.A.的情况里(他是症状较轻的一个),他没法看电视,因为一播广告,他就忘了广告之前演的是什么了。

    H.M.可以很好地学习一项新的技能并不把它忘掉,但是他却想不起来自己曾经学过这项技能。这意味着“程序”记忆是在一个与关于事实或事件的“陈述”记忆不同的地点形成的。这个区别通过研究另外三个年轻人得到了证实。这三个年轻人对事实与事件有严重的健忘症,但是他们上学期间学习阅读、写作和其他技能却没有遇到什么困难。在做脑部扫描的时候,发现这三个人的海马区都非常之小。

    但是,除了记忆是在海马区形成的之外,我们还可以说得更具体一些。H.M.和N.A.受到的损伤暗示了另外两个大脑区域与记忆形成之间的关系:H.M.还缺少中心颞叶,而N.A.缺少一部分间脑。从这里得到启示,神经科学家们在寻找最重要的记忆区域时逐渐把范围缩小到了一个主要区域:鼻周皮质。在这里,来自视觉、嗅觉、听觉及其他感觉器官的信息经过处理成为记忆,也许通过CREB的帮助而完成。之后,信息被送到海马区,然后又送到间脑,暂时储存。如果某个信息被认为是值得永久储存的,它就以长期记忆的形式被送回新皮质储存起来:这就是那个奇怪的瞬间,你忽然用不着查某个人的电话号码,而是自己就能想起来了。记忆从中心颞叶传到新皮质的过程似乎有可能是在夜间睡觉的时候发生的:在老鼠脑子里,中心颞叶的细胞在睡觉时特别活跃。

    人类大脑是一个比基因组还更令人惊叹的机器。如果你喜欢数量化的东西,那么,大脑里有上万亿的突触,而基因组只有上十亿的碱基对,大脑的质量以千克计,而基因组则以微克计。如果你喜欢几何学,那么,大脑是一个三维的逻辑机器,而不是一230个数码式的一维机器。如果你喜欢热力学,那么,大脑在工作的时候产生大量的热量,就像一个蒸汽机一般。对于生物化学家来说,大脑需要成千上万种不同的蛋白质、神经递质以及其他化学物质,并不仅仅是DNA的四种核苷酸。对于没有耐心的人来说,

    我们真的是眼睁睁地看着大脑在不断改变,突触不断地变化以产生新学来的记忆,而基因组的变化比冰山移动还慢。对于热爱自由意志的人来说,一个名叫经验的无情的园丁对我们大脑里神经网络所进行的修整对于它的正常功能有着至关重要的作用,而基因组只是把事先定好的信息放送出来,比起大脑来,没有什么变化余地。从各种角度来看,有意识、由自由意志控制的生活似乎都比自动化的、基因决定的生活更有优势。但是,正如詹姆斯·马克·鲍德温意识到而又被今天研究人工智能的书呆子们所欣赏的,这样的两分法是错误的。大脑由基因制造出来,它有多好取决于它内在的设计。它被设计成一个能够被经验修改的机器,这是写在基因里的。基因是怎样做到这一点的,这个秘密是当代生物学面临的最大挑战之一。但是毫无疑问,人类大脑是基因的神通的最好纪念碑。它标志着一个出色的领导者知道应该在何时把权力下放。基因组就知道应该何时把权力下放。

    第十七号染色体 死亡

    为自己的祖国而死既甜蜜又光荣。——荷雷斯(公元前65〜8年,罗马诗人、讽刺文学作家)

    古老的谎言。——威尔弗雷德•欧文(20世纪英国诗人)

    如果学习是在大脑细胞之间建立新的联系,它也同时是失去旧的联系。在出生的时候,大脑细胞之间的连接太多了,随着大脑的发育,很多连接被丢掉了。比如说,在最初的时候,每一侧的视觉皮质都与到达每一只眼睛的一半信息有连接。通过很剧烈的调整,才使得它们成为这样一种情况:一侧接受来自右眼的信息,另一侧接受来自左眼的信息。经验导致了那些不必要的连接衰弱、消失,也因此把大脑从一个多用途的机器变成了很专门的机器。就像一个雕塑家把一块大理石削来凿去以形成人形那样,环境也把多余的突触去掉以使大脑功能更强。在一个瞎眼的幼年哺乳动物或者眼睛一辈子被遮住的动物那里,这样的调节从来不会发生。

    但是,这个过程除了突触连接消失之外,还有其他意义。它还意味着整个细胞的死亡。有着不正常形式的ced-9基因的老鼠不能正常发育,因为它大脑里多余的细胞不能履行他们的职责而死去。这样的老鼠最终会有一个结构不正常、负担过重、不能正常运转的大脑。民间流传的说法总喜欢强调一个恐怖的(却没有意义的)统计数字,即我们每天要失去100万个大脑细胞。在我们幼年的时候,甚至当我们在子宫里的时候,我们确实以很快的速度失去脑细胞。如果我们没有失去这些细胞,我们就永远也不可能思考。

    受到ced-9之类的基因刺激之后,不必要的细胞就大规模地自杀(其他ced基因在身体的其他器官里引发细胞自杀)。这些要死的细胞顺从地遵守一个精确的程序。在肉眼难见的线虫里,发育中的胚胎有1090个细胞,但是,它们中的131个会在发育过程中自杀,在成年线虫体内剩下959个细胞。它们好像是牺牲自己来换取身体的更大利益。“为自己的祖国而死既甜蜜又光荣”,它们高喊着口号英雄般地逝去了,就像战士们冲上凡尔登(第一次世界大战时德军与法军激烈交战之地,双方死亡将士各达40万人)的峰顶,或是工蜂自杀性地蜇入侵者。这样的比喻远不是只有表面的相似。身体内细胞之间的关系在事实上非常像是蜂巢里蜜蜂之间的关系。你体内细胞的祖先曾经一度是独立的个体,在大约6亿年前,它们在进化过程中决定要合作。这与5000万年以前社会性的昆虫决定要合作几乎是一样的:遗传上关系很近的个体意识到,如果它们把繁殖后代变成一项专门的工作,效率就会高得多,在细胞那里,它们把这项工作交给了生殖细胞,在蜜蜂那里,这项工作交给了蜂王。

    这个比喻如此之好,进化生物学家们开始意识到合作精神是有限度的。就像凡尔登的战士们偶尔被逼得不得已,会不顾集体利益地叛变。如果工蜂们得到机会,它们也会繁殖自己的后代。只有其他工蜂的警惕性可以阻止它们。蜂王通过与多只雄蜂交配来保证大多数的工蜂都只是半个姐妹(一个蜂群里的工蜂都是同一只蜂王与不同雄蜂的后代,同母不同父),因此,它们在繁殖后代方面也就没有多少共同的兴趣。这样,蜂王就能保证工蜂对她忠心,而不是工蜂之间彼此忠心。身体里的细胞也是如此。叛变是个永恒的问题。细胞们经常忘记它们对“国家”的职责,即为生殖细胞服务。它们经常要复制自己。不管怎么说,每一个细胞都来自一代一代传下来的生殖细胞,在整整一代里都不分裂是很违反本性的。也就因此,在每一个器官里、每一天里,都有细胞打破秩序重新开始分裂,就好像它抵御不了基因要复制自己的古老召唤。如果这个细胞的分裂不能被制止,结果就是我们所说的癌症。

    但是,通常它是会被制止的。以癌症为后果的叛变是如此古老的问题,所有身体比较大的动物都在细胞里带有一套精巧的开关,在细胞发现自己变得具有癌症性质的时候,这套装置可用来引起细胞自杀。最著名和最重要的开关,事实上自从它在1979年被发现以来也可能是被人们谈论得最多的人类基因,是TP53,就在第十七号染色体的短臂上。这一章就是要通过一个主要功能是防止癌症产生的基因,来讲述癌症的非凡故事。

    在理查德•尼克松(Richard Nixon)(当时的美国总统)1971年宣布对癌症宣战的时候,科学家们甚至还不知道敌人是谁,只除了一个明显的事实:癌症是细胞组织过多的生长。大多数癌症明显地既不是来自传染也不是来自遗传。传统说法是癌症不是一种疾病,而是一群多种多样的病变,由多种原因引起,这些原因多数来自外部。扫烟囱会因炭灰而染上阴囊癌;X光技术员和广岛原子弹的幸存者因为辐射而得白血病;吸烟的人因吸烟而得肺癌;造船厂工人则因接触石棉纤维而得肺癌。在各种癌症之间可能没有共同的联系,如果有,也许就是免疫系统没有能够抑制肿瘤。传统的说法就是这样。

    但是,两项齐头并进的研究开始得出了一些新的认识,它们最终把我们领到了在认识癌症方面的革命。第一个是在60年代加利福尼亚州布鲁斯•爱姆斯(Bruce Ames)的发现。他发现,很多导致癌症的化学物质和辐射,例如煤渣和X射线,都有一个重要的共同点:它们都很有效地损坏DNA。爱姆斯瞥见了这样一个可能性:癌症是基因的病变。

    第二个突破很早就开始了。在1909年,佩顿•劳斯(Peyton Rous)(1966年获诺贝尔生理学和医学奖)发现有肉瘤的鸡可以把病传给一只健康的鸡。他的工作在很大程度上被忽略了,因为几乎没有什么证据表明癌症是有传染性的。但是,在60年代,一连串的动物癌症病毒被发现了,第一个就是劳斯肉瘤病毒。劳斯最终在86岁高龄的时候获颁诺贝尔奖,以表彰他的先见之明。人类癌症病毒不久也被发现了,变得明显了的是好多类的癌症,例如宫颈癌,实际上是部分地由于病毒感染而引起的。

    把劳斯肉瘤病毒送到基因测序机里,我们发现它带有一个特殊的导致癌症的基因,现在被称为src。其他类似的癌基因很快就从其他癌病毒里被发现了。与爱姆斯一样,病毒学家们开始意识到了癌症是基因的病变。在1975年,癌症研究领域被折腾了个底儿朝天,因为人们发现src根本就不是一个病毒基因。它是一个我们都有的基因,鸡、老鼠、人类体内都有。劳斯肉瘤病毒是从它的宿主那里偷走了这个基因。

    比较传统的科学家很不愿意接受癌症是基因病变的事实:不管怎么说,除了极个别的例子之外,癌症并不遗传。他们忘记了基因并不只存在于生殖细胞里,在一个生命的一生里,基因在所有其他器官里都有用处。在身体的一个器官里的基因病变,即使不是在生殖细胞里,仍然是基因病变。到了1979年,已经有从三种不同肿瘤里得到的DNA在老鼠体内诱发了类似癌症的细胞生长,这样就证明了基因本身可以导致癌症。

    从一开始就很清楚什么样的基因会是癌基因——鼓励细胞生长的基因。我们的细胞拥有这样的基因,所以我们才能够在子宫里生长,能够在儿童时代生长,能够在之后的生活中愈合伤口。但是,至关重要的一点是这些基因大多数时候都应该是关闭着的。如果它们很容易就可以被开启,结果就是灾难性的。我们的身体里有100万亿个细胞,而且更新很快,因此,在一生的时间里癌基因有很多机会可以被开启,即使没有导致突变的吸烟、日光照射等因素从旁鼓励。但是幸运的是,身体拥有一些基因,它们的任务就是识别细胞的过度生长,并使其停止。这些基因最早是由牛津的亨利•哈里斯(Henry Harris)在80年代中期发现的,被人们称为肿瘤抑制基因。它们是癌基因的对立面。癌基因在开启的时候导致癌症,肿瘤抑制基因则在被关闭的时候导致癌症。

    它们用各种方式履行自己的职责,最突出的是在细胞生长、分裂周期的某一时刻把它“关押”起来,并且,可以这么说吧,只有当这个细胞的一切许可证都备齐了之后,才把它放出来。所以,要想进到下一步,一个肿瘤必须要有一个细胞是具有同时开启了的癌基因与关闭了的肿瘤抑制基因的。这就已经不太可能了,但这还没完。要摆脱控制、自由生长,肿瘤现在还需要通过一个决心更大的检查站,那里的哨兵是一个基因,它能够察觉细胞内的异常活动并给其他基因签发命令,把这个细胞从内部解体:细胞的自杀。这个基因就是TP53。

    最初,当TP53在1979年被邓迪(Dundee)的戴维•莱恩(David Lane)发现的时候,人们以为它是一个癌基因,后来它被认出是一个肿瘤抑制基因。1992年的一天,莱恩和他的同事彼得•霍尔(Peter Hall)在酒馆里聊TP53的时候,霍尔伸出自己的手臂,愿意用自己做实验来验证TP53是不是肿瘤抑制基因。拿到动物实验的许可证需要几个月的时间,但是在一个志愿者身上做实验却立刻可以进行。霍尔通过辐射把自己的胳膊一次一次地弄出了小小的伤口,莱恩则在之后的两星期内取了霍尔伤口处的活体样品。

    他们发现,在受到辐射之后,p53——由TP53制造出来的蛋白质——水平显著上升,清楚地证明这个基因对能够导致癌症的伤害有反应。之后,莱恩开始研究以p53作为临床克癌药物的可能性,在本书出版的时候,第一批志愿者要开始服用p53。事实上,邓迪的癌症研究进展如此之快,p53眼看就要成为这个苏格兰台河(Tay)河口边小城的第三大著名产品了,前两个是黄麻和橘子酱。

    TP53基因上的突变几乎是致命的癌症最典型的特征。在所有人类癌症的55%中,TP53都被损坏了。在肺癌里,这个比例上升到90%以上。那些出生时的两份TP53基因中有一份就已经不正常的人,有95%的机会要得癌症,而且通常在年龄很小的时候就得癌。举一个例子,就说结肠和直肠癌吧。这个癌症的开始,是因为一个突变破坏了一个名叫APC的肿瘤抑制基因。如果生长中的息肉又出现了第二个突变,使得一个癌基因RAS被开启,它就变成一个所谓的“腺瘤”。如果这时它再出现第三个突变,破坏一个现在还没有被确认的肿瘤抑制基因,腺瘤就成为一个问题更严重的肿瘤。现在,它就有了得到第四个突变的危险。第四个突变如果发生在TP53基因上,它就把肿瘤变成恶性的癌。相似的“多次打击”模型在其他种类的癌症里也适用,TP53突变常常发生在最后一步。

    你现在就可以看出来,为什么在肿瘤生长的早期就下诊断是那么重要。一个肿瘤越大,它就越有可能已经得到了下一个突变,不仅因为概率的原因,也是因为肿瘤内细胞的快速繁殖很容易引起基因传递过程中的错误,导致突变。特别容易得某些癌症的人经常在“促突变”基因上有突变,它们通常鼓励突变的出现(在关于第十三号染色体的那一章里讨论过的乳腺癌基因BRCA1和BRCA2,也许就是乳房特有的促突变基因)。这些人也有可能已经带有了一份有问题的肿瘤抑制基因。肿瘤就像兔子似的,很容易受到既快又强的进化压力。就像繁殖得最快的兔子很快就会在一个养兔场里占上风一样,在每个肿瘤里繁殖最快的细胞会迅速占上风,排挤掉那些更加稳定的细胞。就像带有突变的兔子能够钻进地洞躲避恶棍,也就因此很快能够排挤掉那些只会坐在开阔地里的兔子,肿瘤抑制基因里的突变如果能够使细胞分裂不被抑制,它就很快能够挤掉其他突变而占上风。肿瘤所处的环境在选择肿瘤抑制基因里的突变时,真的就像是外界环境选择兔子。突变最终在一些情况下出现并不是什么神秘的事。突变是随机的,选择却不是。

    与此类似的是,现在我们也清楚了为什么癌症这种病主要是老年病,年龄每增加十年,癌症出现的几率就翻一番。在10%〜50%的人体内(具体数字与所居住的国家有关),癌症最终会绕过各种肿瘤抑制基因,也包括TP53,会让我们得上这种可怕的而且可能会致死的疾病。这其实是预防医学成功的一个标志,起码在工业化的国家里,它除掉了其他那么多致死的因素使人能够长寿,不过这个说法不会给我们什么安慰。我们活得越长,我们的基因里就积攒了越多的错误,在同一个细胞里一个癌基因被开启、三个肿瘤抑制基因被关上的可能性就越大。这种情况出现的几率是不可想象的小,但是我们一生中造出来的细胞的数目又是不可想象的大。就像罗伯特•温伯格(Robert Weinberg)(当代美国生物学家,癌症研究方面的先驱之一)说过的:C5]“每10亿亿次细胞分裂中出现一次致命的恶性事故,看起来不太坏嘛。”

    让我们近距离看看TP53吧。它有1179个字母长,编码的是一个简单蛋白质的配方。p53在正常情况下很快就会被其他酶降解掉,它的半衰期只有20分钟。在这种状况下p53是不活跃的。但是,当接到一个信号之后,p53的制造就迅速加快,而它的降解也几乎停止了。这个信号还很神秘,对于它到底是什么,还有争议,但是DNA的损坏是它的一部分。被损坏了的小段DNA好像用某种方式提醒了p53。像一个刑事案件的破案小组或突击队一样,p53匆忙地进入战斗位置。下一步发生的,是p53掌握整个细胞的控制权,就像汤米•李•琼斯(Tommy Lee Jones)或哈维•凯特尔(Harvey Keitel)(两人都是好莱坞电影明星)演的那些角色一样,来到事故现场说:“我们是联邦调查局,从现在开始由我们接管了。”p53主要靠着激活其他的基因来告诉细胞做两件事之一:要么停止繁殖,停止复制它的DNA直到损伤被修复,要么自杀。

    另外一个有了麻烦的标志也会提醒p53,那就是如果细胞开始缺氧,这是判断一个细胞是否成为了癌细胞的依据。在一个正在生长的癌细胞团内部,血液供应可能会跟不上,细胞就开始窒息。恶性癌症可以克服这个困难,它给身体送出一个信号,使其把更多的血管伸到肿瘤里去——最初,癌症的希腊名字就来自于它的特征鲜明、像螃蟹腿一样的血管结构。(癌症的英文名字cancer来自于希腊文里的“螃蟹”一词)新的抗癌药物里最有前景的一些就是要阻止血管的形成。但是,有些时候p53会在血液供应到来之前就意识到发生了什么,就会杀死癌细胞。在血液供应不良的器官里的癌,比如说皮肤癌,就必须在其生长早期把p53干掉,它才能够生长。这就是为什么色素瘤如此危险。

    一点也不奇怪,p53得到了“基因组卫士”的昵称,甚至被叫做“基因组的守护天使”。TP53好像是在编码集体利益,它就像一个士兵嘴里含的自杀药片,当它发现这个士兵要叛变了,它就开始融化。以这样方式进行的细胞自杀叫做“程序性死亡”,这个词来源于希腊语“秋天树叶的掉落”(英文为apoptosis,三名科学家因为在发现其机制方面的贡献而获得2002年诺贝尔奖)。它是身体对付癌症最重要的武器,是最后一道防线。事实上,程序性死亡如此之重要,现在已经逐渐清楚,所有抗癌疗法之所以有效,都只是因为它们改变了p53及其同伴,因而引发程序性死亡。以前人们认为放射疗法和化学疗法之所以有用是因为它们可以有选择地杀死正在分裂的细胞——它们可以在细胞复制自己DNA的时候将其破坏。但是,如果真是如此,为什么这些疗法对有些肿瘤不起作用?在癌症发展的过程中有一个时刻,过了之后这些疗法就不再有效了——肿瘤不再因为放射疗法或化学疗法而缩小。为什么会是这样?如果这些疗法杀死正在分裂的细胞,它们应该在任何时刻都有效呀。

    在冷泉港工作的斯科特•洛(Scott Lowe),对此有一个巧妙的答案。他说,这些疗法确实给DNA带来一些小小的损伤,但不足以杀死细胞。事实上,这些损伤刚好能够提醒p53,然后p53会告诉细胞采取自杀行动。所以,化学疗法和放射疗法就像疫苗一样,它们是促使身体帮助自己的疗法。有些很不错的证据支持他这个理论。放射疗法和三种化学疗法——5-氟尿嘧啶、依多波塞(etoposide)、阿霉素都能够促使在实验室里被病毒癌基因感染的细胞进行程序性死亡。而且,当对这些疗法有反应的癌症复发并对这些疗法不再起反应的时候,同时发生的是一个突变将TP53给破坏了。与此类似的是,那些对疗法反应最小的癌症色素癌、肺癌、结肠癌、直肠癌、膀胱癌、前列腺癌——通常它们那里的TP53早就被突变了。某些种类的乳腺癌也对疗法不起反应:TP53被破坏了的那些。

    这些见识对于癌症的治疗相当重要。医学的一个重要分支一直以来是在一个错误的理解之下开展的。医生们不应该寻找能够杀死正在分裂的细胞的物质,而应该寻找能够使细胞自杀的物质。这不是说化学疗法整个就没有效果,但它只是由于偶然原因才有效。现在,既然医学研究知道了自己在干些什么,结果就会更给人以希望。从短期来说,它给人的希望是很多癌症病人可能不会死得那么痛苦。通过检查来判断TP53是否已经被破坏,医生们很快就可以事先知道化学疗法是否会起作用。如果不会,那么病人和他们的家庭就不必再因错误的希望而受折磨了,这种错误的希望在今天是这些病人临终前几个月非常典型的特点。

    癌基因在没有被突变的情况下是动物一生中细胞生长与繁殖所必需的:皮肤需要被不断更新,新的血液细胞需要产生,伤口要被修复,如此等等。抑制潜在癌症的机制必须允许例外的情况,使得正常的生长和繁殖得以进行。细胞必须经常得到许可而进行分裂,而且,只要它们在合适的时候停止,它们就必须具备鼓励分裂的基因。这是如何完成的,现在刚刚变得清楚起来。如果我们是在观察一个人工制造的东西,我们会得到结论说:它的背后有一个聪明得近乎可怕的设计者。

    这里的关键又是程序性死亡。癌基因是导致分裂与生长的基因,但是很让人吃惊的是,它们中有几个也激发细胞死亡。在这几个基因中有一种叫做MYC,它既可以激发细胞分裂也可以激发细胞死亡,但是,它发出的死亡信号暂时被外界的因素——存活信号——抑制住了。当这些存活信号被用完了之后,死亡就占了上风。这好像是设计者意识到了MYC能够误入歧途,所以一开始就给它设了一个陷阱,使得任何发了疯的细胞都会在存活信号被用光的时候自杀。这个聪明的设计师还往前多走了一步,把三个不同的癌基因——MYC、BCL-2和RAS——拴在了一起,使得它们互相控制。只有在三者都正常工作的时候,正常的细胞生长才可以进行。用发现了这些相互关系的科学家们的话说:“离开了这些支持,陷阱就露出来了,受了影响的细胞要么被杀死,要么就奄奄一息,两者都不再有(癌症的)威胁。”

    p53和癌基因的故事就像我这本书的大部分内容一样,对“遗传研究有危险”以及“遗传研究应该停止”的说法是个挑战。这个故事也对这样一个观点——简化论科学,也就是把系统拆成部分以理解它们的做法,是有问题和徒劳的——提出了很强的挑战。癌症医学是把癌症作为一个整体的医学研究,虽然从事这方面研究的人们既勤奋又聪明,也有大量的经费,它所取得的成果,相比于以简化论为基础的遗传研究几年来所取得的成果,真是少得可怜。事实上在最初,测定整个人类基因组序列的号召之一来自于意大利诺贝尔奖得主若罗纳托•都贝科(Renato Dulbecco),在1986年,他提出这是打赢对癌症的战争的惟一途径。现在我们对于癌症这个西方世界里最残酷、最常见的杀手,终于有了得到真正的治愈方法的切实希望,这是人类历史上的第一次,而这来自于简化论、遗传研究以及它们带给我们的认识。那些认为这些研究有危险的人应该记住这一条。

    自然选择在选定了一个解决问题的方法之后,常常也用它去解决其他问题。程序性死亡除了清理掉癌细胞之外,也还有其他用途。它在对抗普通的传染病方面也有用处。如果一个细胞发现它被某种病毒感染了,它就可以为了整个身体的利益而杀死自己(蚂蚁和蜜蜂也会因为整个蚁群或蜂群的利益而这样做)。有很好的证据表明,有些细胞确实这么做。不可避免的是,有些证据也表明一些病毒进化出了一种方法使得这样的细胞自杀不会出现。爱泼斯坦一巴尔(Epstein-Barr)病毒可以导致腮腺炎或单核细胞增多症,它带有一个暂时休眠的细胞膜蛋白质,其任务似乎就是制止被感染的细胞所表现出来的任何自杀倾向。人类乳头状瘤病毒是宫颈癌的起因,它带有两个基因,它们的任务就是关闭TP53和另外一个肿瘤抑制基因。

    我在四号染色体那一章里谈到过,亨廷顿氏病就是无计划的、过多的脑细胞的程序性死亡,而这些细胞一旦死亡就无法被补充——这就是为什么有些大脑损伤是不可逆的。这在进化角度来说很合情合理,因为与皮肤细胞不同,每一个脑细胞都是被很仔细地塑造、训练的富于经验的“接线员”。用一个没有经验、没有受过训练、形状不定的细胞来代替它比无用还要糟糕。当病毒进入神经细胞的时候,神经细胞不会接到自杀的指令。但是因为某种还不完全清楚的原因,病毒本身有时候引发神经细胞的自杀。例如,在致命的脑炎a病毒那里,就是这么个情况。

    程序性死亡还可以被用来制止除癌症之外的其他细胞叛变,比如由转座子引起的基因的改变。有些很好的证据表明,卵巢和精囊里的生殖细胞分别处于卵泡细胞和塞尔托里细胞的监视之下,它们的任务就是察觉细胞的自私性并在其出现的时候引发程序性死亡。例如,在一个5个月大的人类胚胎的卵巢里,有着大约700万个生殖细胞。到了她出生的时候,就只有200万个了。在这200万个里,只有400个左右会在她的一生中进入排卵过程。剩下的大多数都通过程序性死亡被除掉了。这个过程铁面无私地执行优化人种的政策,给任何不够完美的细胞都下达自杀的命令(身体是个独裁统治的地方)。

    同样的原则可能也适用于大脑,在那里,ced-9和其他基因在发育过程中除掉了大量细胞。任何工作得不太好的细胞又是为了集体利益而被牺牲掉了。所以,通过程序性死亡除去神经细胞不仅仅使学习成为可能,它也保持了细胞的平均质量。在免疫细胞那里可能也发生了类似的事情,即用程序性死亡无情地除去细胞。

    程序性死亡是个没有中央控制的行为。没有一个计划中心,没有一个“中央政治局”来决定哪个细胞该死哪个可以留着。这是它美妙的地方。就像胚胎发育一样,它从每一个细胞对自己的了解得到收获。只有一个概念上的困难:程序性死亡是如何进化来的?如果在受到感染、具有癌的性质或有了捣蛋基因的时候,一个细胞就会杀死自己,那么它就没有办法把自己的优点传给子孙。这个问题被称为“神风之谜”(神风是第二次世界大战时期日本自杀式敢死队的名称,在无法用常规手段打击敌舰时队员们驾驶飞机撞向敌舰),它可以用一种群体选择的形式解决:如果程序性死亡进行得比较好,那么整个身体就比那些程序性死亡进行得不好的身体要有更大的优势,前者因此就把好的特点传给它们后代的细胞。但是这就意味着程序性死亡系统在一个人的一生中无法进步,因为在一个身体之内它无法通过自然选择而进化。我们只能守着我们从遗传得到的细胞自杀机制。

    第十八号染色体 疗法

    我们的疑惑是叛徒它让我们惧怕尝试而失去我们本可以得到的果实 ——威廉•莎士比亚《一报还一报》

    当公元第三个千年来临之际,我们第一次处在了可以修改我们的遗传密码的位置。它不再是珍贵的手稿,它现在被存在软盘上。我们可以切下一些部分,加进一些部分,重新组合段落,或者重写某些词。这一章是关于我们是怎样做这些事情的、我们是否应该做,以及为什么在我们就要这样做的时候我们似乎失去了勇气,而强烈地想要把整个文字处理器扔掉,坚持说遗传密码应该保持它的神圣不可侵犯性。这一章是关于基因的操纵的。

    对大多数外行来说,遗传研究的明显目的——如果你愿意也可以说是最终的奖赏——就是通过基因工程造出的人。有一天,也许是几个世纪以后,这意味着会有一些人身上带有新发明出来的基因。现在,它意味着一个借了别人基因的人,或者从动物或植物那里借了基因的人。这样的事情可能吗?而且,如果可能,在伦理上行得通吗?

    想一想在第十八号染色体上的一个基因,它能够抑制结肠癌。我们在上一章里已经与它有过一面之交了:它是一个位置还没有被完全确定的肿瘤抑制基因。人们曾经认为它是一个名叫DCC的基因,但是我们现在知道DCC的任务是在脊柱里引导神经生长,与抑制肿瘤一点关系也没有。这个肿瘤抑制基因与DCC挨得非常近,但它仍然难以捉摸。如果你生下来时就已经有了这个基因的不正常形式,你得癌症的几率就会大大增加。一个未来的基因工程师能不能像取出汽车上一个坏了的火花塞那样把它给拿出来,用好的零件来代替它呢?很快,答案就会变得肯定。

    我的年龄使我在开始新闻业生涯的时候还在用真正的剪刀剪纸张,用真正的糨糊贴它们。现在,要把段落挪来挪去的时候,我会用微软的好人们做得很合适的小小的软件里的符号来指示它们做同样的剪贴。(我刚刚把这一段从下一页里挪过来。)但是,原理是一样的:为了挪文字,我把它们剪下来,再把它们贴到另外一个地方。

    对基因内容做同样的事,也需要剪刀和糨糊。幸运的是,自然界为了她自己的目的已经把两者都发明了。糨糊是名叫连接酶的东西,每当它遇到松散的DNA句子的时候,它就把它们缝到一起。剪刀叫做限制性内切酶,是1968年在细菌里发现的。它们在细菌细胞里的作用是以切碎病毒的基因来打败它们。但是,很快显现出来的是,跟真正的剪刀不同,限制性内切酶事儿很多:它只是在遇到一串特定的字母序列的时候才能够把DNA切开。我们现在知道400种不同的限制性内切酶,每一种识别不同的DNA字母序列,然后把那一处切开。它们就像是一把剪刀只在找到“限制”这个词的时候才把纸剪开。

    1972年,斯坦福大学的保罗•伯格(Paul Berg)用限制性内切酶在试管里把两段病毒DNA对半切开,然后用连接酶把它们以新的排列组合方式又连接起来。他就这样造出了第一个人工“重组”DNA。现在,人类可以做反转录病毒做了很久的事情了:把一个基因插到染色体上去。在那之后的一年之内,第一个基因工程细菌产生了:这是带有从蟾蜍里拿出来的一个基因的一种肠道细菌。

    当时立刻有了一阵公众的忧虑,而且并不仅限于外行。科学家们自己也认为在急着去利用这项新的技术之前暂停一下是对的。在1974年,他们呼吁暂时停止所有的基因工程研究,这仅仅是给公众的忧虑之火又煽了些风:如果科学家都担心得要让研究停下来,那肯定有什么事是值得担心的。自然把细菌基因放在细菌里,把蟾蜍基因放在蟾蜍里,我们是谁,要把它们换过来?后果是否很可怕呢?1975年在阿西洛玛(Asilomar)(美国加利福尼亚州海滨度假村,很多科学会议在此举行)举行的一次会议经过讨论搞出了一份安全方面的意见,使得美国的基因工程在一个联邦委员会的指导下小心翼翼地重新开始。科学在当自己的警察。公众的紧张情绪似乎逐渐消失了,不过,在90年代中期它又相当突然地复活了,这一次的聚焦点不是安全,而是伦理。

    生物技术诞生了。一开始是基因能泰克,然后有西特斯和百奥真(都是生物技术公司的名字),然后其他公司纷纷崛起,来利用这些新技术。在这些新兴企业面前的是一个充满可能性的世界。细菌可以被引诱来制造人体蛋白,用于医药、食品或工业。不过,当人们发现大部分人类蛋白质都不能由细菌很好地造出来,以及我们对人类蛋白质知之甚少,在医药上对它们还没有大量需求的时候,失望就逐渐地浮现了。尽管有大量的风险投资,为它们的持股者赢了利的只是诸如“应用生物系统”等给其他人制造仪器的公司。产品还是有的。到了80年代末期的时候,细菌制造的人体生长激素就代替了从死尸大脑里取出来的既昂贵又不安全的同类产品。在伦理和安全方面的担心到目前为止被证明是没有根据的:在30年来的基因工程中,没有任何或大或小的环境或公共健康事故是由于基因工程实验引起的。到目前为止,一切良好。

    同时,基因工程对科学的影响比对商业的影响要大。现在克隆基因是可能的(在这里,这个词的意思与尽人皆知的那个意思不一样):在人类基因组这个“稻草堆”里分离出一个基因这样的一根“针”把它放入细菌里去,长出几百万份,这样使得它们能够被纯化,它们的序列能够被读出来。通过这个方法,存“书”很多的人类DNA图书馆被建起来了,它们存着成千上万相互之间有重叠的人类基因组片段,每一种的数量都够用来进行研究。

    就是从这些图书馆里,人类基因组计划中的人们拼凑出了基因组的全部文字。这个计划开始于80年代末期,有着一个野心大得近于荒唐的目标:在20年内读出整个人类基因组。在之后的14年里,没有什么进展。然后在一年之内,新的基因测序仪器就完成了任务。2000年6月26日,人类基因组计划宣布它得到了人体的完整草稿。

    实际上,人类基因组计划是被“撞”进了这个声明。一个中学肄业生、前职业冲浪运动员、越南战争老兵克雷格•文特尔(Craig Venter)分享了功劳。文特尔曾经三次把遗传学翻了个底儿朝天。第一次,他发明了一种快速寻找基因的方法,专家说这不会成功。它却成功了。去了私人公司之后,他又发明了一种快速测序的技术,叫做“霰弹法”它把基因组打成随机的碎片,然后通过各片之间的重合部分把它们按正确的顺序重新组装起来。专家们又说这不会成功,而他事实上已经在用它给一个细菌基因组测序了。

    这样,当文特尔在1998年5月宣布他要第一个为人类基因组测序并把结果申请专利时,人类基因组计划内部出现了很严重的惊恐情绪。英国的威尔康姆信托基金会通过资助剑桥附近的桑格中心而资助了该计划的三分之一,它对文特尔的回应是提高“赌注”它给由公众资金扶持的这个项目注入了更多资金,并要求把它的完成日期提前。桑格的头儿,约翰•萨尔斯顿(John Sulston),领头开展了一场影响很大的宣传,反对在他看来文特尔在研究最后关头为寻求商业利益而进行的“海盗”行为。最后,冷静的头脑占了上风,2000年6月,宣布了一个“平局”。

    但是,还是回到操作上去吧。把一个基因放到一个细菌里去是一回事,把它插到人体里去又是另一回事。细菌很高兴吸收那些叫做质粒的环状DNA,把它们当做自己的DNA—样接受。还有,每一个细菌都只有一个细胞。人有100万亿个细胞。如果你的目标是从遗传上摆布一个人,那你需要在每一个相关的细胞里都插进一个基因,或者从单细胞的受精卵开始。

    即使如此,在1970年发现的逆转录病毒能够从RNA制造DNA拷贝,突然使得“基因疗法”似乎是个可行的目标了。一个逆转录病毒带有由RNA写成的信息,基本上是这样的意思:“做一份我的拷贝,把它缝到你的染色体里去。一个实施基因疗法的人只需要拿来一个逆转录病毒,切掉几个它的基因(特别是那些使它在第一次插进染色体后变得有传染性的),放进一个人类基因,然后用它感染病人。病毒开始工作,把基因插到体细胞里,嘿,你就有了一个转基因人。

    在整个80年代早期,科学家们都在担心这样一个程序的安全性。逆转录病毒也许会工作得太好了,不仅感染普通细胞,也感染生殖细胞。逆转录病毒也许会用某种方法重新获得它那些丢失了的基因,变成恶性;也或者它会使得身体本身的基因变得不稳定而引发癌症。任何事都可能发生。在1980年,当一位研究血液病的科学家马丁•克莱因(Martin Cline)违背了他的承诺,把一段无甚害处的重组DNA放入了一个受遗传血液病地中海贫血症折磨的以色列人体内(尽管不是通过逆转录病毒)的时候,对于基因疗法的恐惧被煽得更厉害了。克莱因丢了工作与名誉;他的实验结果从未被发表。每一个人都同意,就算不说别的,人体实验的时机也还不成熟。

    但是,老鼠实验被证明是既让人宽心又让人失望。基因疗法远远没有不安全,却更有可能不会成功。每一种逆转录病毒只能感染一种细胞组织;需要细心的包装才能把基因放进它的套子里去;它着陆在随便一条染色体上的随便一个什么地方,而且常常不被激活;而且,身体的免疫系统被传染病的“突击队”事先提示了一下,不会漏过一个笨手笨脚、科学家自制的逆转录病毒。

    还有,到80年代早期为止,被克隆出来的人类基因如此之少,即使能够使逆转录病毒成功地工作,也没有什么明显的候选基因要放进逆转录病毒里去。

    不过,到了1989年,几个里程碑被越过了。逆转录病毒把兔子基因带入了猴子细胞;它们把克隆出来的人类基因送入了人体细胞;它们还把克隆的人类基因带入了老鼠细胞。三个大胆又有雄心的人-弗伦奇•安德森(French Anderson)、麦克尔•布雷斯(Michael Blaese)和史蒂文•罗森伯格(Steven Rosenberg)(这三个人均为当代美国生物学家,基因疗法的创始人)——认为人体实验的时机成熟了。在一场既漫长且有时很痛苦的与美国联邦政府重组DNA指导委员会进行的斗争中,他们试图得到在癌症晚期病人身上做实验的许可。他们的理由带出了科学家和医生对于什么有优先权的不同考虑。在纯科学家看来,人体实验显得仓促和不成熟;对于惯于见到病人因癌症而死的医生来说,仓促一些是很自然的。“我们为什么这么匆忙?”在一次会议上安德森问到:“在这个国家里每一分钟有一个病人死于癌症。自从146分钟之前我们开始这场讨论,已经有146个病人死于癌症。最后,在1989年5月20日,委员会给予了许可,两天以后,一个马上要死于黑色素瘤的卡车司机——莫里斯•孔茨(Maurice Kuntz)——接受了第一个特意引入(并被批准)的新基因。它并不是被设计来使他痊愈的,甚至都不会在他的身体里永久停留;它仅仅是一种新的癌症疗法的助手。一种特殊的白细胞在他的体外被繁殖了,它们在渗透入并吃掉肿瘤方面很不错。在把它们注射回体内之前,医生们用带有一个小小的细菌基因的逆转录病毒感染了这些细胞。这样做的目的只是为了使它们能够在病人体内跟踪这些细胞,指出它们去了哪里。孔茨去世了,在这个实验里什么让人吃惊的事也没有发生。但是,基因疗法开始了。

    到了1990年,安德森和布雷斯又回到了委员会面前,带着一份更有雄心的计划。这一次,要注射的基因真的会是能够治病的,并不仅仅是一个身份标签。目标是一种极其少见的遗传病,叫做严重综合免疫缺失(SCID),它使得儿童面对感染无法展开免疫防御,致病原因是所有白细胞的迅速死亡。这样的孩子面对的是不断地受感染不断生病的短暂生命,除非他们是被放置在无菌的罩子里,或是因为幸运寻得了一个骨髓型相配的亲戚而得到完全的骨髓移植。这个病是由第二十号染色体上一个名叫ADA基因的一个“拼写”错误造成的。

    安德森与布雷斯的建议是从一个SCID孩子体内取出一些白细胞,拿一个用新的ADA基因武装起来的逆转录病毒感染它们,然后再把它们输入孩子体内。他们的建议又一次遇到了麻烦,但是这一次的反对来自另外一个方向。到了1990年,有一种治疗SCID的方法,叫做PEG-ADA,它的组成部分是巧妙地向血液里输送——不是ADA基因——ADA蛋白质,这是用等价的基因在牛体内合成的。就像治疗糖尿病的方法(注射胰岛素)或治疗血友病的方法(注射血凝因子)一样,SCID被蛋白质疗法(注射PEG-ADA)攻克了。基因疗法还有什么必要呢?

    在新技术刚刚诞生的时候,它们常常显得无可救药地缺乏竞争力。最早的铁路比当时存在的运河昂贵得多,不可靠得多。只是随着时间,新的发明才会逐渐降低它自己的花费或是提高它的效应,达到能够比得上旧技术的地步。基因疗法也是如此。蛋白质疗法在治疗SCID上赢得了竞赛,但是它要求每月一次在臀部注射,很不方便,也很贵,并且一生都要坚持治疗。如果基因疗法能够成功,它会把所有这些都用一次治疗代替——给身体重新安装上它本来就应该有的基因。

    在1990年9月,安德森与布雷斯得到了“前进”的许可,他们用基因工程改造过的ADA基因治疗了阿山蒂•德西尔瓦(Ashanthi DeSilva),一个三岁的小女孩。那是一个立竿见影的成功。她的白细胞数目增加了两倍,她的免疫球蛋白数目大大提高,她的身体开始制造正常人四分之一的ADA蛋白。不能说基因疗法使她痊250愈了,因为她已经接受了PEG-ADA,并且还在继续接受。但是,基因疗法成功了。今天,全世界四分之一以上的SCID儿童已经接受过基因疗法。没有一个人是确确实实被治愈到能够停止使用PEG-ADA的程度,但是还没有什么副作用。

    其他病会很快加入SCID,列入已经被逆转录病毒基因疗法攻打过的疾病名单,包括家族性高胆固醇血症、血友病和囊性纤维化。但是,癌症毫无疑问是主要目标。1992年,肯尼斯•卡尔沃(Kenneth Culver)(当代美国生物学家)尝试了一个有勇气的实验,第一次把带有想要的基因的逆转录病毒直接注射入人体(与此相对应的是用病毒感染在体外培养的细胞,再把这些细胞重新输入人体)。他把逆转录病毒直接注射进了20个人的脑瘤里。把任何东西注射进大脑里听起来都够吓人的,更别说是逆转录病毒了。但是,等你听到逆转录病毒里有什么再说吧。每一个逆转录病毒里都有一个从疱疼病毒里提取出来的基因。肿瘤细胞把逆转录病毒吸收进去,然后表达疱疹病毒的基因。届时,卡尔沃医生再让病人服用治疗疱疹的药物;而这药物就攻击了癌症细胞。在第一个病人身上它似乎成功了,但是在那之后的五个病人里有四个没有成功。

    这些是基因疗法最初的日子。有些人认为有一天它们会像今天的心脏移植那样常规。但是,要想说基因疗法是否是战胜癌症的战术,或者,那些以抑制血管生成、抑制端粒酶或p53为基础的疗法,哪一种能够赢得这场比赛,现在还为时过早。不管结论如何,在历史上癌症疗法从来没有像现在这样看上去充满希望,这几乎都是因为新的遗传学的缘故。

    这样的体细胞基因疗法已经不再那么有争议了。当然,关于安全的担心还是有的,但是几乎没有人能够想出一个从伦理出发的反对意见。它只是另一种形式的治疗方法,没有一个人,在目睹朋友或亲戚因为癌症而接受了化学治疗或放射治疗之后,会从那些没有什么根据的安全考虑出发,对相对来讲可能没有什么痛苦的基因疗法有什么不情愿。加进去的基因会离那些形成下一代的生殖细胞远远的;这个担心已经被牢固地消除了。但是,生殖细胞基因疗法一在那些能够被传到后代去的地方改变基因,对人类来说还是彻头彻尾的禁忌一在某种意义上来说要容易实施得多。在90年代里导致了新一轮抗议的,就是以转基因大豆和转基因老鼠形式出现的生殖细胞基因疗法。借用贬损它的人所用的一个词来说,它是弗兰肯斯坦技术(弗兰肯斯坦从不同尸体上肢解不同的部分合成为一个有生命的“人”,结果这个丑陋的“人”成为一个为害人类的强大的怪物。弗兰肯斯坦为了消除自己行为的恶果而追杀怪物,最后与自己的作品同归于尽)

    植物基因工程迅速发展有几个原因。第一个是商业的:多年以来,农夫们都为新品种的种子提供了一个需求迫切的市场。在史前时期,传统的培养方法把麦子、稻子和玉米从野生的草变成了产量高的庄稼,这完全是通过操纵它们的基因完成的,虽然那些早期的农民肯定不知道他们做的是这么一件事。在现代,虽然从1960年到1990年,世界人口翻了一番,但同样的技术使粮食产量提高了两倍,人均粮食产量提高了百分之二十多。热带农业的“绿色革命”在很大程度上是一个遗传学现象。但是,所有这些都是盲目完成的,有目标的、精心的基因操纵能够取得的成就会比这大多少?植物基因工程的第二个原因是植物可以被相当容易地克隆和繁殖。你不可能拿从老鼠身上切下来的一块去长出一只新老鼠,你在很多植物那里却可以。但是,第三个原因是个幸运的意外。一种名叫土壤杆菌的细菌已经被发现了,它有一种不寻常的特点,就是能够用名叫Ti质粒的小型环状DNA感染植物,这些Ti质粒把自己融合到植物染色体里去。土壤杆菌是现成的载体:只需往质粒里加一些基因,把它涂到叶子上,等到感染确实发生之后,用叶子的细胞再长出新的植物。现在,这个植物会用自己的种子把新基因一代代传下去。这样,在1983年,最初是一株烟草,然后是一株牵牛花,再然后是一株棉花,都以这种形式成为转基因植物。

    谷类植物对土壤杆菌的感染有抵抗力,它们需要等到一个更粗糙的方法的发明:基因们名副其实地是被装在微小的金粒上用火药或是粒子加速器射进细胞里的。这个技术现在已经成了所有252植物基因工程的标准技术。它引起的发明有放在架子上不容易烂的西红柿,不受棉铃虫蛀蚀的棉花,能够抵抗科罗拉多甲虫的土豆,能够抵抗玉米螟虫的玉米,以及其他很多转基因植物。

    这些植物从实验室挪到大田实验,又成为商品出售,过程中没打几个嗑巴。有时候,实验没有成功——1996年,棉铃虫严重毁坏了应该是有抵抗力的棉花;有时候,它们招来了环境保护人士的抗议。但是,从来没有出过“事故”当转基因庄稼被运过大西洋时,它们遇到了更强烈的环保人士的抵制。特别是在英国,那里的食品安全检验者们自从“疯牛病”之后就失去了公众的信任。转基因食品在美国已经成为常规食品的三年之后,在1999年,它在英国突然成了了不得的事。更有甚者,蒙森托(Europe Monsanto,农业技术公司,研制出很多转基因食品)在欧洲犯了一个错误,它首先推行的作物对它自己公司生产的没有选择性的杀植物剂——“围捕”——有抵抗力。这使得农夫可以用“围捕”来除草。这样一种操纵自然、鼓励使用除草剂和赚取商业利益的组合,激怒了很多环保主义者。环保恐怖分子开始捣毁油料作物的试验田,并穿着弗兰肯斯坦的服装到处游行。这个问题成了绿色和平组织的三大担忧之一,这无疑是信奉公众的权利与智慧的标记。

    像通常情况一样,媒体迅速地把争论两极化了,极端分子们在午夜电视节目上冲对手大喊大叫,一些采访逼着人们做出简单回答:你支持还是反对基因工程?这场争论的最低点,是一位科学家被迫早早退休,因为在一个歇斯底里的电视节目中有人声称他证明了加有凝集素的土豆对老鼠有害。后来,由“地球之友”组织起来的一些同事证明了他的“清白”。他的结果与其是说明了基因工程是否安全,不如说是说明了凝集素——这是一种已知的动物毒素——是否安全。是媒体混淆了它所传达的信息。把砒霜放到烧锅里会使里面煮的东西变得有毒,但是这并不意味着所有烹调都是危险的。

    同样道理,基因工程与工程里涉及到的基因一样安全或危险。有些安全,有些危险。有些对环境无害,有些对环境有害。对“围捕”有抵抗力的油菜也许对环境不友好,因为它鼓励除草剂的使用,或者把抵抗力传给杂草。能够抵抗昆虫的土豆对环境友好,因为它们需要更少的杀虫剂,使撒杀虫剂的拖拉机需要更少的柴油、运送杀虫剂的卡车损耗更少的路面,等等。对于转基因作物的反对,是出于对新技术的仇恨而不是对环境的热爱,它们在很大程度上故意忽略这样一些事实:千千万万的安全性实验已经做过了,没有得到过意外的坏结果;现在已经知道,在不同物种之间——尤其是在微生物之间一进行的基因交换,比我们所料想的要普遍得多,所以,这个原理没有一点不“自然”的地方;在基因改造之前,植物的育苗就包含有有意或偶然地用伽马射线对种子的照射,以引起突变;基因改造的主要后果是提高对疾病与害虫的抵抗力以减小对于化学喷雾的依赖;粮食产量的迅速增长对环境是有好处的,因为减轻了开荒种地的压力。

    这个问题的政治化造成了荒唐的结果。在1992年,世界上最大的种子公司“先锋”把巴西果的一个基因引入了大豆。本意是想弥补大豆里一种名叫甲硫氨酸的化学物质的“先天不足”,使得大豆对于那些以它为主食的人来说成为更为健康的食品。但是,很快就发现,世界上有很少的一些人对巴西果过敏,于是,“先锋”试验了它的转基因大豆,证明它们也能够引起这些人的过敏反应。在这个时候,“先锋”通知了负责机构,发表了他们的发现,并放弃了最初的计划。尽管计算表明,这个新的大豆过敏可能每年最多杀死两个美国人,却有可能把世界上数以万计的人从营养不良中解脱出来,他们还是这样做了。但是,这个事情并没有成为商业集团小心谨慎的一个例子,相反,这个故事被环保人士重新包装之后,被当成一个揭示基因工程的危险性和商业集团不顾一切的贪婪心的故事来讲。

    尽管如此,甚至在有那么多项目出于小心而被取消的情况下,一个比较可靠的估计是,到了2000年,在美国出售的作物种子里有50%〜60%是经过基因改造的。不管是好是坏,转基因作物是在这儿呆下去了。

    转基因动物也是如此。把一个基因放入一只动物里使它及它的后代被永久地改变,现在已经与改变植物一样容易了。你只需要把基因给插进去。用一个非常细的玻璃移液管把基因吸进去,在老鼠交配的12小时以后,把移液管的尖端捅进一个还处在单细胞阶段的老鼠胚胎里去,确定移液管的尖端进入了两个细胞核之一,然后轻轻一按。这个技术还远远不够完美:这样出来的老鼠只有大约5%能够表达外来的基因,在其他动物比如牛中,成功的就更少了。但是在那5%里得到的结果是外来基因整合到了某一条染色体的一个随机位置上的“转基因老鼠”。

    “转基因老鼠”在科研上是含金的沙子。它们使得科学家能够发现基因的作用是什么以及为什么。加进去的新基因不需要是来自老鼠的,它可以来自于人体:跟电脑不同,几乎所有生命体都能够运用任何类型的“软件”。例如,一只特别容易得癌症的老鼠可以通过引进人类的第十八号染色体而重新变得正常,这也是最早证明第十八号染色体上有一个肿瘤抑制基因的证据之一。但是,与加进去一整条染色体相比,更常见的是只加一个基因。

    微观注射正在为另一个更精巧的技术让路,它有一个明显的优势:可以把基因安插到一个精确的位置上。一个三天大的老鼠胚胎含有一些叫做胚胎干细胞的细胞,又称为ES细胞。如果这些细胞之一被取出来,注射进一个基因,那么,就像马里奥•卡佩255基(Mario Capecchi)(当代美国生物学家)在1988年首先发现的那样,细胞会在这个基因应在的位置上把染色体切开,把新基因放进去,把这个位置上原来的基因取下来。通过在电场里让细胞上的孔洞短期张开的方法,卡佩基把从老鼠里克隆出来的一个癌基因int-2放进了一个老鼠细胞,并且观察了新基因找到有故障的基因并将其换下来的过程。这个方法被称为“同源基因重组”它利用了这样一个事实,即修复破损的DNA的机制常常是用另一配对染色体上富余的那个基因作为模板。细胞错误地把新的基因当成了模板,照着它去修复了自己的基因。这样改变之后,就可以把这个ES细胞放回胚胎里,长成一个“镶嵌体老鼠”——它体内的一部分细胞带有新的基因。

    同源基因重组不仅允许基因工程师修补基因,也允许他们做相反的事情:用安插有问题的基因去故意破坏正常工作的基因。

    这样做的结果是所谓的“剔除”老鼠,它们是在有一个基因不能“出声”的情况下长大的,这可以更好地让那个基因的真正功能显露出来。记忆机制的发现(参见第十六号染色体那一章),就要在很大程度上归功于“剔除”老鼠,其他生物学分支也是如此。

    转基因动物并不是只对科学家才有用。转基因羊、牛、猪和鸡都有商业方面的应用。有一个人类的凝血因子已经引进到羊的体内,这样做是希望它可以从羊奶里被大量提取出来,用于治疗血友病。(顺便说一句,进行了这项工作的科学家克隆了多莉羊并在1997年早些时候把它展示给一个大惊失色的世界。)魁北克的一个公司拿了使蜘蛛能够结网的基因,把它放进山羊体内,希望能够从山羊奶里提取成丝蛋白质并把它们纺成丝。另外一个公司把它的希望寄托在鸡蛋上,指望着把它变成生产各种有价值的人类需要的产品的工厂,从药品到食品添加剂。但是,即使这些半工业化的应用失败,转基因技术也会改造动物的繁殖,就像它改造了植物的繁殖一样,它可以生产出有更多肌肉的肉牛,有更多奶的奶牛,或者是下的蛋味道更好的鸡。

    这些听起来都很容易。制造转基因人或“剔除”人的技术上的障碍,对于一个设备精良的实验室里的一组优秀科学家来说,变得越来越微不足道了。从原理上说,从现在开始的几年之后,你也许可以从你自己的身体里取出一个完整的细胞,在一个特定染色体的一个特定位置上插进一个基因,把细胞核转到一个自身细胞核被去掉了的卵细胞里,然后从这样造成的胚胎里长出一个人来。这个人会是一个你本人的转基因克隆,在其他任何方面都与你一模一样,惟一例外的是——举个例子说——在让你秃头的那个基因处有另外一种形式的基因。你还可以用这个克隆人体内的ES细胞长出一个多余的肝脏来替换你体内被酒精损坏了的那个。或者你可以在实验室里长出一些人类的神经细胞用来试验新的药物,这样就可以饶过实验动物的性命了。或者,如果你发疯得够厉害,你可以把财产留给你的克隆,然后放心地自杀,知道你的一部分仍然存在,但是经过了些许改进。没有人需要知道这个“人”是你的克隆。如果他年龄大了之后你们之间的相似处越来越多,他不秃顶这一点就可以消除别人的怀疑。

    所有这些都还不可能——人类ES细胞刚刚被发现——但是它不会在将来很长时间里都不可能。当克隆人体成为可能的时候,它是否符合伦理?作为一个自由的个体,你拥有你自己的基因组,没有任何政府可以使它成为国家财产,没有公司可以把它买下来,但是这是否就给了你权力把它加之于另一个个体身上?(一个克隆人是另一个个体。)又能否去改变它?到目前为止社会好像倾向于把自己绑住以抵御这些诱惑,暂时停止克隆人和生殖细胞基因疗法,给胚胎研究设立严格的界限,放弃医学上的可能成就以避免未知事物可能会带来的恐怖。我们已经把科幻电影里福斯特式(德国民间传说中的人物,因只顾眼前快乐不计后果而把自己的灵魂卖给魔鬼)的布道,即干扰自然进程就会招致凶暴的报复,牢牢地刻进了脑子里。我们变得谨慎了,或者说起码作为有投票权利的人我们更谨慎了。作为消费者,我们很可能有不同的做法。克隆很可能不257是由于多数人赞成而发生,而是由于少数人的行为。毕竟试管婴儿就大致是这样发生的。社会从来就没有决定可以允许试管婴儿;它只是慢慢习惯了这样的想法,即那些绝望地想要试管婴儿的人有办法搞到他们。

    与此同时,现代生物学大量提供给我们的嘲弄之一,就是如果你在第十八号染色体上的肿瘤抑制基因有问题,那你就忘掉基因疗法吧。一个更简单的预防措施也许就在我们手边。新的研究表明,有些人的基因会增加他们得直肠癌的可能性,但含有大量阿司匹林和不成熟的香蕉的饮食,可能会为他们提供保护。诊断是基因上的,疗法却不是。在基因诊断之后实施传统疗法,也许是基因组给医学带来的最大好处。

    第十九号染色体 预防

    99%的人一点儿都不理解这场革命来得有多快。 ——史蒂夫•福多尔(Steve Fodor),爱菲梅特利克斯(生物技术公司)的总裁

    任何医疗技术的进步都带来一个道德难题,冲击着我们这个物种。如果这个技术可以挽救生命,那么,即使有风险相伴,不发展和应用它也是道德上的错误。在石器时代,我们除了眼睁睁看着亲人死于天花之外,别无他法;在琴纳(Jenner)完善了疫苗接种技术之后,如果我们还是眼睁睁看着亲戚死于天花,那我们就是不负责任。在19世纪,我们除了眼看父母向肺结核屈服之外,别无选择;在弗莱明(Fleming)发现了青霉素之后,如果我们没有把将要死亡的肺结核病人送去看医生,那是我们的疏忽。(作者此处所举例子不恰当,因为青霉素治不了肺结核)对于个体适用的,对于国家和群体就更适用。富国不能够再忽视夺去了穷国里无数儿童生命的流行性腹泻,因为我们再也不能说医学对此没有办法。口服补水疗法(腹泻之所以危险是因为身体如果因此脱水过多就会造成机能不正常,严重时可以死亡)给了我们良知。因为有些事情是我们可以做的,我们就必须做。

    这一章是关于最常见的两种疾病的基因诊断,这两种病,一种是快速无情的杀手,另一种是缓慢又没完没了的盗取记忆者:冠心病和早老性痴呆症。我相信,我们在运用影响这两种疾病的基因的知识方面有一种危险,就是我们过于吹毛求疵、过于谨慎了,因此,我们就面临着另一种危险:拒绝人们接触到能够挽救生命的研究,从而犯下道德上的错误。

    有一个家族的基因,叫做载脂蛋白基因,或APO基因。他们基本上有四种,叫做A、B、C和——很奇怪的——E,尽管每一种在不同染色体上会有不同的形式。我们最感兴趣的是APOE,它凑巧位于第十九号染色体上。要理解APOE的工作,需要离题一点,谈谈胆固醇和甘油三酯的习惯。当你吃一盘熏肉和鸡蛋的时候,你吸收进很多脂肪,跟它们一起进来的是胆固醇——能够溶于脂肪的物质,很多激素都是从它开始造出来的(见第十号染色体那一章)。肝脏把这些东西消化掉,送它们进入血液循环,以让它们被送到其他器官里去。因为它们不溶于水,胆固醇和甘油三酯必须被名叫脂蛋白的蛋白质“背着”通过血液。在旅途开始的时候,送货的卡车叫做VLDL,是非常低浓度脂蛋白的意思,它装着胆固醇和脂肪。当它卸下它的一些甘油三酯的时候,它就变成了低浓度脂蛋白,或叫LDL(这是“坏的胆固醇”)。最后,在把胆固醇送到地方之后,它又变成高浓度脂蛋白,HDL(这是“好的胆固醇”),又回到肝脏去接受下一批货。

    APOE蛋白(叫做apo-s)的任务是把VLDL与一个需要甘油三酯的细胞上的受体介绍给对方;APOB蛋白(或说是apo-P)的任务,是卸胆固醇时做同样的工作。这样,很容易就可以看出,APOE和APOB是与心脏病有关基因的主要候选者。如果它们不正常工作,胆固醇与脂肪就留在血液里,慢慢会在动脉壁上累积起来,成了动脉粥样硬化。APOE基因被“剔除”了的老鼠即使吃正常的老鼠食物也会得动脉粥样硬化。制造脂蛋白与细胞上受体的基因也能够影响胆固醇和脂肪在血液里的行为,影响心脏病的发生。一种遗传的易得心脏病的特性叫做家族性高胆固醇血症,是胆固醇受体基因上一个罕见的“拼写错误”的结果。

    APOE之所以特殊,在于其非常“多态”我们并不是所有人都有同一形式的APOE基因,只有很少见的例外。相反,APOE就像眼睛的颜色一样:它有三个常见的类型,叫做E2、E3和E4。因为这三类在从血液里取出甘油三酯的效率有所不同,它们在是否易得心脏病方面也不同。在欧洲,E3是“最好”与最常见的一种:80%以上的人起码有一份E3,39%的人有两份。但是,有两份E4的那7%的人很早就有心脏病的危险比别人高得多,有两份E2的那4%的人也是如此,虽然得病的方式略有不同。

    但这是一个全欧洲的平均数。跟其他许多多态性相似,APOE的多态性也有着地理上的趋势。在欧洲,往北走得越远,E4就变得越常见,而E3变得越少(E2是大致不变的)。在瑞典和芬兰,E4的出现频率几乎是在意大利的三倍。因此,冠心病的频率也大致是意大利的三倍。再往远走,差异还更大。大约有30%的欧洲人至少有一份E4;东方人拥有E4的比例最低,在15%左右;美国的黑人、非洲人和波利尼西亚人中,这个比例是40%以上;新几内亚人是50%以上。这也许部分地反映了过去几千年中饮食里脂肪和肥肉的数量。在一段时间里我们已经知道,新几内亚人在吃自己的传统饮食,即甘蔗、芋头和偶尔从负鼠和树袋鼠那里得到的瘦肉时,几乎不得心脏病。但是,只要他们在露天矿上找到工作并开始吃西方的汉堡包与炸薯片时,他们很早就得心脏病的危险便飞快上升了——比大多数欧洲人快得多。

    心脏病是可以预防也可以治疗的疾病。特别那些有E2基因的人对高脂肪、高胆固醇的饮食非常敏感,换句话说,只要他们接受警告,远离这样的食品,他们就可以很容易地被治好。这是极有价值的基因信息。通过简单的基因诊断以挑出那些有得病危险的人并着重于他们的治疗,有多少生命可以挽救,有多少早期的心脏病可以避免啊。

    基因筛选并不会自动导致人工流产或基因疗法这些极端的解决办法,一个不祥基因的诊断会越来越多地导致不那么极端的治疗方法:去吃人造黄油以代替真黄油,去上健美操课。医学界应该尽快就学会不要警告所有人都避免高脂肪饮食,而是要挑出那些能够从这样的警告里获益的人,让剩下的人放松下来大吃冰激凌吧。这也许与医学界谨慎的直觉相反,却与希波克拉底誓言(希波克拉底是古希腊医师,被誉为西方医学之父,认为医师所医治的不仅是疾病,而且是病人。希波克拉底每次行医,必先吟诵自己的把为病家谋幸福作为第一目的的誓言。希波克拉底誓言被视为医德的基础)不矛盾。

    但是,我把你带到APOE这里,主要不是为了写心脏病的,尽管我感到我仍然在违反自己的规定,因为我要写另一种病了。APOE是被研究得最多的基因之一,原因不在于它在心脏病里的作用,而在于它在另一种更邪恶、更无法治疗的疾病中的重要作用:早老性痴呆症。伴随着年龄在很多人那里出现的是摧毁性的记忆与性格的丧失一这在很少的一些年轻人那里也同样会出现,它被归结为各种因素,环境的、病理的,或是偶然原因。诊断早老性痴呆症的症状是大脑细胞里无法溶解的蛋白质“硬块”的出现,它的生长会损坏细胞。病毒感染曾经一度被怀疑是病因,头部经常受打击也同样被怀疑为病因,铝在硬块中的存在使得铝锅有一段时间成了怀疑对象。传统的经验是说,遗传与这种病没有什么关系或只有很少的关系。有一本教科书很坚定地说:“它不是遗传病。”

    但是,就像基因工程的发明者之一保罗•伯格所说:“所有疾病都是遗传病”,即使当它也受其他因素影响的时候。终于,在伏尔加德国人(18世纪离开德国到俄国伏尔加地区定居的人。19世纪末,由于资源不足,很多人被送到西伯利亚。伏尔加德国人一直以来很贫困并受到严格的控制,20世纪以来更受到了深重的迫害)现在美国的后裔中,找到了早老性痴呆症以高频率出现的家谱,而且,到了90年代早期,有三个基因被与早发性早老性痴呆症联系起来了。这三个基因,一个在第二十一号染色体上,两个在第十四号染色体上。但是,在1993年,一个比这重要得多的发现是第十九号染色体上的一个基因似乎与老年人的早老性痴呆症有联系,也就是说,老年人中的早老性痴呆症也有部分遗传基础。很快,犯有“罪行”的基因就被找到了,不是别的,正是APOE。

    一个血脂基因与一种大脑疾病之间的联系不应该是这样让人惊讶的。说到底,早老性痴呆症患者常常胆固醇也高,这已经被发现了有一阵儿了。不过,它们之间联系的密切性让人吃了一惊。

    “坏”的基因形式在这里又是E4。在特别容易得早老性痴呆症的家族里,没有E4基因的那些人得这种病的几率是20%,平均发病年龄是84岁。那些有一份E4基因的人,发病几率上升到47%,平均发病年龄降低到75岁;那些有两份E4基因的人,发病几率是99%平均发病年龄是68岁。换句话说,如果你带有两份E4基因(7%的欧洲人就是如此),你最终得早老性痴呆症的几率大大高于一般人。有些人仍然能够逃过这样的命运——事实上,有一项研究就发现了一个有两份E4的86岁老人,他还保留着他所有的智慧。在很多没有显现出记忆衰退的人当中,早老性痴呆症那经典的硬块仍然存在,它们在带有E4基因的人体内也比带有E3基因的人体内更严重。那些起码带有一份E2基因的人比带有E3基因的人更不容易得早老性痴呆症,尽管他们之间的区别很小。这不是偶然的副产物,也不是统计的巧合:这看上去像是这个病的机理的关键所在。

    回想一下,E4在东方人里很少,在白人里常见一些,而在非洲人里更常见,在新几内亚的美拉尼西亚人(Melanisian)中最为常见。随之而来的应该是早老性痴呆症也遵从这样一个梯度,但是,事情并不这么简单。相比于E3/E3的人,得早老性痴呆症的相对危险在E4/E4的白人里比E4/E4的黑人和拉丁美洲人里都高得多。也许,是否容易得早老性痴呆症还受其他基因影响,而这些基因在不同的种族之间有所不同。而且,E4的效果在女性中似乎比在男性中更强。不仅仅有更多女性得早老性痴呆症,而且E4/E3的女性与E4/E4的人有同样的得病危险。在男性当中,有一份E3就可以降低危险。

    你也许在想,为什么E4还能够存在,更别说还以这么高的频率而存在。如果它既加剧心脏病又加剧早老性痴呆症,它当然应该已经在很早以前就被更无害的E3和E2灭绝掉了。我则倾向于这样来回答这个问题:高脂肪的饮食直到最近以前还是非常少见的,它对冠状动脉的副作用几乎不重要,而早老性痴呆症对于自然选择来说根本是不相关的,因为它不仅仅是发生在那些得病之前很早就已经把孩子抚养成人的人身上,而且在人们受到它袭击的那个年龄,大多数石器时代的人早就死了。但是,我不太肯定这是不是一个好的回答,因为多肉多奶酪的饮食在世界上的某些部分已经存在很久了——长得足够让自然选择去做它的工作了。我怀疑E4在身体里还有另外一个我们不知道的功能,在这个功能上它比E3强。记住:基因的存在不是为了导致疾病。

    E4与更常见的E3之间的区别在于:基因的第334个字母是G而不是A,E3与E2之间的区别是第472个字母是G而不是A。这样的结果是:E2蛋白质比E4多了两个半胱氨酸,而E4比E2多了两个精氨酸,E3介于两者之间。这些细微的变化,在一个有897个字母长的基因上,足够改变APOE蛋白质工作的方式。那个工作到底是什么,还很模糊,但是有一个理论是说,它的作用是稳定另外一个名字叫tau的蛋白质,而tau的作用又可能是保持一个神经细胞的管状“骨架”的形状。tau对于磷酸盐很有亲和性,而磷酸盐却阻止它做自己的工作;APOE的工作就是让tau别碰磷酸盐。另外一个理论是说,APOE在大脑里的工作与它在血液里的工作有相似之处。它带着胆固醇走在脑细胞之间和脑细胞内部,使得脑细胞可以建造和修理它们那些脂肪不能穿过的细胞膜。第三个较为直接的理论是说,不管APOE的工作是什么,E4都对一种淀粉状p多肽有很强的亲和力,而这正是积累在早老性痴呆症患者神经细胞里的东西,APOE则以某种方法帮助这些具有毁灭性的硬块的形成。

    这些细节有一天会变得重要,但是现在,重要的事实是我们突然掌握了一种作预测的方法。我们可以检测个体的基因,做出相当好的预言来预测他们是否会得早老性痴呆症。遗传学家埃里克•兰德(EricLander)最近提出了一个让人震惊的可能性。我们现在知道罗纳德•里根(Ronald Reagan)就有早老性痴呆症,现在回想起来,似乎有可能他还在白宫里的时候就有了此病的早期症状。假设在1979年一个又肯干又有倾向性的记者急于发现某种方法来丢里根这个总统候选人的脸,假设他抄走了一张里根用来擦过嘴的纸巾并检测了上面的DNA(先忽略这样的检测当时还没有出现这一事实吧)。假设他发现了这个历史上年龄第二大的总统候选人很有可能在任职期间患上早老性痴呆症,并把他的发现在他的报纸上刊登出来。

    这个故事刻画了基因测试所带来的对于公民自由的威胁。当问到我们是否应该提供APOE测试给那些好奇地想知道自己是否会得早老性痴呆症的人,大多数医学界人士都回答:否。最近,在深思熟虑之后,英国在这方面最好的思想库——纳菲尔德生物伦理委员会(Nuffield Council on Bioethics)——也做出了同样的结论。检查某人是否患有一种无药可治的病,说得再好听,也是值得怀疑的。它可以为那些没有E4基因的人买来安心,但却付出了高昂的代价:那些有两份E4基因的人几乎无疑会得到无药可治的痴呆症的“判决”。如果这样的诊断是绝对可靠的,那么(就像南希•韦克斯勒对于亨廷顿氏病所说的——见第四号染色体那一章),这样的检测可能对人的打击更大。另一方面,亨廷顿氏病这265样的测试,起码不会误导人。但是在不是那么肯定的情况下,比如说APOE的例子,这种测试的价值就更低了。你仍然可以——如果你非常幸运一有两份E4基因却活到很大年纪都没有症状,正如你仍然可以——如果你运气非常差——没有E4基因而在65岁的时候患上早老性痴呆症。因为有两份E4这样一个诊断既不是患早老性痴呆症的充分条件也不是必要条件,又因为这病无法治疗,别人不应该向你提供基因测试,除非你已经有了这个病的症状。

    一开始,我认为所有这些理由都很让人信服,但是现在我不这么肯定了。说到底,给人提供HIV病毒检测(只要他们自己想要)被认为是符合伦理的,虽然艾滋病(直到最近以前)是无药可治的。艾滋病并不是HIV感染之后的必然结果:有些人虽然有HIV感染却能够无限期地存活。不错,在HIV的例子里,社会还有另外一个愿望,就是阻止HIV感染的传播,而这在早老性痴呆症里就没有。但是,我们在这里考虑的是有患病危险的那些个体而不是整个社会。纳菲尔德委员会是通过不言明地把基因测试和其他测试区分开的方法来对待这个问题的。一份报告的作者菲奥娜•考尔迪科特(Fiona Caldicott)夫人说,把一个人容易得某种疾病的特点归结于他的基因组成,可以扭曲人们的态度。它使人们错误地相信遗传的影响是至关重要的,这使得他们忽略社会以及其他因素,而这又使得与精神疾病联系在一起的耻辱更多了。

    这是一个被不恰当地运用了的恰当的观点。纳菲尔德委员会是在使用双重标准。心理分析学家和精神病学家对于精神疾病提供“社会”解释,他们只需要最薄弱的证据就可以得到执照去行医,而这些解释与遗传解释一样可能让一些人显得更耻辱。这些“社会”解释持续繁荣,而“伟大正义”的生物伦理学却把另外一些有根据的诊断定为“非法”,只因为它们是基因方面的解释。在努力寻找理由去禁止用基因作解释却又允许用社会作解释大行其道的时候,纳菲尔德委员会甚至采用了这样的方法:称APOE4检测的预测能力“非常低”。这是一个奇怪的用词方法,因为在E4/E4与E3/E3之间,得病的危险有11倍的区别。就像约翰•麦道克斯引用APOE这个例子来阐明他的观点时评论的一样:有些根据,使人怀疑医生们在向他们的病人提供不受欢迎的基因信息时很鋳躇,也因此而没有抓住有价值的机会,……这种鋳躇有时有些过度。”

    另外,尽管早老性痴呆症没有治愈方法,现在已经有药物来减轻一些症状,也可能有一些可以让人们使用的预防措施去防止得病,虽然这些措施有多大价值还不确定。一个人使用所有的预防措施难道不是更好吗?如果我有两份E4,我可能很愿意知道,这样我可以做志愿者去试验新的药物。对于那些在行为上放纵自己从而会增加得病机会的人来说,这样的检测无疑是有意义的。例如,现在已经很明显,带有两份E4基因的职业拳击手得早发性早老性痴呆症的机会如此之大,拳击手们的确是被告知他们最好是去作检测,如果发现自己有两份E4基因就不要再搞拳击了。每六个拳击手中就有一个在40岁之前会得震颤麻痹或是早老性痴呆症——在微观上它们的症状是相似的,但是致病基因却不同——很多人,包括穆罕默德•阿里(Mohammed Ali)(穆罕默德•阿里:美国20世纪著名黑人拳击手,奥运会冠军),得病的年龄还要更早。在那些得早老性痴呆症的拳击手中,E4基因不同寻常地常见,在那些受到过头部伤害,之后又发现神经细胞里有硬块的人当中,也是如此。

    在拳击手那里出现的事情,在其他头部会受冲击的运动里可能也会出现。有一些道听途说的证据表明很多优秀的足球运动员在上了年纪之后过早地衰老——最近的一些伤心的例子是英国倶乐部队的丹尼•布兰茨弗劳尔(Danny Blanch flower)、乔•默瑟(JoeMercer)和比尔•佩斯利(BillPaisley),被这些证据提醒,神经学家们已经开始研究在这些运动员中早老性痴呆症的普遍性。有人计算出,一个足球运动员在一个赛季里平均要顶头球800次,对头部的损害和磨损可以是很可观的。荷兰的一项研究确实发现足球运动员比起其他项目的运动员来有更严重的记忆衰退,挪威的一项研究则发现了足球运动员脑部损伤的证据。在这里又有这样的可能,即如果E4/E4纯合子起码在选择职业时能够知道自己面临很高的危险,还是有可能受益的。我是经常把头撞在门框上的一个人,因为建筑师没有把它们设计得高到让个子高的人也能走过,我自己也在想,我的APOE基因是什么样子的。也许我也应该去测试一次。

    测试还可以有其他价值。起码有三种新的早老性痴呆症药物在发展和试验阶段。已经使用的药物,泰克林(tacrine),现在我们知道它对于带有E3或E2基因的人要比对带有E4基因的人效果好。基因组一次又一次地把“个体差异”这一课给我们上到家了。人类的多样性是它最重要的信息。但是在医学界,人们仍然明显地不情愿把人当做个体来治疗,而愿意把人当成群体来治疗。对一个人合适的治疗方法也许对另外一个人就不合适。饮食上的建议可以挽救一个人的生命,对另外一个人却可能一点用处都没有。将来会有这么一天,医生在给你开一大堆药之前先要检查一下你带有的是哪一种基因。这样的技术已经在被开发了,一个加利福尼亚的小公司爱菲梅特利克斯与其他公司一道试图把一整个基因组的基因序列都放到一个硅片上去。有一天,我们也许每人都会随身带着这样一个芯片,医生的电脑通过它就可以读出任何基因,这样,医生就可以更好地使他的处方适应我们的情况。

    也许你已经感觉到了这样做的问题是什么——以及专家们对于APOE检测过于谨慎的真正原因。假设我真的有E4/E4,而且我是一个职业拳击手。我因此有比一般人高得多的可能会发作心绞痛和早发性早老性痴呆症。假设我今天不是去看医生,而是去见一个医疗保险代理商,想搞一份新的人寿保险以配合我的房屋抵押,或者是搞一份新的医疗保险以应对将来的疾病。我拿到一份表格,被要求填写对这样一些问题的回答:我是否吸烟,喝多少酒,是否有艾滋病,体重多少,是否有心脏病的家族史——这是个遗传问题。每一个问题都设计得用来把我归类到一个特殊的风险级别,这样,我才可以得到一个既可以让保险公司赢利又仍然有竞争力的报价。很合乎逻辑的事是,保险公司很快也会要求看看我的基因,问问我是E4/E4还是有一对E3。它担心我也许是因为从最近的一次基因检测中知道我自己肯定要完蛋了,所以大买特买人寿保险,就像一个计划放火烧楼的人给楼买保险一样,坑保险公司一笔。不仅如此,它还看到,它可以通过给基因检测结果令人放心的那些人提供折扣价来吸引到让它赢利的生意。这被人们叫做“摘樱桃”这也正是为什么一个年轻、瘦削、非同性恋、不吸烟的人已经发现:比起那些年老、胖墩墩的同性恋吸烟者,他可以买到很便宜的人寿保险。有两份E4基因跟这样也差不多。

    在美国,健康保险公司已经对早老性痴呆症的基因检测感兴趣了,这没有什么奇怪的,早老性痴呆症可以是需要保险公司拿出高额开销的疾病(在英国,医疗保险基本上是免费的,主要的担心是人寿保险)。但是,保险公司在开始对同性恋者比对异性恋者收取更高保费以反映出同性恋得艾滋病的更大可能性时,引起了人们极大的愤怒。因为还记着这件事,所以保险公司现在是在小心翼翼地探路。如果基因检测对很多基因都成为常规的事情,那么,整个群体风险的概念,保险业的基础,就会受到影响。一旦我的命运被精确地了解,我就会得到这样一个保险费的报价:它会正好够我一生看病所用。对于那些在基因上很不幸的人来说,这样的保费也许是他们负担不起的:他们就会成为医疗保险里的下层阶级。因为对这些问题很敏感,英国的保险业联合会在1997年同意两年之内它们不得把做基因检测作为买保险的条件,而且不得(对10万英镑以下的房屋抵押)要求知道你已经做过的基因检测的结果。有些公司走得更远,声明基因检测不在它们的计划之内。但是这样的羞羞答答可能长不了。

    为什么人们对这个问题有如此强烈的感受,当它在实际中意味着很多人的保险费会降低?事实上,与生命中很多其他事情不同,基因上的好运气是在“受了眷顾”与没有“受眷顾”的人当中平均分配的——富人无法买到好基因,虽然富人原本就在保险上花更多的钱。我想,答案是在决定论的核心里。一个人在吸烟喝酒方面的决定,甚至是让他患上艾滋病的决定,在某种意义上来说是他自愿做出的。他“决定”在APOE基因上有两份E4,这却根本不是一个决定;这是大自然替他做出的决定。在APOE基因的基础上对人歧视就像是以皮肤颜色或性别为基础对人歧视。一个不吸烟的人也许可以很正当地拒绝与吸烟者被放在同一个风险级别里,拒绝给吸烟者的保险费提供“补贴”但是,如果一个E3/E3的人拒绝“补贴”E4/E4者的保险费,他却是在对一个什么错都没有只是运气不好的人表达偏执与偏见。

    对于用人单位拿基因检测来挑选可能雇用谁,这样的担心倒不多。即使有更多的检测成为可能,也没有什么东西可以引诱用人单位去使用它们。事实上,当我们对“基因决定我们对环境中的风险有多敏感”这个说法更为习惯之后,有些检测也许会对用人单位和雇员都成为好的做法。在一个要与已知的致癌物质(比如说日光)有一定接触的工作上(比如说,救生员),用人单位如果雇用有着不正常的p53基因的人,在将来也许会算是忽视自己关心员工的责任。在另一方面,用人单位也许出于更加自私的动机会要求申请工作的人去进行基因检测:以选择先天更健康或有更外向的性格的人(这些正是找工作时的面试所要达到的目的)。但是,已经有法律规定不得歧视了。

    同时,有一种危险,就是为保险而作基因测试或为选择雇员而作基因测试这样的“怪物”会把我们吓得不敢为了发展更好的医药的目的而进行基因测试。但是,有另外一个怪物让我更害怕:那就是担心政府要告诉我,我能如何使用自己的基因。我很不希望与保险公司分享我的遗传密码,我很希望我的医生能够知道并利用它,但是我坚持这应该是我自己的决定,而且我的这种坚持到了狂热的程度。我的基因组是我的财产,不是国家的。我和谁应该分享我的基因的内容是不应该由政府决定的,我是否应该作基因检测是不应该由政府决定的。这些应该由我决定。有一种很可怕的“父性”倾向,认为“我们”在这些问题上应该有一个统一的政策,认为政府应该制定规则来决定你可以看到多少你自己的遗传密码,你可以把它给什么人看。但是它是你的,不是政府的,你应该永远记住这一点。

    第二十号染色体 政治

    噢,英国的烧牛肉,古老英国的烧牛肉。 ——亨利·费尔丁《格拉博街歌剧》

    科学的燃料是无知。科学就像一个饥饿的火炉,必须要从包围着我们的无知森林中取来木柴喂给它。在这个过程中,我们称做“知识”的开阔地扩展开来,但是,它扩展得越大,它的边界就越长,越多的无知就出现在我们面前。在基因组被发现以前,我们不知道在每一个细胞的“心脏”里都有一个30亿个字母长的文件,我们对它的内容一无所知。现在,当我们读了这本书的一部分之后,我们就意识到了很多新的神秘现象。

    这一章的主题就是神秘。一个真正的科学家认为知识很沉闷;向无知——以前的发现揭示出来的新的神秘现象——开战才会让他来劲。森林比开阔地更有意思。在第20号染色体上有一个小“灌木丛”,它既迷人又恼人,比起哪个神秘现象来也不逊色。它已经造就了两个诺贝尔奖,只不过是因为发现了它的存在,但它固执地抵抗着,不肯被砍伐下来成为空地。而且,就像是要提醒我们,具有神秘性的知识有一种习惯是要改变世界,在1996年的某一天,它成为了最具煽动性的政治问题之一。它与一个名叫PRP的小小基因有关。

    故事从羊开始。在18世纪的英国,一组企业家先驱给农业带来了革命。在这些企业家中有莱切斯特郡的罗伯特·贝克维尔(Rober tBakewell)。他的发现是:通过让羊和牛有选择地与自己的后代里最出色的那些来交配的方法,可以使人们喜欢的特点以更高的频率出现,迅速改良品种。这种近亲繁殖用到了羊身上,产生了生长快、肉肥、毛长的羊。但是,它有一个没有预料到的副产品。萨佛克种的羊尤其明显地在年老之后出现了精神错乱的症状。它们挠自己、走路蹒跚、用一种奇怪的步子小跑,变得焦虑,似乎对抗群体生活。它们很快就死了。这种无法治愈的疾病叫做瘙痒症,它成了一个大问题,常常是每十只母羊里就有一只死于这个病。瘙痒症随着萨佛克种的羊,在较小程度上也随着其他品种的羊,来到了世界其他地方。它的病因仍然是个谜。它似乎不是遗传的疾病,但是它也无法被追踪到另外一个起因。在30年代,一位兽医学研究者在试验另外一种疾病的疫苗时,导致了瘙痒症在英国的一场大传播。这个疫苗的一部分来自其他羊的脑子,尽管这些脑子已经用福尔马林彻底消毒过了,它们仍然保留了部分传播感染的能力。从那时开始,兽医学家们就形成了一个“正统”的观念,且不说这个观点还是受了“蒙蔽”的:既然瘙痒症可以传播,它肯定是由什么微生物引起的。

    但是,什么微生物呢?福尔马林没有杀死它。清洁剂、煮沸和用紫外光照射也杀不死它。这个微生物能够通过连最小的病毒都能够挡住的过滤器。它在受感染的动物体内不引起任何免疫反应,有些时候,从注入致病物到发病之间有很长的延迟——但是如果把带病体直接注射入大脑,延迟就会短得多。瘙痒症筑起了一道让人摸不着头脑的无知的墙,打败了一代意志坚强的科学家。在相似症状出现在美国貂养殖场和落基山脉一些国家公园里居住的野生麋和黑尾鹿时,它的神秘性反而更深了。如果在实验室里把带病体直接注射入体内,貂对于羊的瘙痒症是有抵抗力的。到了1962年,一位科学家又回到了遗传的假说。他提出,也许瘙痒症既是遗传病又是可以传染的,这在那时还是一种没有听说过的组合。遗传病多得是,由遗传因素决定是否易受感染的传染病也很多——霍乱现在是一个经典的例子了——但是一个有传染性的“颗粒”能够通过某种方式在生殖细胞里旅行,这种说法似乎违反所有的生物学定律。这位科学家——詹姆斯·帕里(James Parry)——坚定不移。

    大约就在这个时候,一位美国科学家——比尔·哈德洛(Bill Hadlow)——在伦敦维尔康姆医学博物馆看到了被瘙痒症困扰的病羊那些受了损害的大脑的图片。他被这些图片与他以前在另外一个非常不同的地方所见的图片之间的相似而震动了。瘙痒症马上就要变得跟人类更加有关了。另外那个地方是巴布亚新几内亚,在那里有一种可怕的、让人丧失能力的大脑疾病,名字叫做酷鲁(Kuru),它在一个名叫佛尔的部落里已经打倒了大批的人,尤其是妇女。一开始,她们的腿开始晃晃悠悠,然后,她们的整个身体开始摇晃,她们说话开始吐字不清,她们突然会出人预料地大笑起来。在一年之内,因为大脑逐渐从内向外瓦解,病人也就死了。到了50年代末期,酷鲁已经是佛尔妇女死亡的主要原因了。它杀死了如此之多的妇女,使得在部落里男性和女性的比例成了三比一。儿童也得上了这种病,但是相比之下成年男性得病的很少。

    后来证明这是一个关键的线索。在1957年,两个在那个地区工作的西方医生,文森特·齐嘎斯(Vincent Zigas,生物学家)和卡尔顿·盖达塞克(Carlton Gajdusek,生物学家,1976年获诺贝尔生理学和医学奖)很快意识到了在发生什么。当有人死了的时候,尸体被部落里的妇女以固定仪式肢解,作为葬礼仪式的一部分,而且据传还会被吃掉。葬礼上的吃人习俗已经快要被政府铲除掉了,它已经有了足够的恶名,很少有人愿意公开谈论。这使得有些人怀疑它是否真的在过去发生过。佛尔人用断续、嗑巴的英语描述1960年前的葬礼是“切开、煮、吃”,但是,盖达塞克和其他人搜集了足够多的证人的叙述,使得人们不再认为这样的说法是在撒谎。一般情况下妇女和儿童吃内脏和脑子,男人吃肌肉。这立刻就为酷鲁病的发生提示了一个解释。它在妇女和儿童中最常见,它出现在死者的亲属里——但是在姻亲和血亲里都出现。在吃人的习俗被定为不合法之后,发病年龄稳定地提高了。说得具体一些,盖达塞克的学生罗伯特·克里茨曼(Robert Klitzman)查出了三群死亡者,每一群死者都在40年代和50年代参加过因酷鲁病而死的人的葬礼。例如,在1954年有一个为一位名叫尼诺的妇女举行的葬礼,参加葬礼的15名亲戚中有12名后来死于酷鲁。那三个没有死于酷鲁的人一个是在很年轻时就死于其他原因了,一个是因为她与死者嫁给了同一个男子,所以传统上禁止她参与吃尸体的行为,一个是事后声称她只吃了一只手。

    当比尔?哈德洛看到被酷鲁病折磨的人脑与被瘙痒症折磨的羊脑之间的相似性时,他立刻给在新几内亚的盖达塞克写了信。盖达塞克跟踪了这个线索。如果酷鲁病是瘙痒症的一种,那么就应该可以通过直接往脑子里注射的办法把它由人传给动物。在1962年,他的同事乔·吉布斯(Joe Gibbs)开始了一长串的实验,试图用佛尔部落死人的脑子使猩猩和猴子感染上酷鲁病(这样的实验在今天是否会被认为是符合伦理的,不在本书讨论范围之内)。头两只猩猩在接受了注射之后的两年之内得了病,死了。它们的症状很像那些酷鲁病人的症状。

    证明酷鲁病是瘙痒症在人体里的自然表现形式并没有什么帮助,因为瘙痒症研究在到底什么是病因的问题上把人搞糊涂了。自从1900年以来,一种罕见又致命的大脑疾病就一直困扰着神经学家。这种病后来被叫做克鲁茨菲尔特—雅各布病(Creutzfeldt-Jacob),或简称CJD。它的第一个病例是1900年由布列斯劳(Breslau,当时德国的一个城市,现属波兰)的汉斯·克鲁茨菲尔特(Hans Creutzfeldt)诊断出来的,病人是一个11岁的女孩,她在那之后的十年里死去了。因为CJD几乎从来不袭击特别年轻的人,而且得病之后死得也快,这个病例乍看起来几乎肯定是一个奇怪的误诊,它给我们留下的迷惑对于这个神秘的病来说是太典型了:第一个被查出的CJD病人原来没有这个病。但是在20年代,阿尔方斯·雅各布(Alfons Jakob)确实发现了一些可能是CJD的病例,于是病的名字就定下来了。

    吉布斯的猩猩和猴子很快就被证明对CJD与对酷鲁一样敏感。在1977年,事情的发展向更可怕的方向转了个弯。两个癫痫病人在同一家医院里接受了运用微电极进行的试验性脑手术之后都染上了CJD。这些电极以前在一个CJD患者身上被使用过,但是使用之后它们被用适当方式消毒过了。那致病的神秘东西不仅能够抵挡住福尔马林、清洁剂、煮沸和照射,它还能抵挡住手术器械的消毒。这些电极被空运到贝塞斯达(Bethesda)(美国国家卫生研究院所在地。——译者注),去在猩猩身上使用,它们也很快染上了CJD。这被证明是一个新的而又更加古怪的流行病:“由医生引起的”CJD。从那时到现在它杀死了近100人,都是身材矮小的人使用了从尸体的脑垂体中分离出来的人体生长激素。因为每一个病人接受的人体生长激素都来自好几千个脑垂体,提取的过程就把很少几个自然出现的CJD病给放大成了一个真正的流行病。但是,如果你谴责科学是在以福斯特式的行为与自然捣乱而引火烧身,那么你也得给它些荣誉,因为它解决了这个问题。生长激素引起的CJD规模有多大是在1984年被了解到的,但早在这之前,合成生长激素,最早的来自经过基因工程改造的细菌的产品之一,就已经在代替从尸体里提取的激素了。

    让我们来盘点一下这个奇怪的故事在1980年左右时的样子吧。羊、貂、猴子、老鼠和人都可以因为注射受了感染的脑组织而染上同一种病的不同形式。这个感染经受住了几乎所有通常的杀灭微生物的程序,而且,在最有威力的电子显微镜下它仍然是隐形的。但是在日常生活里它又不传染,似乎没有通过母亲的乳汁传染,不引起任何免疫反应,有些时候可以在休眠状态里呆上二三十年,只需要些许剂量就可以染病——虽然染病的可能性与剂量非常有关。它到底是什么呢?

    在所有这些兴奋当中几乎被忘记了的是萨佛克羊的病例,以及近亲繁殖看上去似乎加剧了瘙痒症这个线索。逐渐变得清楚的还有,在几个病人那里——尽管只占总数的不到6%——似乎有一些家族的联系,暗示着这可能是遗传病。了解瘙痒症的关键不是在病理学家所掌握的那套“武器库”里,而是在遗传学家的“武器库”里。瘙痒症存在于基因里。这个事实在以色列表现得最充分。当以色列科学家在70年代中期在自己的国家里寻找CJD病例的时候,他们注意到了一个不寻常的事情。整整14个病例,或者说,是偶然发生率的30倍,出现在从利比亚移民到以色列的那为数很少的犹太人当中。立刻,怀疑到了他们的饮食上面,而那包括了对羊脑的特别爱好。但是,这不是问题所在。真正的解释是遗传方面的:所有得病的人都属于一个分散开了的家族。现在知道,他们都带有同一个突变,这个突变在斯洛伐克、智利和德国裔美国人的几个家庭里也找到了。

    瘙痒症的世界很怪异、很异乎寻常,却也模模糊糊地有点熟悉。就在一组科学家抵挡不住诱惑要把瘙痒症总结为遗传病的同时,另外一组却在琢磨一个革命性的、事实上可以说是异端邪说的想法,在一开始它似乎是向与遗传病相反的方向走的。早在1967年,有人就提出,传播瘙痒症的东西可能既不含有DNA也不含有RNA。它也许是地球上惟一不用核酸也没有自己的基因的生命。因为弗兰西斯?克里克刚刚在那之前不久发明了被他半严肃地称为“遗传的中心教义”这个词——DNA制造RNA制造蛋白质——有一种生命没有DNA,这个主张在生物学里所受的欢迎,与路德(Luther,16世纪宗教改革家)的主张在罗马教廷所受的欢迎一般。

    1982年,一位遗传学家,斯坦利?普鲁西纳(StanleyPrusiner)提出一个方案,来解决一个没有DNA的生命与一种在人类DNA里游走的疾病之间明显的矛盾。普鲁西纳发现一团能够不被普通的蛋白酶切碎的蛋白质,它在有瘙痒症类疾病的动物体内存在,在同样一种动物健康的个体里却不存在。他比较容易地就得到了这一团蛋白质里氨基酸的序列,并推测出与其等价的DNA序列,然后他在老鼠的基因里寻找这个序列,后来在人类基因里也找了。普鲁西纳就这样发现了一个基因,名叫PRP(抵抗蛋白酶的蛋白质),并且把他的“异端邪说”钉到了科学这个教堂的大门上。他的理论在之后的几年里逐渐发展起来,是这样的:PRP是老鼠和人类体内的正常基因,它制造一个正常的蛋白质。它不是一个病毒的基因。但是,它的产品,名字叫做蛋白侵染子的,是一个有着不寻常性质的蛋白质,它可以突然改变自己的形状,变成一个又硬又黏的东西,抵御住所有想要摧毁它的企图,并结成一团,破坏细胞的结构。所有这些已经够史无前例的了,但是普鲁西纳还提出了更异乎寻常的东西——这种新型的蛋白侵染子有能力改变正常的蛋白侵染子,使其成为像自己一样的形状。它不改变蛋白质的序列——蛋白质与基因一样也是由长长的数码序列组成——但是它改变蛋白质的折叠方式。

    普鲁西纳的理论摔在了石头地上。它未能解释瘙痒症与类似疾病的一些最基本的特点,具体地说,它未能解释这个病有多种形式这样一个事实。正如普鲁西纳今天沮丧地说的:“这样的假说得不到什么热情。”我还清楚地记得,那时我在写一篇文章时询问专家对于普鲁西纳理论的意见,而那些专家谈到普鲁西纳的理论时带有一种轻蔑。但是,慢慢地,随着证据的积累,看起来他似乎是猜对了。最终变得清楚起来的是,没有蛋白侵染子基因的老鼠不会染上这一类病里的任何一种,而一剂形状不对的蛋白侵染子就够让一只老鼠得病了:这些病是由蛋白侵染子造成的,也是通过它们传播的。但是,尽管普鲁西纳的理论从那时起砍倒了一大片无知的林子——普鲁西纳也恰当地尾随着盖达塞克去斯德哥尔摩拿回了诺贝尔奖(普鲁西纳于1997年获诺贝尔生理学和医学奖。——译者注)——大片林子仍然存在。蛋白侵染子保持着深深的神秘性,最突出的一个是它们到底是为了什么而存在。PRP基因不仅在所有检查过的哺乳动物里都存在,它的序列也很少有变化,这暗示着它是在做什么很重要的工作。这个工作几乎肯定是与大脑有关,因为大脑是这个基因被激活的地方。这个工作也许需要铜,因为铜是蛋白侵染子很喜欢的东西。但是——这是它的神秘所在——一只老鼠的两份PRP基因如果在出生之前就被有意拿掉,它仍然是一只完全正常的老鼠。看起来,不管蛋白侵染子的工作是什么,老鼠可以不需要它就长大。为什么我们要有这么一个有潜在致命性的基因?我们仍然不得而知。

    同时,我们只差一两个突变就会从我们自己的瘙痒症基因那里得上这个病。在人体内,这个基因是有253个三个字母长的词。尽管最前面的22个和最后面的23个在蛋白质一制造出来的时候就被砍下去了。只在四个位置上,一个改变会引发疾病——四种不同形式的疾病。把第102个词从脯氨酸变成亮氨酸会引起戈斯特曼—斯特劳斯勒—杉克病(Gerstmann-Straussler-Scheinker),这是一种遗传病,病人可以存活很长时间。把第200个词从谷氨酰胺改成赖氨酸会引起在来自利比亚的犹太人当中典型的CJD病。把第178个词从天冬氨酸改成天冬酰胺引起典型的CJD,除非第129个词也同时被从缬氨酸改成甲硫氨酸。在这种情况下,结果是由蛋白侵染子引起的疾病里最可怕的一种。这是一种罕见的疾病,被称为致命家族失眠症,在几个月彻底的失眠之后死亡。在这个病里,丘脑(也就是大脑里的睡眠中心之一)被疾病吞噬掉了。看来,蛋白侵染子引起的不同疾病的不同症状,是不同的大脑区域被侵蚀的结果。

    在这些事实最初变得清楚之后的十年,科学在进一步探索这个基因的神秘性方面成果辉煌。从普鲁西纳和其他人的实验室里,巧妙得几乎让人发懵的实验不断涌现出来,揭示了一个不同寻常的关于决定性和专一性的故事。“坏”的疾病通过重新折叠它的中心部分(第108到第121个词)来改变自己的形状。在这个区域里的一个突变会使形状的改变更容易发生,它在一只老鼠生命的如此早期就会致死,蛋白侵染子在出生之后的几个星期之内就会发作。我们在不同种类的蛋白侵染子疾病中所看到的突变,都是“边缘”性质的,它们只稍微改变一下蛋白质形状改变的机会。这样,科学告诉了我们越来越多有关蛋白侵染子疾病的事情,但是,每一条新知识只暴露出了更深的神秘。

    这个形状的改变到底是怎么发生的?是否像普鲁西纳所设想的那样,还需要有未被发现的第二个蛋白质,被称为X蛋白质的那个?如果真是如此,为什么我们无法发现它?我们不知道。

    同样的一个基因,在大脑的所有区域都表达,它怎么可能根据自己带有什么样的突变而在不同的区域里有不同的表现呢?在山羊里,疾病的症状可以是嗜睡也可以是过度兴奋,看它们得的是两种疾病形式里的哪一种。我们不知道这是为什么。

    为什么物种之间有一道屏障,使得这些疾病在物种之间很难传递,在一个物种之内却很容易?为什么通过口腔传染不容易得病,而直接注射到脑子里却相对比较容易?我们不知道。

    为什么症状的出现由剂量大小决定?一只老鼠摄入的蛋白侵染子越多,发病就越快。一只老鼠拥有的蛋白侵染子基因份数越多,注射“无赖”蛋白质之后发病就越快。为什么?我们不知道。

    为什么杂合体要比纯合体更安全?换句话说,如果在你的一份基因上第129个词是缬氨酸,在另一份上是甲硫氨酸,你为什么就会比那些有两份缬氨酸或是两份甲硫氨酸的人对蛋白侵染子疾病有更强的抵抗力(致死家族失眠症除外)?我们不知道。

    这些疾病为什么这么挑剔?老鼠很难患上仓鼠瘙痒症,反过来也一样。但是,一只被人工加了仓鼠蛋白侵染子基因的老鼠,却在接受仓鼠脑子的注射之后能够患上仓鼠瘙痒症。一只带有两份不同的人类蛋白侵染子基因的老鼠,能够患上两种人类的疾病,一种像是致死家族失眠症,一种像是CJD。一只既有人类蛋白侵染子基因又有老鼠蛋白侵染子基因的老鼠,比起只有人类蛋白侵染子基因的老鼠,患病会更慢。这是否说明不同的蛋白侵染子相互有竞争?我们不知道。

    这个基因在穿过一个新的物种时是怎样改变它的品系的?老鼠很难患上仓鼠瘙痒症,但是一旦患上了,它们就把它越来越容易地传给其他老鼠。为什么?我们不知道。

    为什么这个疾病从接受注射的位置缓慢而逐渐地传播开去,仿佛坏的蛋白侵染子只能够改变那些就在它们旁边的好的蛋白侵染子?我们知道这个疾病要通过免疫系统里的B细胞,它们不知怎么一来就把这病传到脑子里去了。但是为什么是B细胞?是怎样传递的?我们不知道。

    这个不断扩展的对于我们的无知的了解,它真正让人迷惑的一个方面是它冲击了比弗兰西斯?克里克的那个教义还更中心的遗传学教义。它削弱了我从这本书的第一章就开始宣讲的内容之一,那就是:生物学的核心是数码式的。在这里,在蛋白侵染子基因上,我们确有像样的数码突变,用一个词代替了另一个词,但它导致的后果离开其他知识就是无法预测的。蛋白侵染子系统是个逻辑系统,不是数码系统。它的改变不是序列上的而是形状的改变,它还与剂量、位置以及是否在刮西风有关。这并不是说它没有决定作用。要说起开始发病的年龄来,CJD比起亨廷顿氏病还准确呢。过去的记录里曾有不居住在一起的兄弟姐妹在完全相同的年龄发病的。

    蛋白侵染子疾病是一种链式反应引起的,一个蛋白侵染子把它的邻居变成跟它自己一样的形状,它的邻居们再去改变其他的,就这样呈指数式地继续下去。它就像是1933年有一天列奥?希拉德(LeoSzilard)(匈牙利物理学家,核物理中链式反应的发明人。——译者注)在伦敦等着过马路的时候在他脑子里想出来的一个决定人类命运的图景:一个原子裂开放出两个中子,每个中子导致另外一个原子裂开又放出两个中子,这样继续下去——这个图景里的链式反应后来在广岛爆炸了。蛋白侵染子的链式反应当然比中子链式反应慢得多,但是它也同样有能力形成一个指数式的“爆炸”,还在普鲁西纳在80年代早期刚刚开始破解其中细节的时候,新几内亚的酷鲁流行病就是这种可能性的一个证据。但是,在离家更近的地方,一个更大的蛋白侵染子流行病已经开始了它的链式反应。这一次,牺牲品是牛。

    没有人确切地知道是在什么时候、什么地点、怎么样——又是那该死的神秘性——但是在70年代晚期或80年代早期的某个时候,英国牛肉食品的制造商开始把形状不对的蛋白侵染子加进了他们的产品。它也许是因为在牛脂降价之后工厂里的生产过程有所变化,也许是因为有更多的年老的羊找到了进入工厂的路,多谢慷慨的羊肉补贴。不管原因是什么,形状错误的蛋白侵染子进入了生产系统:它所需要的只是一只被高度感染的、被瘙痒症困扰的动物进入给牛做的牛食。老牛和羊的骨头和下水先要被煮沸消毒之后才能够被做成富含蛋白质的添加剂,给奶牛食用,但这没有用处。瘙痒症里的蛋白侵染子在煮沸之后仍然“存活”。

    把蛋白侵染子疾病传给一头牛的机会仍然非常小,但是如果有千万头牛,那就够了。一旦最初的几例“疯牛病”又重新进入食物链,被做成食物给其他的牛吃,链式反应就开始了。越来越多的蛋白侵染子进入了牛食饼,给新的小牛越来越高的剂量。较长的潜伏期意味着那些完蛋了的牛平均在5年之后才出现症状。在1986年底,当人们认识到最初的6个病例不同寻常的时候,在英国已经大约有3万头牛被感染上了,尽管此前没人知道这件事情。最终,在90年代晚期此种病几乎被全歼之前,有18万头牛死于牛海绵状脑病。

    在第一个病例被报告之后的一年之内,政府兽医那精湛的侦探工作就把受污染的饲料确认为问题根源。它是惟一符合所有细节的理论,还能解释奇怪的异常现象,比如说,在古恩希岛(Guernsey)发生的流行病比泽西岛(Jersey)早很多:这两个岛的饲料来自两个不同的供给商,一个用了很多肉和骨头,另一个用得比较少。到了1988年7月,反刍动物饲料禁令就已成了法律。很难想象专家和政府部门动作还能比这更快,除了事后诸葛亮的时候。到了1988年8月,索思伍德(Southwood)委员会的建议也被执行了,所有患有海绵状脑病的牛都被杀掉且不得再进入食物链。这时,发生了第一个大错:政府决定只给农民牛价值的50%作为补偿,这就给了农民一个动力去漠视疾病的征兆。但是,即使这个错误的后果也不像人们所想的那么严重:当补偿金额提高之后,汇报上来的病牛数字也没有大幅增加。

    特别规定的牛内脏禁令在一年之后也生效了,它禁止成年牛的脑子进入人类的食物,只在1990年才把被禁的牛脑扩展到小牛。这也许会发生得更早。但是,因为知道除非是直接往脑子里注射,其他物种很难染上羊瘙痒症,这样的措施在当时显得过于谨慎了。已经证明了通过食物是不可能让猴子染上人类蛋白侵染子疾病的,除非剂量特别大,而从牛到人的跳跃比从人到猴子的跳跃大得多。(人们的估计是,与通过食物吸收相比,向大脑里注射会把得病的危险提高1亿倍。)在那个时期,如果谁说食用牛肉不安全,那就会成为最大的不负责任。

    就科学家们所关心的来说,不同物种之间口腔传播的危险确实小得几乎不存在:如此之小,以至于在实验里如果不用几万只、几十万只动物就一个病例都得不到。但是这就是问题所在:这样的一个实验正在5000万只名字叫做英国人的“实验动物”上进行。在这样大的一个样本里,不可避免地会出现几个病例。对于政治家来说,安全是个绝对的概念,不是相对的。他们想看到的不是个别人患病,而是没有一个人患病。另外,牛海绵状脑病像它以前的所有蛋白侵染子疾病一样,被证明在让人吃惊这一点非同一般。猫因为吃了牛所吃的同样的有肉有骨头的饲料,也染上了病——从那时到现在,70只以上的家猫、三只猎豹、一只美洲豹、一只美洲斑豹,甚至一只老虎都因牛海绵状脑病死了。但是还没有出现过得了牛海绵状脑病的狗。人类会像狗那样有抵抗力还是会像猫科动物那样脆弱?

    到了1992年,牛的问题被有效地解决了,尽管流行病的高峰在那之后才出现,因为在受感染和出现症状之间有五年的潜伏期。1992年之后出生的牛很少有患牛海绵状脑病或有可能患上的。但是,人类的歇斯底里才刚刚开始。至此,政治家们所做的决定开始稳步地变得越来越愚蠢。感谢那个内脏的禁令,它使得食用牛肉比最近十年来的任何时候都更安全,但是也就是在那个时候人们开始拒食牛肉。

    在1996年3月,政府宣布,确有十个人死于蛋白侵染子疾病的一种,看起来很像是在那段危险的时期通过牛肉传染上的:它的一些症状与牛海绵状脑病相似,以前没有见过。公开的警告,加上媒体心甘情愿地煽风点火,就成了——很短暂的——极端。认为只在英国就会有几百万人死亡的狂想式预言也被大家认真对待。把牛变成了吃人兽这样的蠢事被广泛地描述成支持用有机肥料种田的证据。出现了很多阴谋理论:这个病由杀虫剂引起;科学家的嘴都被政客们封住了;真相被隐瞒了;对饲料业的管理规则被取消才是问题的原因;法国、爱尔兰、德国和其他国家也在封锁同样严重的流行病的消息。

    政府感到它必须做出反应,要出台一个更没用的禁令,不许食用两岁半以上任何年龄的牛:这个禁令更煽起了公众的警惕,摧毁了整个一个行业——把整个系统用那些命运已被注定的牛给堵死了。那一年的晚些时候,在欧洲政客们的坚持下,政府下了命令,“有选择地杀死”另外10万头牛,尽管明知这是一个会进一步疏远农民和消费者的没有意义的姿态。在马跑了之后它都不再把圈门关上了,它要在圈外面杀一只羊来做祭祀。不出所料,这个新的杀牛举动甚至没能取得让欧盟解除它禁止进口所有英国牛肉禁令效果,这个禁令其实主要是出于欧洲自身的经济利益。但是比这更糟的是接下来在1997年对带骨头的牛肉的禁止。人人皆知带骨牛肉的危险是微乎其微的——最多导致每四年有一例CJD。政府对于危险所采取的措施如此之集权化,尽管危险性比遭雷击还小,农业大臣也不准备让大家自己去做决定。事实上,可以预料到,政府对危险采取了这样一种荒谬的态度,它逼得治下的人们采取了更有危险的行动。在有些圈子里几乎出现了一种逆反心理。我就发现,在禁令即将生效的时候,我受邀请去吃红烧牛尾的次数比以前任何时候都多。

    在1996年一整年里,英国做好了迎接一场人类海绵状脑病流行的准备,但是从3月到年底只有6个人死于这种病。患病数字远远没有增加,相反似乎保持稳定甚至减少了。当我写这本书的时候,有多少人会死于新类型的CJD仍然不清楚。这个数字慢慢升到了50以上。每一个病例都是无法想象的家庭悲剧,但还算不上流行病。一开始,调查显示这个新类型CJD的受害人都是在危险的年头里特别热衷于吃肉的人,尽管受害者之一在几年以前当了素食者。但是这是一个幻象:当科学家们向那些被认为是死于CJD的病人(但是死后检查却表明他们是死于其他原因)的亲属询问死者生前的习惯时,他们发现了同样的食肉倾向:死者家属所讲述的记忆,心理上的多于实际的。

    受害者们一个共同的特点是他们几乎都属于同一种基因型——在第129个词上是双份的甲硫氨酸。也许,人数更多的杂合子与缬氨酸纯合子会被证明只不过是有更长的潜伏期:通过大脑内注射而传给猴子的牛海绵状脑病就比其他蛋白侵染子疾病有长得多的潜伏期。另一方面,因为绝大多数人类通过牛肉得到的传染都应该发生在1988年底以前,十年的时间已经是牛的平均潜伏期的两倍了,也许,物种之间的界限与在动物实验里看到的一样高,而流行病最坏的时候已经过去了。也可能新类型的CJD跟吃牛肉没有关系。很多人现在相信,有一种可能是从牛肉制品中得到的人体疫苗或其他医药制品给我们的危险更大,而这种可能性在80年代晚期被权威机构有点太过轻率地否定了。

    CJD曾经杀死过一辈子都吃素、从来没有动过手术、从来没有离开过英国、从来没有在农场或屠宰场干过活的人。蛋白侵染子最后的也是最大的一个神秘之处就是甚至在今天——当CJD的各种形式通过各种已知途径传播,包括吃人的习俗、手术、激素注射,吃牛肉也有可能——85%的CJD病例是“零星”的,意思是说,在目前它们无法用任何理由解释,只能说是偶然。这冒犯了我们的决定论,在这个理论里所有疾病都要有个病因,但是,我们并不生活在一个完全由决定论控制的世界。也许CJD就是以每100万人中有一例的概率自发地出现。

    蛋白侵染子让我们因自己的无知而感到卑微。我们没有想到存在一种不使用DNA的自我复制——根本就没有用数码信息。我们没有想象到有一种疾病有着如此深奥的秘密,从如此不可能的地方出现,被证明是如此致命。我们仍然不能完全理解一个多肽的折叠怎么就能导致这么大的混乱,或者蛋白链组成上的一个微不足道的改变怎么就能够有这么复杂的后果。正如两位蛋白侵染子专家所写:“个人的与家庭的悲剧、民族的灾难与经济的灾难,都可以追溯到一个小小的分子淘气的错误折叠。”

    第二十一号染色体 优化人种论

    (优化人种论,eugemcs,指把“有害”基因从人类基因组中淘汰掉,往往伴随着对某一部分人的歧视)

    我不知道有比人民本身更为安全的社会力量的保管处,如果我们认为他们所受的启蒙不足以使他们以健康的判断力来行使他们的控制权,补偿的办法不是把权力从他们那里拿走,而是为他们作判断提供信息。 ——托马斯•杰斐逊

    第二十一号染色体是人体里最小的染色体。因此,它应该被叫做第二十二号染色体,但是,叫了那个名字的染色体直到最近还被认为是更小的,这些名字现在已经固定了。也许因为第二十一号染色体是最小的染色体,可能有着最少的基因,它是惟——条能够在一个健康人体内有三份而不是两份的染色体。在所有其他情况下,有一整条多余的染色体会把人类基因组的平衡打乱得使身体根本无法正常发育。偶尔有儿童在出生时有一条多余的第十三号或第十八号染色体,但是他们最多活几天。出生时有一条多余的第二十一号染色体的儿童很健康、明显地很快乐,也注定能够活很多年。但是他们不能够被认为是——用那个带点轻蔑的词说——“正常”他们有唐氏综合症。他们的外表特征——矮小的身材、胖胖的身体、窄眼睛、愉快的脸-看就很明显。同样明显的是,他们头脑迟钝、性情温和、衰老得快,常常患上某种形式的早老性痴呆症,在40岁之前死去。

    唐氏综合症的婴儿通常有大龄母亲。随着母亲年龄的增加,生出一个唐氏综合症婴儿的机会迅速呈指数增长,从20岁时每2300个婴儿里有一个到40岁时每100个里就有一个。完全是出于这个原因,唐氏综合症的胚胎是基因筛选的主要受害者,或者说他们的母亲是基因筛选的主要使用者。在大多数国家里,现在为高龄母亲提供羊膜穿刺——或者是强制实行——来检查胚胎是否带有一条多余的染色体。如果是,母亲就会被建议流产,甚至被骗做了流产。给出的理由是尽管这些孩子有着愉快的举止,但大多数家长不希望成为唐氏综合症孩子的父母。如果你持有某种观点,

    你会把这看成是科学的良性用途的一个体现,它奇迹般地制止了那些身有残酷疾患之人的出生,又没有给谁带来痛苦。如果你持有另一种观点,你会把这看成是出于可疑的追求人类完美与对残疾人的不尊重、由政府公开鼓励的对神圣的人类生命进行的谋杀。你看,尽管50多年前纳粹的暴行使人们看到优化人种的做法的荒唐而对其失去了信任,但是在实际生活中它仍然在进行。

    这一章是关于遗传学历史上的阴暗面的,关于遗传学家庭里的“黑羊”——以基因纯洁性的名义而进行的谋杀、绝育和流产。

    优化人种论之父——弗兰西斯•高尔顿——在很多方面都与他的第一代表兄查尔斯•达尔文正相反。达尔文有条理、有耐心,288害羞,很传统,高尔顿却是知识的浅薄涉猎者,在性心理上一团糟,还爱炫耀。他也很聪明,在南部非洲探险过,研究过孪生子,搜集过统计学资料,幻想过乌托邦。今天,他的名声几乎与他的表兄一样大,只不过他的名声更像是臭名昭著而不是声名显赫。达尔文主义总有被变成政治信条的危险,高尔顿就这么做了。哲学家赫伯特•斯宾塞(Herbert Spencer)热情地拥抱了“适者生存”这个观念,并论述说它支持了经济学中的自由资本主义和维多利亚时代社会中的个人主义:他称之为社会达尔文主义。高尔顿的见解更缺乏诗意一些。如果像达尔文阐述的那样,物种被系统化的有选择的繁殖而改变,就像牛和信鸽那样,那么人类也可以通过这样的繁殖来改进自己。在某种意义上说,高尔顿是在求助于一个比达尔文主义要早的传统:18世纪繁殖牛的传统和比这更早的养殖各种苹果和玉米的传统。他叫嚷的是:让我们像改进了其他物种那样地改进我们自己这个物种吧。让我们只用人类最好的样本而不是最差的来传宗接代。在1885年,他发明了“优化人种”这个词来指称这样的生育方式。

    但是,“我们”是谁?在斯宾塞的个人主义世界里,它确确实实是我们每一个人:在这里,优化人种的含义是每一个人都努力挑选一个优秀的配偶——脑子好用身体健康的人。这与选择结婚对象时比较挑剔也没有什么不同一我们已经这样做了。但是,在高尔顿的世界里,“我们”有了一个更加“集体化”的含义。高尔顿的第一个也是最有影响的一个跟随者卡尔•皮尔逊(Karl Pearson,统计学家),是个激进的社会主义乌托邦派,也是一个优秀的统计学家。被德国不断发展的经济实力所吸引又对其感到畏惧,他把优化人种论变成了一种军国主义。必须优化人种的不是个体,而是国家。只有在公民中实行有选择的生育,英国才能够领先于它在欧洲大陆上的竞争对手。在谁能够生育谁不能够生育上国家必须有发言权。刚刚诞生的时候,优化人种论不是一门政治科学,它是以科学为借口的政治信条。

    到了1900年,优化人种论抓住了普通民众的想象。“优基因”这个名字突然成了时尚,平空冒出了公众对于有计划地生育的兴趣,同时,优化人种学会在英国各处都冒了出来。皮尔逊写信给高尔顿说:“如果孩子不健康,我听到大多数的中产阶级太太会说:‘噢,但是那不是一个优化人种的婚姻!波尔战争(1899到1902年英国军队与波尔人(从17世纪起居住在非洲南部并融入当地的荷兰农民的后代)在非洲南部进行的战争,英国的目的是掌握对该地区的控制。此次战争中,英国军队遇到了出乎预料的打击,这也是英国殖民思维变弱的开始)中军队征招来的战士素质非常差,以至于它在刺激了关于福利的争论的同时,也刺激了关于更好地生育的争论。

    相似的事情在德国也发生了,一种混合了弗雷德里克•尼采(Friedrich Nietzsche)的英雄哲学与恩斯特•海克尔的强调人的生物命运的学派,产生了一种激情,希望进化上的进步与经济和社会的进步同时发生。独裁哲学能如此容易地吸引人,意味着在德国,比在英国更甚,生物学与民族主义交织在一起了。但是在那时候它还仅仅是意识形态,还没有被付诸实施。

    到此,还没有什么危害。但是,重点迅速从鼓励最优秀的人以优化人种的名义生育转移到了阻止最“差”的人生育,以免把基因带坏。“最差的”很快就成了“心智虚弱”的意思,它包括了酗酒者、患有癫痫病的人、罪犯,以及智力低下者。在美国尤其如此。在1904年,高尔顿和皮尔逊的一个崇拜者查尔斯•达文波特(Charles Davenport)劝动了安德鲁•卡内基(Andrew Carnegie)(卡内基是以铁路和钢铁起家的美国实业家、慈善家,出资建立过很多研究机构),为自己建立了冷泉港实验室,专门研究优化人种论。达文波特是个顽固保守、精力无穷的人,他更关心的是怎样制止劣化人种的生育,而不是怎样鼓励优化人种的生育。他的“科学”,至少是过于简单化的。例如他曾说,既然孟德尔学说已经证明了遗传的颗粒结构,美国人的“大熔炉”思维就应该退休了;他还提出过海军的家庭可能有热爱海洋的基因。但是在政治上,达文波特既有技巧又有影响力。亨利•戈达德有一本书,是关于一个神秘的、智力有缺陷的、名字叫做卡里卡克(Kalli kak)的家庭的。在这本书里他强烈地论证了心智虚弱是有遗传的,而达文波特就从这本书里得到了帮助。达文波特和他的同盟者们逐渐说服了美国政界,让他们认为美国人的“质量”正处于极度危险之中。西奥多•罗斯福(Theodore Roosevelt,美国第26任总统,第32任总统富兰克林•罗斯福是他的本族侄子)说:“总有一天我们会意识到,正确的类型中的优秀公民最主要的责任,不能逃避的责任,这个法案是在他或她的身后给这世界留下他们的骨血。”错误类型的人就不必申请了。

    美国对于优化人种论的热情多是来自反对移民的感情。在那个时候,东欧与南欧迅速地向美国移民,很容易就会掀起疑神疑鬼的情绪,认为美国国内“更好”的盎格鲁一萨克逊人种正在被290稀释。支持优化人种的观点为那些出于传统的种族主义而希望控制移民的人提供了方便的掩饰。1924年的移民限制法案就是优化人种运动的直接结果。在以后的20年间,它把很多绝望的欧洲移民困在故国,推入了一个更加恶劣的命运,因为它拒绝给这些人提供一个在美国的新家。它在法律文书里呆了40年,没有得到修正。

    对于优化人种论的支持者们,限制移民可不是他们在法律上的惟一胜利。到了1911年,有六个州已经有了记录在案的法律,允许对心智不健康的人实行强制绝育。6年之后,又有9个州加入了他们的行列。理由是这样的:如果一个州可以处决罪犯,它当然可以剥夺人的生育权(好像头脑天真跟犯罪行为是同样的东西)。“在这些个人自由,或者是个体权利的例子中,……我们要谈的是登峰造极的愚蠢。这样的个体……没有权利生育像他们那样的人。”一个名叫W•J•罗宾逊(Robinson)的美国医生写道。

    最初,最高法院否决了很多绝育方面的法律,但是在1927年,它的立场改变了。在巴克控告贝尔(Buckvs Bell)一案中,最高法院判决,弗吉尼亚州政府可以给凯瑞·巴克(Carrie Buck)做绝育手术。巴克是一个17岁的女孩,居住在林池堡一个癫痫病人和弱智者的群落里,和她的妈妈爱玛以及女儿维维安住在一起。在进行了一次仓促草率的检查之后,只有7个月大[!]的维维安被宣布是个白痴,于是凯瑞被命令去做绝育手术。法官奥利弗•温代尔•霍姆斯(Oliver Wendell Holmes)在判决里有一句出了名的话:“三代白痴已经够了。”维维安幼年就死去了,(维维安在7岁时因病死去。她读了一年多小学,成绩中等)但是凯瑞活到了较大的年龄,是一个值得尊敬的女人,智力中等,空闲时间喜欢玩填字游戏。她的妹妹多瑞丝也被做了绝育手术,她试了很多年想要怀个孩子,最后才意识到,在没有征得她同意的情况下别人对她做了什么。直到70年代,弗吉尼亚州还在继续给那些有智力障碍的人做绝育手术。美国——个人自由的堡垒——按照1910年到1935年间通过的30多个州和联邦的法律,给十多万人做了绝育手术,理由是这些人“弱智”。

    但是,尽管美国是个先锋,其他国家却跟得很紧。瑞典给6万人做了绝育,加拿大、挪威、芬兰、爱沙尼亚和冰岛都把强制绝育放入了自己的法典,并付诸实施。最臭名昭著的是德国,先是给40万人做了绝育,后来又杀死了其中的很多人。在第二次世界大战期间的18个月内,有7万已经被做过绝育手术的德国精神病人被用毒气杀死,为的是腾出病床来给受伤的战士用。

    但是,英国从来没有通过一个优化人种的法律,在新教工业化国家里这几乎是惟一的。它从来没有通过一个法律允许政府干涉个人的生育权利。(注意此处的说法与后面的说法的区别)具体地说,英国从来没有过一个法律制止弱智人结婚,也从来没有一个英国法律允许政府以某人弱智为理由对其实行强制绝育。(这并不是要否认,医生和医院都有过连蒙带骗给病人做了绝育的行为,但是这些属于个人行为。)

    英国并没有什么特殊之处;在罗马天主教堂影响比较大的国家都没有优化人种的法律。荷兰人就避免了通过类似法律。苏联更关心迫害和杀掉聪明人而不是无趣的人,从来没有这样的法律条文。但是,英国之所以突出,是因为20世纪前40年优化人种学与优化人种的宣传很多——事实上,大部分一都来自英国。与其去问为什么那么多国家都跟从了这样残忍的行为,回过头来问一问这样一个问题会给人以启发:为什么英国抵挡住了这样做的诱惑?功劳应该给谁?

    功劳不是科学家的。科学家们在今天喜欢告诉自己,优化人种学一直是被看成伪科学并被真正的科学家所不屑的,特别是在孟德尔的主张被重新发现之后(它揭示了比明显的突变多得多的隐性突变的存在),但是,在有文字的记录里,这样的说法却没有什么证据。大多数科学家都很乐意接受在一个新的技术官僚体系中被尊为专家的奉承。他们一直在催促政府采取行动。(在德国,学术界一半以上的生物学家加入了纳粹党——比任何其他专业人员比例都高——而且没有一个人批评优化人种论。)

    一个说明问题的例子是罗纳德•费希尔爵士,又是一个现代统计学的奠基人(尽管高尔顿、皮尔逊和费希尔是伟大的统计学家,没有人就此认为统计学与遗传学一样危险)。费希尔是个真正的孟德尔主义者,不过他也是优化人种学会的副主席。他沉迷于被他自己称做是从高等阶级向穷人的“生育事件的重新分配”:穷人比富人生孩子更多这样一个事实。即使后来优化人种论的批判者,例如朱利安•赫胥黎(Julian Huxley)和J•B•S•霍尔丹,在1920年以前也是优化人种的支持者。他们抱怨的不是优化人种的原则,而是优化人种政策在美国实行过程中的粗鲁和有偏向性。

    社会主义者在制止优化人种论方面也没有功劳。尽管工党在30年代是反对优化人种的,在那之前社会主义运动总的来说给优化人种论提供了思想武器。你得使劲挖掘才能在英国有名的社会主义者中找到一个在20世纪的前30年对优化人种论表示过哪怕是相当模糊的一点反对。要在那个时候的费边社人物中找到支持优化人种的言论却超乎寻常的容易。H*G•韦尔斯(H*G.Wells)、J.M. 凯恩斯(J*M•Keynes)、乔治•伯纳德•萧(George Bernard Shaw)、海弗洛克•埃利斯(Havelock Ellis)、哈罗德•拉斯基(Harold Laski)、西德尼和贝亚翠丝•韦伯(Sidneyand Beatrice Webb )(H•G•韦尔斯是小说家,J•M•凯恩斯是经济学家,乔治•伯纳德•萧是作家萧伯纳,海弗洛克•埃利斯是性学家,哈罗德•拉斯基是政治学家,西德尼和贝亚翠丝•韦伯都是社会改革者)一都在关于迫切需要让蠢人和残疾人停止生育的方面说过很可怕的话。萧伯纳的剧本《人与超人》里的一个角色说:“作为懦弱者,我们用慈善的名义打败自然选择:作为懒汉,我们用体贴和道德的名义忽视人工选择。”

    H•G•韦尔斯的作品尤其充满了有滋有味的话:“就像人们带有的致病微生物,或者一个人在墙壁很薄的房间里发出的噪声一样,人们带到这世界上来的孩子们也不仅仅属于父母自己”,或者是:“密密麻麻的黑人、棕色人、肮脏的白人以及黄种人……都必须走开。”或者:“已经变得明显,人类群体从总体上看,要比他们所拥有的未来低劣……给他们平等就是把自己降到他们的水平,保护和珍视他们则会被他们的多产所淹没。”他又安慰人地加上一句:“所有这样的杀戮都要先施麻醉剂。”(事实不是这样。)

    社会主义者们有着对计划的信心,准备好了把国家权力置于个人之上,他们是优化人种理论的天然接受者。生育也到了国有293化的时候了。优化人种论在费边社皮尔森的朋友们中间首先扎下根来成了一种受欢迎的论调,优化人种论是他们的社会主义磨坊里的麦子。优化人种论是进步的哲学,又呼吁了国家的作用。

    很快,保守派和自由派都同样地有了激情。前总理阿瑟•鲍尔弗(Arthur Balfour)主持了1912年在伦敦召开的第一届世界优化人种大会,赞助会议的副主席们包括最高法院的大法官和温斯顿•丘吉尔(Winston Churchill)。牛津联合会(世界上最著名的辩论社,创建于1823年,活跃至今。常邀请著名人物对重要事件发表演说)在1911年以二比一的比例通过支持优化人种论的原则。像丘吉尔所说:“心智虚弱之人的成倍增加”是“对于一个种族非常危险的事情”。

    确切地说,还是有几个孤独的反对声音的。一两个知识分子保持了怀疑态度,在他们当中有希莱尔•贝洛克(ffilaire Belloc)和G•K•切斯特顿(Chesterton)(希莱尔•贝洛克:出生于法国、在英国生活的作家、政治家;G•K •切斯特顿,英国作家、诗人),他们写道:“优化人种论者发现了把硬心肠和软脑子结合起来的方法。”但是,大多数英国人是支持优化人种的法律的,这一点无可置疑。

    有两个时刻英国几乎要通过优化人种的法律了:1913年和1934年。在第一次,这样的企图被孤胆反对者逆着传统认识的潮流给挫败了。1904年,政府设立了一个“照顾与控制弱智人”的皇家委员会,由拉德纳(Radnor)伯爵指导。在1908年,当汇报工作的时候,它顽固地坚持“智力低下是遗传”的立场,这一点都不奇怪,因为委员会的很多成员都是收了钱的优化人种论者。最近格里*安德森(Gerry Anderson)在剑桥大学所作的论文里阐述,在那之后有一个时期各个游说组织开展了长期的游说,敦促政府采取行动。内政部接到了来自各郡、各市议会和各教育委员会的几百份决议,敦促通过一个法案限制“不适者”的生育。新的优化人种教育学会对总理进行了“狂轰滥炸”,并与内政大臣开会以推进自己的主张。

    在一段时间内,什么也没发生。内政大臣赫伯特•戈莱德斯通(Herbert Gladstone)不为所动。但是,当他在1910年被温斯顿•丘吉尔接替之后,优化人种论终于在内阁的会议桌上有了一个积极的代表。丘吉尔在1909年已经把阿尔弗雷德•特雷德戈尔德(Alfred Tredgold)的一个支持优化人种的演讲以内阁文件的形式散发了。在1910年12月,在内政部就职之后,丘吉尔写信给总理赫伯特•阿斯齐斯(Herbert Asquith),敦促尽快制定优化人种的法律,结束时写道:“我感到,在另一年过去之前,疯狂之流的源泉应该被切断与封住。”他希望那些精神病人的“诅咒随着他们死去”。为了防止对他的意思还有怀疑,威尔弗里德•斯克恩•布伦特(Wilfrid Scawen Blunt)(威尔弗里德•斯克恩•布伦特:19世纪末20世纪初英国作家)写道,丘吉尔那时已经在私下里宣传用X射线和手术的方法给那些精神“不合适”的人做绝育。

    1910年与1911年的宪法危机使得丘吉尔没有能够提出自己的提案,然后他就调到了海军部。但是到了1912年,立法的声浪又复活了,保守党的一名高层人物,格寿姆•斯图尔特(Gershom Stewart),在这个问题上提出了自己以个人成员身份的提案,最终强扭了政府的手。1912年,新的内政大臣里吉诺德•麦克纳(Reginold Mc Kenna)有些不情愿地提出了一个政府法律草案:精神缺陷法案。这个法案将会限制弱智者的生育,并惩罚那些与有精神残疾者结婚的人。一个公开的秘密是,一旦具备可行性,这个法案就可以被修改为允许强制绝育。

    有一个人应该特别提及,因为他发动了对这个法案的反对:一个激进的自由派议会成员,他的名字如雷贯耳——事实上这也与故事有关——乔赛亚•韦奇伍德(Josiah Wedg wood)。他是多次与达尔文家族联姻的著名的工业家族的后代。查尔斯•达尔文的外祖父、岳父以及一个姐夫(同时也是他妻子的哥哥)都叫乔赛亚•韦奇伍德。议员乔赛亚的职业是海军工程师。在1906年自由派大获全胜的时候他被选入议会,但是后来加入了工党,于1942年进入上议院。[达尔文的儿子伦纳德(Leonard),在那时是优化人种学会的主席。

    韦奇伍德非常不喜欢优化人种论。他指责优化人种学会是在试图“把劳动阶层像牛一样繁殖”,他还断言,遗传定律“太没有确定性,无法让人把信心建立在某一个特定学说上,更不要说根据它来立法了”。但是,他的主要反对意见是以个人自由为基础。他对一个给予了政府用强制手段把孩子从自己家中领走的权力的法案很反感,因为其条文规定,警察在接到公众举报某人“心智虚弱”时有责任做出反应。他的动机不是社会公正,而是个人自由:其他保守党的自由派,例如罗伯特•塞西尔(Robert Cecil))爵士,加入了他的行列。他们的共同目标是个人利益与政府的对抗。

    真正让韦奇伍德如鲠在喉的条文是,“鉴于整个社会的利益,(心智虚弱的)人被剥夺生育后代的机会是合乎意愿的”。用韦奇伍德的话来说,这是“在所有被提倡过的事情中最令人厌憎的”,而且不是“我们有权期望一个自由派政府所能做到的对于治下人民自由的关切和在个人面对政府时给予个人的保护”。

    由于韦奇伍德的攻击的效力,政府收回了这个法案,第二年又以温和得多的形式重新提出。关键的是,这一次它略去了“任何可能被诠释为优化人种论的提法”(用麦克纳的话说),那些限制生育与婚姻的得罪人的条文被去掉了。韦奇伍德仍然反对这一法案,他用了整整两个晚上,靠巧克力支撑着,把200多条补充条款放到桌面上,以继续了自己对草案的攻击。但是,当他的支持者减少到只有四个人的时候,他放弃了,草案被通过,成为了法律。(作者此处的叙述与第299页有矛盾)

    韦奇伍德也许认为自己失败了。可以强制执行的对于精神病人的关押成了英国生活的一个特征,并在实际上使他们更不容易生育后代。但是真实情况是他不仅阻止了优化人种的措施被采用,而且他还发出了警告信号给将来任何认为优化人种立法值得考虑的政府。并且,他指出了整个优化人种工程中处于中心位置的漏洞。这个漏洞不是基于错误的科学理论,也不是因为优化人种在实际中不可行,而是它归根结底是对人的压制而且很残酷,因为它要求政府的权力得到保证,凌驾于个人权利之上。

    在30年代早期,随着萧条时期失业人数的增加,优化人种论死灰复燃。在英国,人们开始荒唐地把高失业率与贫困怪罪到最初的优化人种论者预言过的种族的退化,优化人种学会的会员数达到了创纪录的水平。就是在那个时候,多数国家通过了优化人种的法律。例如,瑞典在1934年开始具体实施它那强制绝育的法律,德国亦然。

    希望英国通过绝育法律的压力已经在一些年里增加了,政府的一个被称为伍德报告的关于精神缺陷的文件帮了忙,这个文件的结论是精神疾病在增加,而原因部分是因为精神缺陷者的高生育率(提交这个报告的委员会小心地定义了三类精神缺陷:白痴、弱智和“心智虚弱”)。但是,当一个工党议员以私人名义递交给下议院的优化人种提案被拒之后,向政府施压的优化人种组织改变了策略,把它们的注意力转向社会服务部门。卫生部被说服了,聘请了一个委员会,在劳伦斯•布罗克(Lawrence Brock)爵士领导下分析为精神缺陷者绝育的提议。

    布罗克委员会虽然出自于行政系统,但从一开始就有派性。据一位现代历史学家说,它的大多数成员“一点都不愿意去不带感情地检验那些相互矛盾和下不了结论的证据”。这个委员会接受了精神缺陷来自遗传的观点,忽略了与此观点不符的证据,“跟从了”(用它自己的原话)那些支持此观点的证据。它接受了精神缺陷者生育多这样一个观点,全然不顾只凭已有证据还不足以下结论,它只是为了便于满足反对者才“拒绝”了强制绝育一它轻描淡写地放过了一个问题,即怎样从精神有缺陷的人那里得到绝育许可。在1931年出版的一本生物学普及读物里,有一句引用的话道出了游戏内幕:“可以通过贿赂或其他说服的方法使很多这样的低等人接受自愿绝育。”

    布罗克报告是彻头彻尾的宣传,粉饰得却像是一个不带个人偏见的专家评估。就像在最近被指出的,在制造一个由“专家”们一致同意并需要采取紧急措施的人工合成的危机时,它所使用的方法为20世纪后期国际上社会服务人士们在全球变暖问题(全球变暖是20世纪环保人士最关注的现象之一。但是,不少科学家、经济学家与其他专家认为全球变暖现象没有那么严重,有一些人士指责一些环保人士为了捍卫自己的主张而夸大事实,对与自己观点不符的证据视而不见)上的行为开了一个先例。

    这个报告的目的是要引出一个绝育法案,但是这样的法案却一直没有见天日。这一次,主要原因倒不是有一个像韦奇伍德那样的坚定的反对者,而是因为全社会的意见已经有所不同。很多科学家改变了自己的想法,引人注目的是J•B•S•霍尔丹。原因部分是因为通过玛格丽特•米德等人与心理学中的行为学派,用环境解释人类本性的说法开始为公众所知,影响也与日倶增。工党在那时是坚定地反对优化人种的,它把这看成是劳动者的一场阶级斗争。在一些圈子里,天主教会的反对也很有影响。

    让人吃惊的是,直到1938年,才有报告从德国渗透过来,说明强制绝育在现实里意味着什么。布罗克委员会曾经不够明智地赞赏过纳粹的绝育法律,这样的法律是在1934年一月开始实行的。在1938年事情变得清楚了,这样一个法律是无法容忍的对个人自由的侵犯,也是迫害别人的借口。在英国,良好的判断占了上风。

    这一段优化人种论的简短历史让我得到了一个不可动摇的结论。优化人种论的错误不在于它背后的科学,而在于强制的方法。优化人种与任何其他把社会利益置于个人权利之上的计划并无不同。它是人道上的罪行,不是科学上的罪行。毫无疑问,优化人种的生育方法会在人类中“成功”,就像它在狗和奶牛那里都成功了一样。通过有选择的生育是有可能来减少精神疾病的发生率、提高人类的健康的。但是,也没有什么疑问,这只能通过漫长的过程来完成,它的代价——残酷、不公正与对人的压制——无比巨大。卡尔•皮尔森有一次在回答韦奇伍德时说:“社会的就是正确的,除此之外没有其他定义。”这个骇人的说法应该成为优化人种论的墓志铭。

    是的,当我们在报纸上读到智慧基因、生殖细胞基因疗法、产前检查和筛选的时候,我们无法不从骨子里感觉到优化人种论还没有死。正如我在第六号染色体那一章里讲述的,高尔顿的信念——人的本性大多都有遗传因素一又重新成为了时尚,这一次,它有了更好的——尽管仍然无法下定论的——事实依据。在今天,基因筛选越来越使得父母能够选择他们孩子的基因了。例如,哲学家菲利普•基切尔(Philip Kitcher)(菲利普•基切尔:当代美国哲学家,研究领域主要是科学和数学哲学)就把基因筛选叫做“自由优化人种”:“每一个人都要成为他或她自己的优化人种师,利用现有的基因检测手段去做出他或她认为正确的生育方面的决定。

    用这个标准来看,优化人种每天都在全世界的医院里发生,它最最常见的受害者是那些带有一条多余的二十一号染色体的胚胎,这些胚胎原本是会出生为有唐氏综合症的婴儿。如果他们出生,在大多数情况下他们会有一个短暂却很快乐的一生——这是他们先天条件的属性。但是,对于一个依靠母体为生又没有情感的胚胎,不见得要在没有被生出来时就被杀死。现在,我们就像接到紧急通知一样飞快地进入了关于流产的争论:母亲是否有权流产掉一个孩子,或者政府是否有权制止她这样做。这是一个旧的争论了。基因的知识使她有了更多理由去做流产。在胚胎中选择一个具有某种特殊能力的而不是去掉一个缺乏能力的,也可能离我们不远了。选择男孩而把女孩流产掉,已经是羊膜穿刺的不正当使用了,这在印度次大陆上尤其猖獗。

    我们拒绝了政府的优化人种政策只是为了落入私人优化人种的陷阱吗?父母们也许会受到各种压力而接受自愿的人种优化,这些压力可能来自医生、来自医疗保险公司、来自社会文化。有很多故事讲述的是直到70年代还有妇女被他们的医生诱骗去做绝育手术,因为她们带有一个遗传病的基因。但是,如果政府要以基因筛选可能被不正当使用为理由把它禁止,它会冒增加世界上的痛苦的危险:把基因筛选列为非法与把它强制实行是同样残忍的。它是一个个人的决定,不是应该由技术官僚来决定的。基切299尔肯定是这样想的:“至于人们想要得到哪些特性、避免哪些特性,这当然是他们自己的事情。”詹姆斯•沃森也这样想:“这些事情应该放得离那些认为自己才最有见识的人远远的……我想看到把关于基因的决定放到用户手里,政府可不是用户。”

    尽管还有少数边缘上的科学家担心种族和人类遗传上的退化,大多数科学家现在都认识到了个体的幸福应该比群体的幸福更有优先权。在基因筛选与优化人种论者在他们的巅峰期想要的东西之间,有着巨大的区别,这就在于:基因筛选是要让人以个人的身份用个人的要求来做出个人的选择。优化人种论则是要把这样的决定国有化,让人民不是为了自己而是为了国家来生育。在忙着规定“我们”在基因的新世界里应该允许什么不允许什么的时候,这是一个常常被忽略了的区别。“我们”是谁?是个体,还是有着集体利益的国家和种族?

    比较一下现代仍然实行的“优化人种”的例子。在美国,就像我在第十三号染色体那一章里讲过的,犹太人遗传疾病防治委员会为学龄儿童验血,在将来,如果想结婚的双方都带有某一个特定的致病基因的一种形式,委员会就要劝阻。这是一个完全自愿的政策。尽管它被批判成是“优化人种”,但是它没有任何强制的措施。

    优化人种历史的很多现代版本都把它表达成是一个科学、尤其是遗传学、不受控制会有多么危险的例子,其实它更多地是一个政府不受控制会有多么危险的例子。

    第二十二号染色体 自由意志

    休谟之叉:我们的行为要么是事先已经被决定了的,这样我们就不必为它们负责;要么是偶然事件的结果,这样我们也不必为它们负责。——《牛津哲学词典》

    当这本书的第一稿快要完成的时候,也就是新千年到来之前的几个月,传来了一个重要的消息。在剑桥附近的桑格中心,第二十二号染色体的全部序列已被测完,这是第一条被从头读到尾的人类染色体。在人类自传的第二十二章里的所有1100万个词已经被读出来,并写成了英文:3340万个A、C、G和T。

    在靠近第二十二号染色体长臂顶端的地方,有一个大而复杂的基因,充满了重要性,它叫做HFW。它有14个外显子,合在一起拼出了一篇6000多字母长的文字。在转录之后,这篇文字被奇怪的RNA剪接过程剪辑一番,造出一个非常复杂的蛋白质,却只在大脑前额叶的一小部分区域里表达。相当过分地概括一下,这个蛋白质的功能是把自由意志赐予人类。没有HFW,我们就不会有自由意志。

    前一段是瞎编的。在第二十二条染色体上没有HFW基因,在其他染色体上也没有。在花了二十二章的篇幅没完没了地讲事实之后,我就是想要骗骗你。我在身为非小说作者而感到的压力下撑不住了,没法再抵御想编些东西出来的诱惑。

    但是,“我”是谁?是被一种傻傻的冲动战胜、决定写一段瞎编的情节的那个我吗?我是一个被我的基因组合在一起的生物体。它们事先确定了我的体型,给了我每只手上的五个手指和嘴里的32颗牙,设置了我的语言能力,规定了我的智力能力中的大约一半。当我记忆什么事情的时候,是基因在为我做这件事,把CREB系统打开,把记忆储存起来。它们给我造了一个大脑,把日常工作的职责分派给它。它们还给了我一个明显的印象,就是我能够自由地决定我想怎样行动。简单的自省告诉我,没有什么事是我“帮不了我自己”的。同样,也没有什么告诉我,我必须要做什么事不许做什么事。我能够现在就跳进我的汽车开到爱丁堡去,原因没有别的,就是我想去。我也能够编出一段小说般的文字。我是一个自由的力量,有自由的意志。

    自由意志从何而来呢?很清楚地,它不是来自我的基因,否则就不是自由意志了。根据许多人的说法,答案是它来自社会、文化和后天培养。根据这个说法,自由就等于我们的天性中没有被基因决定的那部分,是一种在我们的基因干完了它们那暴君的恶行之后才开的花。我们可以到达我们那基因决定论之上去摘取那神秘的花:自由。

    有一类科学书籍作者有着一个悠久的传统,他们说生物学的世界被分成了两派:相信基因决定论的人和相信自由的人。但是,同样是这些作者,他们否定基因决定论,只是因为他们建立了其他形式的生物决定论以代替它——父母影响决定论或社会环境决定论。很奇怪的是有这么多作者捍卫人类的尊严不受基因的统治,却似乎很高兴接受我们的环境的统治。有一次在某出版物上我受到了批评,因为它声称我说过(其实我没有说过)所有行为都是由基因决定的。这个作者进一步给了一个例子以说明行为不是由基因决定的:广为人知的一件事,是虐待儿童的人往往自己在小时候也受过虐待,这就是他们日后行为的原因。他似乎没有意识到,这个说法同样是决定论,而且对于那些已经受了很多苦头的人,这比我说过的任何话都是更缺乏同情、更带偏见的谴责。他是在主张:虐待儿童的人的孩子很可能也会变成虐待儿童的人,他们自己无法改变这一结局。他没有意识到他是在使用双重标准:在用基因解释行为时要求有严格的证明,却轻易就接受了用社会因素来解释行为的说法。

    有一种粗糙的划分法:基因是不可更改的编程员,是加尔文主义的命运前定,而环境则是自由意志的家。这是错误的。在塑造性格与能力方面最有力量的环境因素之一是子宫里的总体状况,你无法改变它。正如我在六号染色体那一章中提出的,有些智力能力方面的基因也许是欲望方面的基因,而不是能力方面的基因:它们把它们的拥有者带上一条自愿学习的路。同样的效果也可以由一个会激励人的老师达到。换句话说,天性比起后天培养更有可塑性。

    阿道斯·赫胥黎(Aldous Huxley)的《美丽的新世界》写于优化人种的热情达到顶峰的20年代,它呈现给我们的是一个恐怖的世界:整齐划一,强制的控制,没有个人的差异。每个人都温顺、自愿地接受他或她在等级制度里的位置——从?到埃普西隆(从最高到最低)——顺从地工作,并享受社会希望他或她享受的娱乐活动。“美丽的新世界”这个词现在已经有了这样的意义:集权统治与先进的科学手挽手实现的恶劣的社会。

    所以,让人吃惊的就是当你读了赫胥黎的书之后你会发现,里面几乎没有任何优化人种的东西。?和埃普西隆不是天生的,而是产生于在人工子宫里的化学调节以及其后的巴甫洛夫式的条件反射训练和洗脑,并在成人之后靠类似于鸦片的药物维持。换句话说,这个糟糕的社会与天性没有一点关系,却全部来自于后天的培养。它是一个环境的地狱,不是基因的地狱。每个人的命运都是注定的,被他们的严格受控的环境,而不是被他们的基因。这确实是生物决定论,但却不是基因决定论。赫胥黎的天才在于他认识到了一个后天培养占主导地位的世界事实上会多么可怕。确实,30年代统治了德国的极端的基因决定论者与同一时期统治了苏联的极端的环境决定论者,谁给人们带来了更大的痛苦,还很难说。我们所知道的只是,两个极端都很恐怖。

    幸运的是,我们抵抗洗脑的能力相当辉煌。不管父母和政客们怎么告诉年轻人吸烟对他们有害,他们还是要吸烟。事实上,正是因为成年人给他们宣讲吸烟的危害,才使得吸烟有这么大的吸引力。我们从遗传得到一种仇视权威的倾向,特别是在我们的青少年时期,我们用它来保护我们的本性,提防独裁者、老师、虐待人的后爹后妈以及政府的宣传攻势,

    另外,我们现在知道,几乎所有用来显示父母影响塑造我们性格的证据都有缺陷。在虐待儿童与在童年曾经受过虐待中间,确实有一定联系,但是它可以完全用遗传的性格特点来解释。虐待儿童的人,他们的孩子从遗传得到了虐待他们之人的性格特点。研究发现,在把这个因素考虑到之后,后天因素就没有什么决定作用了。例如,虐待孩子者收养的孩子不会成为虐待孩子的人。

    惊人的是,同样的现象在你听到过的几乎所有的标准的“社会的阴谋”里都是如此。罪犯生罪犯,离婚的人养出离婚的孩子,问题父母养出问题儿童,肥胖的父母养出肥胖的孩子。朱迪斯·里奇·哈里斯(Judith Rich Harris,心理学家)在她写作心理学课本的漫长的职业生涯中曾经相信了所有这些说法,但是在几年前她突然对此产生了怀疑。她的发现让她感到震惊与不解。因为几乎没有任何实验考虑了遗传的因素,在所有这些研究里没有任何因果关系的证据。对于这样的忽略甚至都没有人提一句:在这些研究里两件事情之间有联系被经常地说成是有因果关系。但是在每一个现象里,从行为遗传学研究里都得到了新的、有力的证据,反对里奇?哈里斯所称的“后天培养假说”。例如,关于孪生子离婚率的研究显示,遗传能够解释离婚率的一半区别,每一个孪生子遇到的独特的社会因素解释了另一半,而他们共同的家庭环境一点作用都没有。换句话说,如果你是成长在一个破裂的家庭,你离婚的可能性并不高于平均水平——除非你的亲生父母离了婚。在丹麦,对于被领养孩子的犯罪纪录的研究显示,他们是否犯罪与亲生父母的犯罪纪录有很大关系,与养父母则只有很小的关系——这很小的关系,在考虑了同伴效应之后也消失了,这个效应就是,这些被领养的孩子是否犯罪与他们的养父母是居住在犯罪率高的街区还是犯罪率低的街区有关。

    事实上,现在已经清楚了,孩子对于父母的非遗传影响比父母给孩子的非遗传影响还要大。正如我在X和Y染色体那一章里提出的,传统说法一般认为与孩子疏远的父亲和过分保护的母亲把孩子变成了同性恋。现在认为更可能的是反过来:觉察到儿子对于男性关心的东西不太感兴趣之后,父亲就疏远了儿子,母亲则用过分保护儿子来弥补。同样地,自闭症儿童确实通常有冷淡的母亲;但这是果而不是因:母亲长年以来努力想要与一个自闭症孩子沟通,却没有任何回报,她被搞得精疲力竭,最后终于放弃了。

    里奇?哈里斯有系统地摧毁了作为20世纪社会科学基础的教条之一:父母塑造孩子的性格与文化的假说。在西格蒙德·弗洛伊德的心理学、约翰·沃森(John Watson)的行为学派和玛格丽特?米德的人类学中,父母养育的决定作用从来没有被检验过,只是一种假设。但是来自孪生子的研究、来自移民家庭孩子以及被领养孩子的研究现在就在我们面前:人们从他们的基因和他们的同伴那里得到他们的性格,而不是从他们的父母那里。

    在70年代,E.O. 威尔逊(Wilson,生物学家)的著作《社会生物学》出版之后,出现了对于遗传影响行为说法的一个猛烈的反击,领头的是威尔逊的哈佛同事,理查德?路文廷和斯蒂芬?杰?古尔德。他们中意的口号被路文廷用做自己一本书的书名,教条得不留任何回旋余地:“不在我们的基因里!”在那个时候,“基因对行为只有很少影响或没有影响”这样的论断仍然只是一个合乎情理的假设。在25年的行为遗传学研究之后,这个观点已经不再成立了。基因确实影响行为。

    但是,即使有了这些发现,环境仍然相当重要——在所有行为中也许环境的总和都比基因重要。但是在环境的影响中只有小得惊人的一部分是父母影响的作用。这不是要否认父母有作用或者是孩子没有父母也行。事实上,就像里奇?哈里斯所说,如果这样否认就太荒谬了。父母塑造家庭环境,而一个愉快的家庭环境本身就是好事。你不需要相信快乐决定性格,也会同意拥有快乐是好事。但是儿童似乎不让家庭环境影响他们离开家之后的性格,也不让它影响自己在成年之后生活里的性格。里奇?哈里斯在观察之后做出了一个关键的结论:我们都把自己生活中的公共生活带和私人生活带分开,而且我们并不见得会把在一个带里学到的教训或表现的性格拿到另外一个带里。我们很容易地在两个带之间切换。这样,我们学到了我们同伴的语言(对于移民来说)或口音并在今后的生活中使用,而不是我们父母的。文化自动地从一个儿童的小群体传到另一个,而不是从父母传到子女——举一个例子说,这就是为什么在成年人中推动性别更加平等的运动对于儿童活动场上自愿的按性别分组没有任何影响。每个家长都知道,小孩喜欢模仿同伴而不是家长。心理学与社会学和人类学一样,曾经被那些对遗传因素有着强烈反感的人所主导;但是它再也不能继续这样无知下去了。

    我的目的并不是要重复一遍天性与后天培养的辩论,这个题目我在第六号染色体那一章里谈过了。我是想引起人们对这个事实的注意:即使后天培养的假说被证明是正确的,它也不会减少外界因素对行为的决定性。通过强调跟从于同伴对人的性格会有多么大的影响,里奇?哈里斯彻底揭示了环境决定性比遗传决定性更应该引起警觉。它就是洗脑。它远没有给自由意志留下空间,而是减少了空间。一个孩子在不顾父母和兄弟姐妹的压力而表达自己的(部分是遗传的)性格时,他至少是在遵从内在的力量,而不是其他什么人的。

    所以,靠着用社会因素来寻找同情并没有躲开决定论。事情的结果要么有原因,要么没有原因。如果我因为童年时期发生的什么事而变得胆小,这并不比一个胆小的基因具有更少的决定性。更大的错误不是把决定性与基因等同起来,而是把决定性当成是不可避免的。《不在我们的基因里》一书的三位作者,史蒂文·罗斯(Steven Rose)、利昂·卡民(Leon Kamin)和理查德?路文廷说:“对于生物决定论者来说,那古老的信条‘你无法改变人的本性’是人类状况的开始也是结束。”但是这个等式——决定论等于宿命论——是没有根据的,这是人们都理解得很清楚的,很难发现这三位批评家到底是在指控哪个假想敌。

    决定论等于宿命论之所以是没有根据的,原因如下。假设你生病了,但是你通过推理认为没有必要打电话找医生,因为你要么会痊愈,要么不会,医生是多余的。但是,这就忽略了一个可能性,那就是:你痊愈也许是因为你看了医生,不痊愈也许是因为你没有看医生。随之而来的是,决定论并不决定你可以做什么不可以做什么。决定论是向后去看你现在状况的原因,并不是向前去看它的后果。

    但是,这样的神话继续流传:遗传决定性是比环境决定性更不容易改变的命运。就像詹姆斯?沃森所说的:“我们谈论基因疗法,似乎它能够改变一个人的命运,但是你也可以用帮一个人还清债务的方法改变他的命运。”了解遗传知识的惟一目的就是为了(主要利用非遗传的方法)干涉、弥补遗传的缺陷。我已经列举了众多例子,说明基因突变的发现远远没有导致宿命论,而是导致了减轻它们影响的双倍的努力。就像我在六号染色体那一章里提出的那样,当阅读困难症终于被认做是一个真实的也许是遗传的问题之后,家长、老师和政府的反应不是宿命式的。没有人说,因为阅读困难症是遗传病,所以它是不可治愈的,从现在起被诊断为有阅读困难症的孩子都应该被允许当文盲。发生的事情与此正相反:为阅读困难症孩子发展出了有弥补措施的教育方法,效果相当令人叹服。与此类似,我在第十一号染色体那一章里说过,连心理疗法医生都发现,害羞的遗传解释能够帮助它的治疗。通过让害羞的人相信他们的害羞是内在的、“真实”的,能够帮助他们克服这个问题。

    生物决定论威胁政治自由的说法也是说不通的。正如山姆·布瑞坦(Sam Brittan,经济学家)曾经说过的:“自由的反面是强制,不是因果决定。”我们珍惜政治自由是因为它允许我们拥有个人作决定的权利,而不是反过来。尽管我们嘴上说我们热爱自由意志,当需要“赌注”的时候我们却抓住决定论想用它来救我们。1994年2月,一个美国人斯蒂芬·莫布利(Stephen Mobley)被判决谋杀了一个比萨饼店的经理约翰·科林斯(John Collins),并被判死刑。他的律师在上诉要求把死刑改判为无期徒刑时,提出遗传作为辩护。他们说,莫布利来自一个几代都出骗子和罪犯的家庭。也许他杀了科林斯是因为他的基因让他这么干的。“他”对此没有责任,他只是由遗传决定的一个自动化机器。

    莫布利愉快地放弃了他拥有自由意志的想法,他希望别人相信他没有自由意志。每个用“精神疯狂”或“应负责任应该减少”来为自己辩护的罪犯都是这样希望的。每一个因嫉妒而杀死了自己不忠的配偶的人也是这样希望的,他们为自己辩护的理由是“短暂疯狂”或“正当的愤怒”。每一个大亨在被指控造假欺骗持股者的时候也是这样希望的,他们的借口是“早老性痴呆症”。事实上,每一个孩子,当他在游戏场上说,是他的朋友让他干的,他也是这样希望的。我们中的每一个人,如果在心理医生一点隐晦的暗示下就心甘情愿同意我们现在的不快乐都应该怪我们的父母,也是这样希望的。一个把高犯罪率归罪到社区环境上的政客也是这样希望的。当一个经济学家肯定地说消费者追求的是商品功能的极值时,他也是这样希望的。当一个传记作家试图解释他书中人物的性格是怎样被具有改变人的力量的体验而塑造的时候,他也是这样希望的。每一个去算命的人都是这样希望的。在每一个例子里都有一种自愿、快乐和感激的对于决定论的拥抱。对于自由意志我们远远不是热爱,我们似乎是一个只要有可能就会跳起来把它放弃的物种。

    一个人对自己的行为负全责是一个有必要的虚构故事,没有它,法律就站立不稳,但是它照样是一个虚构故事。在某种程度上说,你的行为是出于你的性格,你是要为自己行为负责的;但是,出于性格的行为只不过是在表达那许多决定了性格的因素。大卫·休谟(David Hume,18世纪哲学家)发现自己被这个后来被称为休谟之叉(Hume’s fork)的两难问题难住了。我们的行为要么是被事先决定的,在这种情况下我们不必为它负责;要么我们的行为是随机的,在这种情况下我们也不必为它负责。在每种情况下,常识被否定了,社会秩序无法形成。

    基督教已经与这些问题纠缠了两千年,其他宗教的神学家们还要更长。上帝似乎是否认自由意志的,这几乎是定义,否则他就不是万能的了。但是,基督教尤其努力地试图保存自由意志的概念,因为没有它,就不能让人类对自己的行为负责。如果没有责任,罪恶的概念就是一个笑话,而地狱就成了来自于公正的上帝的一个该诅咒的不公正。现代基督教的共识是上帝把自由意志加诸我们,使得我们能够选择让自己的生活充满美德还是罪恶。

    几位著名的进化生物学家最近提出,宗教信仰是人类普遍拥有的本能的体现——在某种意义上说,有一组基因是关于信仰上帝或神祇的。(一位神经生物学家甚至声称他在大脑颞叶发现了一个专门的区域,在信仰宗教的人里比在其他人里体积更大更活跃;过分的宗教情结是有些种类的颞叶癫痫的一个特征。)宗教的本能也许仅仅是本能的迷信的一个副产品,这样的迷信假定所有事件,甚至是雷雨,都有一个带有某人意志的原因。这样的迷信在石器时代可以是很有用的。当一块大石头滚下坡几乎把你压扁的时候,如果你听信阴谋理论而认为这是有人把它推下来的,就比认为它只是偶然事件要更安全。我们自己的语言里布满了带有意志的词。我早些时候写道,我的基因建造了我,并把日常责任分配给了我的大脑。我的基因没有做这一类的事,这些事仅仅是发生了。

    E.O. 威尔逊在他的《综合知识》一书里甚至提出,道德是我们的本能的成体系的表达,什么是正确的确实是由什么是自然的而衍生出来的,尽管自然主义也有站不住脚的地方。这引出了一个矛盾的结论:信仰上帝或神是自然的,因此是正确的。但是威尔逊本人在成长过程中是一个虔诚的浸礼教徒,现在却是不信其有也不信其无,这样,他就反抗了一个有决定作用的本能。同样的,史蒂文?频克接受了自私基因的理论,却没有要孩子,他告诉他的自私基因“去跳河吧”。

    所以,即使是决定论者也可以躲开决定的因素。我们有了一个矛盾。除非我们的行为是随机的,否则它就是事先决定的。如果它是事先决定的,它就不是自由的。但是,我们感到——而且可以被证明——我们是自由的。查尔斯?达尔文把自由意志描述成是一个幻觉,是因为我们没有能力分析我们自己的动机。现代达尔文学派人士——例如罗伯特?特斯里弗——甚至提出,在这样的事情上我们欺骗自己也是一个进化来的对环境的适应。频克曾经把自由意志说成是“使得伦理游戏成为可能的人类的理想化”。作家丽塔?卡特(RitaCarter)说它是事先装在思维里的幻觉。哲学家托尼?英格拉姆(TonyIngram)把自由意志说成是我们假设别人拥有的东西——我们似乎有内在的倾向认为我们周围所有人和所有事物都有自由意志,从不听使唤的外板发动机到带着我们基因的不听话的孩子。

    我愿意相信,在解决这个矛盾时我们能够做得更好。还记得吗?在谈论第十号染色体的时候我描述过,组成对于压力的反应的,是对环境变化迅速做出回应的基因,而不是相反。如果基因能够影响行为,行为又能影响基因,那么就有了一个循环的因果关系。在一个循环反馈的系统里,简单的因果过程可以产生非常难以预料的结果。

    这种说法出自于混沌理论。我讨厌承认这一点,不过,是物理学家先发明的这个理论。18世纪法国的伟大数学家皮埃尔-西蒙·德·拉普拉斯(Pierre-Simonde La Place)曾经设想过,作为一个优秀的牛顿学派人士,如果他能够知道宇宙中每一个原子的位置和运动,他就能够预言未来。或者说,他猜到了自己不能预知未来,在琢磨为什么不能。时髦的说法是,答案在亚原子水平上,我们现在知道,那里的量子力学事件只是在统计的意义上是可以预测的,世界不是牛顿的桌球组成的。但是这并没有什么帮助,因为牛顿物理学其实在我们日常生活的尺度上是对事件的很精确的描述,没有人认真相信我们的自由意志依赖于海森伯(Heisenberg,量子物理学家,所提出的“不确定性原理”认为人们无法同时准确地测量一个粒子的位置与动量,对其一的测量越准确,对另一个的测量就越不准确。后来有人认为,这个原理给人的行为从本质上加上了随机性:因为我们无法在任何一个时刻测量出所有的变量,我们也无法预测人的行为)不确定性原理的那个概率框架。把原因说得直接一些:今天下午我在决定写这一段的时候,我的大脑没有掷骰子。采取随机的行动与自由地行动根本不是一回事——事实上,正相反。

    混沌理论给拉普拉斯提供了一个更好的回答。与量子物理不同,它不依赖几率。数学家所定义的混沌系统是事先决定的而不是随机的。但是这个理论说,即使你了解所有决定这个系统的因素,你可能还是无法预测这个系统的发展轨迹,这是因为不同的因素之间相互作用的结果。即使是因果关系简单的系统也可能有混沌状态的行为。它们这样是因为“自激”性,在这里,一个行动影响下一个的初始状况,所以,很小的结果会成为很大的原因。股票市场指数的走向、未来的天气和海岸线的“分形几何”,都是混沌系统:在每一种情况下,大概的轮廓和事件发展的大体方向是可以预测的,但是精确的细节却不能。我们知道冬天会比夏天冷,但是我们不知道下一个圣诞日是否会下雪。

    人类行为也具有这些特点。压力可以改变基因的表达,基因表达又可以影响对压力的反应,如此这般。因此,人类的短期行为是无法预测的,但是长期行为却大致可以。这样,在一天中的任何一个时刻我可以选择不吃饭,我有不吃饭的自由,但是几乎可以肯定,在那一天之内我是要吃饭的。我吃饭的时间可能会由很多因素决定——我的饥饿程度(部分由我的基因决定),天气(由众多的外界因素以混沌的方式决定),或者是另外某人决定问我要不要出去吃午饭(他是一个做事有因果的个体,我无法控制他)。这些基因与外界影响的相互作用使我的行为无法预测,但是它们并非没有决定我的行为。在字词的空隙里,有着自由。

    我们永远不可能逃避决定性,但是我们可以在好的决定性与坏的决定性之间做出区别——自由的和不自由的。假设我坐在加州理工学院下条信辅(Shin Shimojo)的实验室里,他此刻正用一根电极戳我的大脑里离前环沟(anteriorcingulate sulcus)很近的地方。因为对于“自愿行为”的控制就是在这个地方,也许他使我做了一个动作,在我看来具有所有的自愿行动的特征。如果问我为什么要动胳膊,我几乎肯定会很确信地回答,那是我自己的决定。下条教授要知道得更清楚(让我赶快加上一句,这是下条向我建议的一个设想的实验,不是真的)。与我的关于自由的幻觉相矛盾的,不是我的动作是被其他因素所决定这一事实,而是因为它是另外某人从外部决定的。

    哲学家A.J.艾尔(Ayer)是这样说的:
    如果我患上了强迫型精神病,以至于我会站起身来走到房间另外一头去,不管我想不想这样做,或者如果有人强迫我这样做,那么,我就不是在自由地行动。但是如果我现在这样做,我就是在自由行动,仅仅是因为上面说的两种情况不存在。从这个角度来看,我的行动仍然有一个原因这个事实是无关紧要的。

    一位研究孪生子的心理学家林登·伊弗斯(Lyndon Eaves)曾经说过类似的观点:
    自由是站起来超越环境限制的能力。这个能力是自然选择赋予我们的,因为它具有适应性……如果你要被推着走,你是宁愿被你的环境推着走,还是被你的基因推着走?环境不是你,而基因在某种意义上说就是你。

    自由在于表达决定你自己的那些因素,而不是决定别人的那些。“决定”不是区别所在,谁是决定因素的主人才是区别所在。如果自由是我们想要的,那么最好是让来自于我们内部的力量来决定我们,而不是让其他人内部的力量来决定。我们对于克隆人的厌恶有一部分是来自于这样的一个恐惧:我们的独特性要被另外一个人分享了。让基因在它们自己的体内为自己作决定,这样一个执著的信念是我们反对把自由丢给外界因素的最强堡垒。你是否已经开始看出来我为什么要半开玩笑地随便想想一个自由意志基因的想法?一个自由意志基因不是什么自相矛盾的事,因为它会把我们行为的来源放到我们身体之内,其他人拿不着。当然,自由意志不会由一个基因决定,而是由与基因相比无限宏伟、给人激励的力量决定:整个的人类本性,事先建立在我们的基因组里,具有灵活性,又是每个人所特有的。每一个人都有一个独特的与众不同的内在本性。这就是“自我”。

  • 密码保护:被评判的文章

    此内容受密码保护。如需查阅,请在下列字段中输入您的密码。

  • 陈洁:内幕交易特殊侵权责任的立法逻辑与规则设计

    自2005年《中华人民共和国证券法》(以下简称《证券法》)首次规定内幕交易民事赔偿责任至今20年间,我国证券市场内幕交易行政处罚的案件数量逐渐增多,但由法院作出裁决的内幕交易民事赔偿案件却寥寥无几。究其因,由于最高人民法院就内幕交易民事赔偿尚未出台类似虚假陈述侵权损害赔偿的司法解释,致使证券法中内幕交易民事赔偿责任条款因其过于原则更多起到宣言性作用。而推究最高人民法院未能出台内幕交易民事赔偿司法解释的深层次原因,主要有二:一是基础法理层面。理论界与实务界对内幕交易民事责任存在认识分歧,诸如是否需要构建内幕交易民事责任制度、内幕交易民事责任制度要达到何种法律效果等根本性问题,至今难以达成共识。二是技术规则层面。由于传统侵权责任制度规范难以直接适用于内幕交易侵权范畴,因此,关于内幕交易侵权行为的性质、内幕交易侵权责任的构成要件、内幕交易行为给投资者造成的损失怎样计算等问题,业界亦存在诸多分歧。

    鉴于最高人民法院现已明确启动内幕交易民事赔偿责任司法解释工作,为此,暂时搁置学理层面的争议,尽力厘清内幕交易民事赔偿制度的立法逻辑,并大力推进规则层面的体系化、规范化已是当务之急。为此,本文无意于在理论层面过多纠结于业界对内幕交易民事责任“肯定说”“否定说”的争论,而只是在认可内幕交易民事责任作为资本市场基础性制度构成并获得政策性选择的前提下,从有效防范和制裁内幕交易行为,充分保护投资者权益的视角出发,尝试解析我国内幕交易民事赔偿责任制度的构建逻辑,以及与立法逻辑相照应的且具有可操作性的核心规则设计,以期为我国内幕交易民事赔偿责任司法解释的出台贡献绵薄之力。

    一、追究内幕交易侵权责任的逻辑起点

    综观关于内幕交易民事赔偿责任的基础法理争议,主要有三点:一是内幕交易是否给投资者造成损害?二是内幕交易究竟侵犯了投资者什么权利?三是内幕交易民事赔偿的请求权基础是什么?这三个问题其实也是追究内幕交易侵权责任的逻辑起点。

    (一)内幕交易是否给投资者造成损害

    实务界对内幕交易民事赔偿责任持“否定论”的一个重要理由是,内幕交易确实对公平市场秩序造成危害,但是市场危害性不能当然推导出个别投资者民事索赔的正当性,因为内幕交易行为不会对个别投资者的个人权益造成损害。该观点进一步认为,内幕交易攫取的并不是某个或者某些可确定的特定投资者的利益,而是不特定投资者所共同代表的公共利益。对此,笔者以为,内幕交易对证券市场造成的损害是普遍的且严重的,它既给证券市场规则造成一般性的损害,也给投资者利益造成具体损害;既给所有的投资者造成普遍损害,也给具体投资者造成个别损害。此外,内幕交易也对证券的发行者造成了损害。

    内幕交易对投资者造成的损害,其实包括两个方面:一般损害和具体损害。1.一般损害是指投资者在一个规则受到损害的市场上从事交易,其实所有的市场投资者都承担了交易风险。析而言之,当掌握了内幕信息的人(以下简称“内幕人员”)利用内幕信息从事证券交易以求获利或者避损,其行为本身会减少其他投资者获利的机会。因为在证券市场中,投资既有亏损的时候,也有盈利的时候,盈亏相抵之后即为投资的净回报率。如果一般投资者亏损的概率保持不变,而盈利的概率却因为内幕人员参与交易而减少了,投资的净回报率显然就会降低,这样就间接地伤害了广大的投资者。这些损害虽然是难以计量的,但确是所有市场投资者所面临的风险。2.具体损害是指在一个具体的证券交易中,利用内幕信息进行交易的内幕人员获得了利益,而作为其交易相对人的投资者则受到了损害。客观上,在与内幕人员进行交易时,投资者会受到额外损害,这个损害就是内幕人员获得内幕交易所得的超过利润部分。内幕人员的额外收入,不是基于其自身的努力,诸如其对市场的分析调查或者其他生产性活动等,而是以其他投资者的损害为代价,内幕人员的所得正是对方所失。更进而言之,由于内幕信息的获得机会与对内幕信息的控制程度有关,而市场中往往只有大投资者才能够控制内幕信息并预防内幕交易,因此内幕交易的所得往往是以中小投资者的损失为代价的。这种损害取决于内幕交易发生的可能性,危害的大小与可能性的大小成正比。因此,严格禁止内幕交易,可以确保投资者之间处于实质平等的地位,有利于保护投资者权益。

    综上,由于内幕人员是在获取了内幕信息的情况下作出交易决策,因此可以推定如果内幕人员不知悉内幕信息,就不会实施交易,或者至少不会以相同的价格或者在时间区间实施交易。因此,内幕交易使内幕人员与普通投资者处于形式上平等而实质上不平等的地位,内幕人员实质上是从市场上攫取了本来不应当有的交易机会。这种不应当有的交易机会,既给不特定投资者造成一般损害,也给特定投资者造成了具体损害,同时还从根本上破坏了证券市场规则统一、地位平等、方式公平和机会均等的基本结构与功能要素。

    (二)内幕交易侵犯了投资者什么权利

    证券内幕交易本质上是个别内幕信息知情人利用信息优势与普通投资者开展的不公平交易行为。内幕人员与普通投资者交易,内幕人员必然具有更多的获利避损机会,而与其作相反交易的投资者则难免因此受损。在这样的交易中,内幕人员究竟侵犯了投资者什么权利呢?主流观点认为,内幕交易侵犯了股东知情权和公平交易权。对此,笔者以为,这个知情权的概念过于狭窄且定位有所偏差,内幕交易实际上侵犯了投资者公平信息获取权。

    首先,股东知情权与投资者公平信息获取权的差异。股东知情权是公司法上的概念,投资者公平信息获取权是证券法上的概念。尽管证券法与公司法都调整公司与其股东之间的关系,但是,公司法所调整的是公司与其股东之间的关系,证券法则调整证券发行主体与投资者之间的关系。在公司法结构框架下,公司法将公司与其股东之间的关系,以及基于这种关系而确定的公司董事、监事和高级职员与股东的关系,作为公司的内部关系来调整。但证券法则将股东(除了法定的内部人外)视为发行股票公司的“外部人”,将作为发行人的公司与投资者之间的关系作为外部关系来调整。这种调整模式差异的根源在于,证券法上的投资者包括公司现有股东和潜在的股东,其范围要大于公司法上股东的范围。公司法保护股东的权益,包括股东知情权,主要是对私益(特定股东的利益)的保护,而证券法所保护的投资者权益则更具有公益(不特定投资者的利益)的性质。公司法调整公司与股东之间的关系以股东平等为原则,证券法调整公司与股东之间的关系也以股东平等为原则,但证券法上的这一原则是投资者平等原则在特定范围中的应用,其所强调的是股东重大信息了解权的平等(如信息披露制度)和股东投资机会的平等。鉴于证券法所要实现的是证券市场的公平性与秩序性,所以,内幕交易实质上损害了投资者公平信息获取权。尽管公平信息获取权与股东知情权之间存在密切联系,甚至有相当重合,但二者的权利主体范围、权利性质还是有差异的。

    其次,内幕交易是否侵犯了投资者的公平交易权?这个问题的回答其实取决于公平信息获取权与公平交易权的关系解释。如前所述,证券法与公司法是两个相对独立的法域,各自具有不同的质的规定性。股东知情权的规制,是为了维护公司现存股东的股东权益公平合法的实现;对投资者的保护,以及对证券信息披露实施环节的规制,是为了维护投资者(包括公司的潜在股东)权益安全与公平的实现。在证券市场上,投资者是依据其所掌握的证券市场信息进行交易决策,为此证券法规定了严格的追求效率的信息披露制度。但投资者在公平获取信息之后能否作出最优投资决策,则不是证券法所要规制的问题。就内幕交易而言,内幕信息的重大性使其必然对证券市场价格产生重要影响,因此该信息是投资者对发行公司进行投资判断或者对该公司股票市场价格进行判断的依据。换言之,投资者与内幕人员不平等的核心是获取信息权。正是由于信息获得的差异,导致投资者投资决策的差异。至于公平交易权,它只是损害公平信息获取权附带的结果,损害了公平信息获取权必然损害公平交易权。因此,就内幕交易而言,投资者的公平交易权并非其直接侵害的对象,而是侵害投资者公平信息获取权的附属,故不应将公平信息获取权与公平交易权等同视之。事实上,在资本市场上,包括操纵市场、程序化交易等方式可能直接损害的是投资者的公平交易权。

    (三)内幕交易损害赔偿的请求权基础

    内幕交易民事赔偿责任是指违反《证券法》第53条规定的义务而产生的侵权损害赔偿责任,因此,内幕交易民事赔偿责任的性质是基于违反法定义务而产生的侵权之债。不过,在内幕交易侵权行为认定上,将内幕交易视为“欺诈”的观点相当盛行。对此,笔者以为,民事侵权法上的欺诈,欺诈者要有捏造事实或隐瞒真相的行为。而在发生内幕交易的场合,内幕人员对内幕信息的隐瞒,与欺诈行为中隐瞒真相的通常意义有所不同。其一,内幕人员并不一定是信息披露义务人,相反,在内幕信息公开之前,因职务或业务而获得内幕信息的人要负有保密义务,因此,对于内幕人员不将内幕信息透露给相对人的情形,不能全然认定为违法;其二,在证券集中市场上进行交易,内幕人员只需表示以特定价格买卖特定数量的特定证券,即可根据证券买卖的成交原则成就交易,内幕人员与相对人之间并无交流内幕信息的必要与机会。所以,在法律上不应当把内幕交易定性为欺诈行为,而应当定性为法律所禁止的不正当交易行为。

    关于内幕交易民事责任的请求权基础,对内幕交易民事责任持“否定论”的观点认为,内幕交易侵犯的是股东知情权,知情权不属于民事权利范畴,故投资者无法受到救济。其具体论证的过程是,原《侵权责任法》第2条第2款以列举加兜底的方式确定了18种代表性的“人身、财产权益”,但内幕交易的侵权客体皆无直接对应的权利类型。民法典相较于此前侵权责任法,其并没有以列举加兜底的方式来界定“人身、财产权益”,而是在第1164条中概括规定“侵权责任编”调整对象为“因侵害民事权益产生的民事关系”,知情权能否被“民事权益”这一概念所涵摄值得探讨。

    对此,笔者以为,民法典之所以放弃原侵权责任法列举加兜底的确定“人身、财产权益”的方式,而采用“民事权益”的宽泛表述,就是考虑到民事权益多种多样,立法难以穷尽,而且随着社会、经济的发展,还会不断有新的民事权益被纳入侵权法的调整范围。《民法典》第五章“民事权利”中第125条“投资性权利”规定:“民事主体依法享有股权和其他投资性权利”,结合公司法的规定,股权的内容通常包括股利分配请求权、公司剩余财产分配请求权、知情权等,所以,股东知情权以及由此延伸的投资者公平信息获取权作为投资性权利当然属于“民事权益”的范畴。

    此外,从体系化视角出发,尽管民法典总则编对民事权利的类型化进行重构,并通过分编对类型化的民事权利之变动和保护(包括救济)予以全面具体的规定,然而,某些民事主体因自身特征而享有的其他民事权利,包括知识产权、股权等投资性权利,民法典分编难以提供周到的保护。为此,就必须通过民商事单行法对民法典分编“无暇顾及”的“具体民事权利”提供保护,而且《民法典》第11条就此专门规定了“特别法优先”的法律适用规则。因此,对于同一事项,民商事单行法对民法典总则编或分编的相应规定作细化规定的,如补充性规定、限制性规定或例外规定的,应适用其规定。

    就股东知情权或投资者公平信息获取权而言,鉴于民法典总则编与公司法、证券法原则上是“抽象与具体”、“一般与特殊”的逻辑关系,股东知情权或投资者公平信息获取权涉及公司法、证券法的具体规定,尤其是《证券法》第53条明确规定:“内幕交易行为给投资者造成损失的,应当依法承担赔偿责任”。因此,依据“特别法优先”的法律适用规则,内幕交易侵权责任请求权问题就应当交由公司法、证券法单独处理,而不必机械纠结于民法典总则编的规定。

    二、内幕交易特殊侵权责任的逻辑结构

    最高人民法院于2015年12月24日发布的《关于当前商事审判工作中的若干具体问题》(以下简称《若干规定》)针对“虚假陈述、内幕交易和市场操纵行为引发的民事赔偿案件”提出,“在实体方面要正确理解证券侵权民事责任的构成要件。要在传统民事侵权责任的侵权行为、过错、损失、因果关系四个构成要件中研究证券侵权行为重大性、交易因果关系特殊的质的规定性。”该《若干规定》表明,最高人民法院是将内幕交易违法行为视为侵权行为并要求适用民事侵权责任的四大构成要件。但是,基于内幕交易侵权行为的特殊性,立法上是否应将其定性为特殊侵权行为并确立特殊的责任构成机制是内幕交易民事赔偿司法解释无法回避的基础性问题。

    (一)内幕交易特殊侵权行为的定位逻辑

    从侵权责任的基本法理出发,任何违法行为给他人利益造成损害,均须承担民事赔偿责任,因此,侵权民事责任的承担,并不以法律有明文规定为必要。但就特殊侵权行为而言,则必须依据法律的规定来认定。一般侵权行为与特殊侵权行为的识别,法技术层面判断的关键在归责原则。按一般法理,民法以过错责任为原则,若法律特别规定某类主体或某类行为须承担过错推定责任或无过错责任的,就可以认定是特殊侵权行为。而法律在一般侵权责任之外,要特别设置过错推定责任、无过错责任的特殊侵权规则,主要是考虑到案件双方力量失衡,某些特殊情形中要求受害人承担举证责任是不切实际或者颇为困难的,所以必须采用特殊归责原则以平衡双方利益,也体现对弱势受害人的倾斜保护。

    就内幕交易侵权行为而言,在我国当前规范意义的立法中,从证券法到相关司法解释,均未对内幕交易侵权责任的归责原则予以特殊规定,所以在实然层面,我国内幕交易侵权行为目前的定性应为一般侵权行为。但从内幕交易行政处罚和刑事责任追究的角度考察,2007年中国证券监督管理委员会(以下简称“证监会”)出台的《证券市场内幕交易行为认定指引(试行)》(证监稽查字〔2007〕1号,以下简称《内幕交易认定指引》)以及最高人民法院于2011年7月13日下发的《关于审理证券行政处罚案件证据若干问题的座谈会纪要》(法〔2011〕225号,以下简称《纪要》),都在试图根据内幕交易主体距离内幕信息的远近距离、对获取内幕信息的难易程度以及“知悉”内幕信息途径的不同,对内幕人员“知悉、利用内幕消息”的认定做分层次的推定规定。再从市场实践来看,不管行政执法还是刑事司法,执法机构在内幕交易事实认定中广泛适用推定规则是显而易见的。对此,笔者以为,我国有关内幕交易侵权责任的现行立法过于简单,尚未确立内幕交易特殊侵权行为的救济机制,因此无法实现保护投资者的目的。如果未来要在立法政策层面实现对内幕交易受害人的民事赔偿责任保护,并使《证券法》第53条规定的民事责任条款能够具体落地,就必须从内幕交易行为的特殊性出发,将其认定为特殊侵权行为,并规定过错推定等特殊归责原则。这也是此次出台内幕交易司法解释的出发点和落脚点。

    其一,内幕交易行为的特殊性。内幕交易行为技术性很强且兼具隐匿性。证监会曾指出,内幕交易案件“参与主体复杂,交易方式多样,操作手段隐蔽,查处工作难度很大。随着股指期货的推出,内幕交易更具隐蔽性、复杂性。”客观而言,在证券集中市场上,证券交易由计算机自动撮合成交,内幕人员只需在交易软件上下单,即可根据证券买卖的成交原则,以特定价格买卖特定数量的特定证券。由于是在非面对面的交易市场环境下,内幕人员与其交易相对人之间根本没有机会交流相关的内幕信息,因此投资者实难发现其与内幕人员之间的信息不对称。

    其二,内幕交易行为造成损害的特殊性。内幕交易行为的侵害对象,往往是不特定的投资者,因此,内幕交易行为造成的损害具有涉众性。此外,证券市场瞬息万变,投资者的损失是市场多种因素综合造成的。内幕交易行为造成损害的表现与计算具有复杂性,确定内幕交易侵权责任造成的损失需要运用更多的证券市场专业知识。而如何区分因内幕交易侵权行为造成的损害和正常市场风险带来的损害是证券损害赔偿的主要难点之一。

    其三,内幕人员与普通投资者之间力量的不平衡。与普通投资者相较,内幕人员往往掌握优势的信息和良好的技术、知识、经验,因而普通投资者在证券市场中处于弱势地位。加上内幕交易行为的技术性、隐蔽性等特征,权利受到侵害的投资者需要承担证明受到的损害与内幕交易行为具有因果关系,往往举证难度很大。在此情况下,由作为被告方的内幕人员举证证明投资者遭受的权利侵害并非因内幕交易而是由其他因素导致,无疑降低了受害方投资者的举证难度,亦对促进投资者进行民事权利救济具有关键性作用。

    综上,由于证券市场的特殊性以及内幕交易行为的特殊性,若按照一般侵权责任规则,要求投资者承担内幕交易与其损害之间的因果关系的举证责任,对于绝大多数投资者而言是“不可承受之重”。为保护弱势的公众投资者,增强公众投资者对资本市场公正的信心,就需要“通过无过错责任或者过错推定之下对特定侵权领域受害人权益做出特别的保护,在民事领域的行为自由与权益保护之间划分出不同于一般侵权行为的责任与行为之边界。”就内幕交易民事赔偿案件而言,应将内幕交易定性为特殊侵权行为并采取特殊侵权责任的构造模式,即内幕交易的归责原则应主要适用过错推定原则和无过错责任原则。鉴于内幕交易的主观构成要件必须是故意,无过错行为不构成内幕交易,而且内幕交易也存在免责事由,因此内幕交易的归责原则应该是过错推定规则,不适用无过错责任。概言之,内幕交易特殊侵权行为的立法模式才能体现出立法上对投资者保护,也体现出立法政策上对内幕人员与投资者之间利益平衡的考量。

    (二)内幕交易特殊侵权行为归责原则的特殊性

    如前所述,内幕交易侵权行为应定性为特殊侵权行为,内幕交易侵权行为的归责原则为过错推定原则。在过错推定原则下,一般行为人只要证明自己没有过错就可免责。不过,就内幕交易侵权责任的归责原则而言,其又具有特殊性。

    其一,内幕交易行为人只要证明自己没有故意,就可以免责。关于过错,侵权法上过错的基本形态可分为故意和过失,其中,故意可分为恶意和一般故意,过失可分为重大过失、一般过失和轻微过失。但在最高人民法院《关于审理证券市场虚假陈述侵权民事赔偿案件的若干规定》(法释〔2022〕2号,以下简称《虚假陈述新司法解释》)中,为了对中介机构的连带责任予以限缩,《虚假陈述新司法解释》将《证券法》第85条规定中的“过错”限定为“故意和重大过失”。就内幕交易而言,《证券法》第50条规定,“禁止证券交易内幕信息的知情人和非法获取内幕信息的人利用内幕信息从事证券交易活动”,这里的“利用”表明内幕人员必须有意识地使用内幕信息才构成内幕交易行为,即内幕交易构成要件中的主观方面只能是故意,过失不构成内幕交易。因此,内幕交易行为人只要证明自己不是故意,即便可能存在过失,也不构成内幕交易,也就无需承担内幕交易侵权损害赔偿责任。

    其次,内幕交易行为的类型化导致其归责原则存在差异。根据《证券法》第53条之规定,内幕交易在客观上具有三种表现形式,一是内幕信息知情人利用内幕信息买卖证券或者根据内幕信息建议他人买卖证券;二是内幕信息知情人向他人泄露内幕信息,使他人利用该信息进行内幕交易;三是非法获取内幕信息的人通过不正当手段或者其它途径获得内幕信息,并根据该信息买卖证券或者建议他人买卖证券。简言之,内幕交易行为通常是由内幕信息知情人实施的,但现实中也有不少非法获取内幕信息的人通过某种途径获得内幕消息并根据该信息从事内幕交易行为。《证券法》第50条规定:“禁止证券交易内幕信息的知情人和非法获取内幕信息的人利用内幕信息从事证券交易活动。”据此规定,无论是内幕信息知情人还是非法获取内幕信息的人,凡是利用内幕信息从事证券交易均可构成内幕交易行为,即构成内幕交易行为的实质在于是否利用内幕信息进行内幕交易,而不在于是否系内幕人员所为。但是,在实施规制内幕交易的制度措施时,对内幕信息知情人和非法获取内幕信息的人在规制原理与方式上是有所不同的。对于内幕信息知情人从事内幕交易予以禁止和制裁,其法理依据在于其不正当利用了在职务上或业务上的便利和优势地位。对于非传统内幕人员如非法获取内幕信息的人,禁止和制裁其从事内幕交易的法律依据,在于其盗用了公司的信息资产。就公司法层面而言,传统内幕信息知情人与公司之间存在基于身份联结形成的信义关系,基于其特殊地位、职责以及能够直接接触到内幕信息,这些人应当承担比一般人(如非法获取内幕信息的人)更高程度的信托责任与注意义务,因而在举证责任分配方面,内幕信息的法定知情人应承担更严格的举证责任。申言之,鉴于行为人距离内幕信息越近就越容易获取内幕信息,因此监管机构需要证明其内幕交易的内容就越少,甚至部分内容可以采取推定方式;随着行为人距离内幕信息越来越远,其推定方式受限,证明难度增加,故需要区别对待。具体体现在内幕交易侵权责任归责原则上,不同行为主体基于身份的差异承担的举证责任应该是差异化的。对于内幕信息的法定知情人从事内幕交易,可以实行过错推定,即只要内幕信息没有公开,内幕人员从事相关证券买卖的,即可认定其在从事内幕交易并且具有利用内幕交易谋利的过错。而对于内幕信息的法定知情人以外的人,认定其从事内幕交易,应当由投资者证明该事实存在,归责原则亦应实行过错原则。不过,在实践中,由于内幕交易的隐蔽性,投资者往往只能在监管机构对内幕交易实施行政处罚的“前置程序”后才提起民事赔偿诉讼。因此,内幕交易的存在以及内幕人员的过错问题实际上已经由监管机构予以解决了。

    三、“同时交易规则”的引入与适用

    (一)“同时交易规则”的确立

    凡是内幕交易必有受损害的投资者,但因内幕交易具有隐蔽性,很难在证券市场主体中辨别出与内幕交易行为人直接交易的投资者,因此,即便想对内幕交易提起集团诉讼,仅在确定和寻找适格原告这一环节就非常困难。为解决内幕交易侵权责任之难题,1988年美国修订《证券交易法》第20A条规定,“(a)任何人违反本款及其规则、规章,在掌握重要未披露信息时买卖证券,对任何在违反本款的证券买卖发生的同时,购买(违反以出售证券为基础时)或出售(违反以购买证券为基础时)了同类证券的人在有管辖权的法院提起的诉讼承担责任。”从该规定可以看出,那些掌握了内幕消息而在市场上从事交易者,必须对在同时期从事相反买卖的投资者负担民事赔偿责任。这一规定确立了“同期反向交易者”标准,同时解决了内幕交易因果关系推定以及原告的范围问题。

    对域外实践经验考察,“同时交易规则”的适用难点主要在于对“同时”的认定。从美国联邦地区法院的司法判例来看,大致有三种标准:第一种是要求必须是与内幕交易之后且同一天的反向交易者;第二种是要求必须是内幕交易之后三个交易日内的反向交易者,理由是交易结算采取T+2模式;第三种则是在少部分案件中,法院将“同期”的时限宽限至6-10个交易日,但没有说明理由和裁决依据。总的来看,美国司法实践对于“同期交易”的解释比较严格,大部分案件还是限制在同一交易日或者按照结算规则可以合理解释的同期范围内。

    就我国而言,“同时交易规则”已经在光大证券“乌龙指”事件引发的内幕交易民事赔偿案中被参考。对此,笔者以为,我国的内幕交易司法解释可以直接以推理的方式划出内幕交易行为的相对人范围,即在一定期间同时作与内幕交易相反买卖的投资者,具体指内幕交易行为人买入证券,则同时作卖出该证券的投资者;内幕交易行为人卖出证券,则同时作买入该证券的投资者。至于“同时交易”的确定问题,应该指内幕信息发生至公开之间的一段期限。如此规定,可能导致出现如下现象:1.有些投资者的交易可能发生在内幕交易行为人实施交易之前,即成为内幕交易的相对人。这在证券法上是应当允许存在的立法效果。因为内幕信息发生后,知情人员要么应当依法公开信息,要么依法禁止交易。如果知情人员违反规定进行了内幕交易,在内幕信息发生后但在内幕交易行为发生之前作相反买卖的投资者,同样会受到内幕交易的侵害。2.作为内幕交易相对人的投资者及其交易数量,可能远远多于内幕交易应有的相对人及其交易数量。这在证券法上也是应当允许的立法效果。因为作内幕交易相反买卖的投资者虽然远远多于内幕交易应有的相对人,但每一个作相反买卖的投资者都可能或多或少地受到内幕交易的损害。何况这种规定既可方便对内幕交易受害人的认定,又有对内幕交易行为的惩罚意义。不过,鉴于每个内幕交易案件具体情形不同,在司法实践中可以由法官对于“同期交易”的严格或宽松解释作必要的自由裁量。

    (二)“同时交易规则”与内幕交易损害赔偿请求权人的认定

    追究内幕交易的损害赔偿责任,首先要确定可以通过民事诉讼要求内幕交易者赔偿损失的投资者范围。如前所述,美国通过一系列判例法和成文法,使内幕交易民事诉讼的原告逐步限制在“同时交易者”。我国台湾地区“证券交易法”借鉴美国之规定,第157条之一规定,“违反法律关于禁止内幕交易规定之人,对善意从事相反买卖之人负损害赔偿责任。”

    与美国立法相较,我国台湾地区“证券交易法”似乎强调了内幕交易损害赔偿请求权人的“善意”问题。“所谓善意从事相反买卖之人,系指在证券集中交易市场与店头市场不知或非可得而知该公司内幕人员利用未经公开之内部消息,从事该公司之上市股票或上柜股票买卖之事实,而于内幕人员买入时,其正逢卖出,或内幕人员卖出时,其正逢买入而受有损害之人,包括在此项消息公开后开始买进而发生损害,或是在此项消息公开前卖出而产生价格差额损失之人”,“另善意从事相反买卖者虽系委托经纪商以行纪名义买入或卖出者,亦视为善意从事相反买卖之人。”其实,尽管美国《证券交易法》第20A条未提及善意问题,但针对内幕交易损害赔偿请求权人的资格问题,美国1981年上诉法院的判例中就曾指出,依据证券交易的性质,内幕人员为内幕交易时之卖出或买入行为,即为对在交易同时为相反竞价买卖行为的善意投资者的有效要约或承诺行为,因而,在当时为相反买卖的善意投资者均可被认为是内幕交易的当事人,也为恶意获利企图的牺牲者。由此,在证券市场中与内幕交易进行交易的善意投资者,是内幕交易的受害者,具有对内幕交易人提起损害赔偿之诉的资格。

    就我国而言,为了落实内幕交易民事赔偿责任制度,使其既要能够有力地制裁内幕交易行为,又要便于在司法实务中适用,采取“善意”+“同时作相反交易的规则”确定内幕交易受害人(即内幕交易损害赔偿请求权人)范围,是一种可资参照的制度建构思路。1.按照同时作相反交易的规则,内幕交易的受害人不限于与内幕交易行为人有直接交易联系的人,即并不是内幕交易行为人所卖出的特定证券的直接购买人,也不是内幕交易行为人所购买的特定证券的直接出售人。只要与内幕交易行为人作同种类证券的相反买卖时,即内幕交易卖出某种证券时,其他投资者正好作该种证券的买进,或者内幕交易行为人买进某种证券时,其他投资者正好作该种证券的卖出,即可认定为该内幕交易行为的受害人。2.作与内幕交易行为人相反的证券买卖,应当是与内幕交易行为同时发生的。不过,法律对于证券交易活动的“同时”,应当是有一定时间长度的时限。如果从内幕信息发生至公开之间的这段期限比较长,为避免可能的滥诉,实务中也可以将“同时”自内幕交易者进行的第一笔内幕交易开始计算,并将之限定在与内幕交易的同一个交易日内。当然,立法上还可以赋予法官根据具体的交易情形对“同时”加以分析判断并作出必要的时间长度限缩。3.“善意”是指投资者必须不知道内幕交易的存在,并非为了要求赔偿或其他非法目的而进行证券买卖。4.以“善意”与“同时作相反交易规则”作为认定受害人的标准,不同于民法上的一般规则,须以法律有明确规定为必要。因此,只有内幕交易侵权赔偿司法解释明确规定内幕交易受害人的认定规则,才能确定内幕交易损害赔偿请求权人即原告的范围,在司法实务中才能得以据此适用。

    (三)“同时交易规则”与因果关系的推定

    因果关系认定是内幕交易民事赔偿责任的核心问题。一方面,只有能够证明因果关系的投资者才有资格提起侵权赔偿诉讼;另一方面,因果关系也是决定投资者能获得多少赔偿的重要因素。这两个方面其实体现为两层因果关系,一是内幕交易行为和受损投资者投资行为之间的交易因果关系;二是内幕交易行为和受损投资者损失之间的损失因果关系。

    在虚假陈述情形下,各国立法大都利用市场欺诈理论来实现信赖推定,从而解决交易因果关系问题。简言之,在一个有效的证券市场中,如果所有因虚假陈述导致的不真实和具有欺诈性的信息都反映在证券的市场价格上,那么,所有接受了该证券市场价格从事交易的投资者都可以被看作是信赖了所有不真实和具有欺诈性的信息,从而推定投资者信赖了虚假陈述。因此,投资者只要证明其所投资的证券价格受到虚假陈述行为的影响而不公正,即可认为投资者的损失与虚假陈述行为之间存在因果关系。

    在内幕交易侵权责任中,如前所述,美国判例及立法确立的同时交易原则表明,只要原告属于和内幕交易同时反向交易者,法律就确认其所受损害与内幕交易行为之间存在因果关系,从而赋予原告要求内幕交易者赔偿其损失的权利。

    比较虚假陈述与内幕交易侵权责任因果关系推定的背后逻辑,二者其实是一脉相承的。依据美国内幕交易规范的法理,内幕交易之所以应予以处罚,并非因为内幕交易的行为人因知悉内幕消息而交易,而是因其知悉内幕消息,未经揭露而交易的缘故。因此,禁止内幕交易的本质,乃属于“单纯之隐匿”(pure omission)。从这个意义上说,内幕交易与遗漏型虚假陈述性质颇为相似。为此,在具有里程碑性质的1974年联邦第二巡回区Shapiro v.Merrill Lynch案中,法院援引了最高法院在遗漏型虚假陈述中推定对未披露信息之信赖和因果关系存在的1972年Affiliated Ute Citizens案之先例,在此基础上进而推定了因果关系在未披露内幕信息的内幕交易者和其他交易者之间存在。此判例确立了事实因果关系可由于其他理性投资者会因为知晓未披露信息的内容而改变交易决定而建立。

    对上述推定论证,当然有不同的声音。其中核心反对观点认为,在非“面对面交易”情况下,不知情的投资者并不是基于内幕交易者的引诱,而是独立作出的交易决策,因此内幕交易影响了原告的交易决策并认定存在交易因果关系的论断显然是不成立的。此外,内幕人员并不一定是信息公开义务人,并不负有公开信息之义务,甚至在内幕信息公开之前,因职务或业务而获得内幕信息的人要负有保密义务。因此,对于内幕人员不将内幕信息透露给相对人的情形,不能认定为违法。

    对此,笔者以为,欺诈市场理论就是为了解决证券市场无法像传统面对面那样来证明因果关系的困境而创造出来的。尽管在虚假陈述情形下,对信赖的证明通常表现为,只要被告负有公开义务而未公开重要信息,就认定已满足了因果关系的要求,而在内幕交易情形下,可能内幕人员并不负有信息公开义务,所以似乎无法满足信赖要求。但笔者以为,欺诈市场理论的核心就是有效市场价格包含了各种信息,投资者只要相信市场价格进行交易,就说明他信赖了市场,进而信赖了交易对手方。对内幕交易而言,其与虚假陈述核心的区别在于,虚假陈述是必须公开真实信息,但虚假陈述人违反了义务;而内幕交易是在信息公开前不能进行交易,但内幕人员违反了戒绝交易的义务。尽管二者违反的义务并不相同,但实质上都是让投资者因信赖市场而陷入错误的认识并进行了交易。就虚假陈述而言,如果知道真实信息,投资者就不会交易或不会以这样的价格进行交易;而如果知道了内幕信息,投资者同样不会交易或不会以这样的价格进行交易。所以,对投资者的损害而言,内幕交易与虚假陈述其实并没有本质的不同,或者说某种意义上内幕交易与虚假陈述确有共同之处,二者的实质区别在于,虚假陈述是在信息虚假上的主动行为,让投资者对信息的真实性产生误解而从事交易,而内幕交易则是在信息披露上的不作为行为,它让投资者在不明真相的情形下也从事交易,最终导致交易结果的不公平。因此,内幕交易侵权责任也可以效仿虚假陈述侵权责任的赔偿逻辑,基于欺诈市场理论建立起事实因果关系之推定。

    归结而言,在证券集中市场交易中,一个具体内幕交易的真正直接相对人是难以确认的,内幕交易的相对人所受损害与内幕交易行为之间的因果关系也是难以认定的。若不在立法层面直接建立因果关系推定规则,实务中就难以追究内幕交易行为人的民事责任。为此笔者建议,我国可规定对善意与内幕交易同时相反交易者推定交易因果关系成立,并进而推定损失因果关系成立,但被告能够证明原告的损失是由其他因素造成的除外。这种因果关系的推定,具有两个层面的法律意义,其一,把与内幕交易行为人同时作相反交易而产生的损失,在法律上视为与内幕交易行为有因果关系的损害;其二,对于该项因果关系,内幕交易的损害赔偿请求权人不需举证证明,投资者只要证明其作了与内幕交易同时相反的交易,法律即可推定该项因果关系存在。内幕交易行为与损害结果之间因果关系的推定,把内幕交易行为与具体的受害人及其损害在法律上连接起来,由此才使得追究内幕交易者的民事责任真正成为可能。

    四、内幕交易侵权损害赔偿的损失计算问题

    (一)内幕交易侵权损害赔偿的基本思路

    如何合理确定内幕交易的损失认定方式及赔偿金额一直是内幕交易民事审判中的难点。对于内幕交易的损害赔偿,从侵权责任法律规则填补损害的基本功能出发,内幕交易民事责任亦应坚持填补损害原则,即在原则上,投资者获得的赔偿数额不能超过其损失数额。

    但是,如前文所述,确定内幕交易中投资者权利受到侵害的损失,核心在于区分证券价格波动给投资者造成的损失中,哪一部分是由于内幕信息形成的价格波动对投资者造成的损失。但在实务中,证券价格波动受到多种因素影响,影响因素确认十分复杂且带有预测性,即使连专业的证券分析师也无法作出准确测算,更遑论由法院去进行实质性判断。

    参考成熟市场相关立法例,对内幕交易诉讼中损害赔偿额,多依据消息未公开前买入或卖出该股票之价格,与消息公开后的“合理期间”内股票价格之差额来确定。据我国台湾地区“证券交易法”第157之一规定,内幕交易损害赔偿之范围是在“就消息未公开前其买入或卖出该股票之价格,与消息公开后10个营业日收盘平均价格之差额限度内”。其中,所谓“消息未公开前其买入或卖出该股票之价格”,应指从内幕消息发生之日起到消息首次公开之日期间,违反内幕交易禁止规定者在集中交易市场或店头市场买进或卖出股票的价格。这里的“消息发生之日”,一般指公司决定或决议做成之日,相关契约签订之日等等。但对于内幕交易情节重大者,法院得依善良从事相反交易之人的请求,将责任限额提高3倍。因此,内幕交易者最高赔偿责任数额,可达到其通过内幕交易获利的3倍。

    概而言之,笔者以为,我国内幕交易司法解释关于内幕交易所造成损害数额的确定规则,应当包括以下内容:1.内幕交易受害人在特定证券交易中的单价损失幅度。即与内幕交易作相反交易时的特定证券价格,与内幕信息公开后一定期间内该证券平均价格之间的差额,就是内幕交易受害者在特定证券上遭受的损失。2.内幕交易受害人在该次交易中的损失范围。即由受害人作相反交易时买卖的证券数量,乘以单价损失幅度。3.确定内幕交易行为人的责任限额。由于内幕交易受害人是根据同时相反交易规则推定的,其损害范围及其与内幕交易行为之间的因果关系也是推定的,如果完全以充分填补损失为赔偿原则,那么发生一次内幕交易,行为人所赔偿的数额可能是天文数字。因此,法律应当确定内幕交易行为人的责任限额,以求制度公平。内幕交易行为人的责任限额,通常就是内幕交易非法所得的数额。4.通过司法调整责任限额与受损数额之间的平衡。法律可以规定法院在确定内幕交易行为人责任限额上有一定的裁量权,这样即可以根据受害人所受损失的情况、受害人的请求,以及内幕交易的情节,对内幕交易行为人的责任限额予以适当提高,既可提高受害人获得补偿的程度,又可适度加重对内幕交易人的民事制裁。

    (二)因内幕交易受损的具体损失的计算

    如前所述,受内幕交易行为的损失计算,应当是投资者买入或卖出的证券价格与内幕信息公开后该证券市场价格之间的差价损失。归纳上述经验分析,可以将内幕信息公开后10个交易日为内幕信息的市场吸收期间,即内幕信息公开经过10个交易日之后,该项公开的信息视为不再影响投资者的投资判断。当然,根据我国当前证券市场的交易量、交易换手率等具体情形,还可以对内幕信息的市场吸收期间予以更精确地确定。

    此外,内幕交易行为人的赔偿数额是否应当以其违法所得额为限的问题,笔者以为,如果相关规则设定内幕交易行为人应对内幕信息发生至公开期间作相反买卖的投资者,就其股票买入或卖出价格与内幕信息公开后10个交易日平均价格之间的差价损失进行赔偿,受内幕交易行为损害的投资者损失数额通常要大于内幕交易行为人的违法所得额。为了进一步惩罚内幕交易行为,可以规定将其赔偿额度提高至违法所得额的3倍。但是,如果这样仍然不能足额赔偿投资者损失的话,可以规定按比例赔偿的制度,即按照投资者损失额占所有投资者损失总额的比例予以赔偿。在这种情况下,如果继续坚持对投资者实行足额赔偿,可能产生新的不公平。其一,按照内幕交易行为人违法所得额的3倍赔偿给投资者,已经是对内幕交易行为人的严厉惩罚。如果继续提高赔偿额度,对内幕交易行为人亦不公平。其二,投资者的损失与内幕交易之间的因果关系,本来就是根据证券法规定而推定的,而且投资者的损失也只是部分地与内幕交易有关。按比例赔偿措施对投资者的保护力度,实际上也是十分充分的。

    五、代结论

    任何规范市场行为的法律制度,都应当根据市场活动的机制和特点而定。我国内幕交易民事赔偿责任制度的构建,也必须根据内幕交易的活动方式确定其行为构成,并设计相应的规制措施。笔者以为,立足我国当前证券市场的发展阶段以及证券市场交易的现实情形,我国内幕交易民事赔偿规则的核心规则大致如下:“内幕交易行为人应对内幕信息发生至公开期间作出相反买卖的善意投资者,就其股票买入或卖出价格与内幕信息公开后10个交易日平均价格之间的差价损失,在内幕交易行为人违法所得额度内承担赔偿责任。内幕交易违法所得额不足以赔偿投资者损失的,应受损害投资者的请求,可以将内幕交易行为人的赔偿额度提高至其违法所得额的3倍。投资者仍然得不到足额赔偿的,按照其损失数额占所有投资者损失总额的比例受偿。”该规则第1款规定了因内幕交易而造成投资者损失的计算方式,投资者损失与内幕交易的因果关系,以及内幕交易赔偿额度的一般标准。规则第2款规定了内幕交易赔偿额度的惩罚性标准,以及在内幕交易赔偿额度不足以赔偿时,对投资者赔偿数额的计算方法。

    本文来源:《法律适用》2024年第10期。

  • 欧阳晓莉:两河流域文化元素在古埃及前王朝时期的发现

    古埃及文明在发展之初就受到了西亚的影响。公元前6500—前6000年间,地中海东岸黎凡特地区连年干旱,致使部分居民迁徙到埃及。他们把早已驯化的大麦、小麦、绵羊和山羊带到埃及,从此揭开了当地农业革命的序幕。前王朝时期(对应涅迦达文化,约公元前4000—前3000年),在上埃及河谷地区从希拉康波利斯经涅迦达到阿拜多斯这段不足250千米的尼罗河两岸,古埃及文明的火种得以点燃并最终以燎原之势扩散到上下埃及全境。正是在涅迦达文化的中晚期,来自古代两河流域的文化元素在古埃及崭露头角。

    一、图案、滚印与青金石

    古代两河流域对古埃及文明影响的讨论始于1923年的一篇新闻报道:著名考古学家皮特里(Flinders Petrie,1853-1942)认为卢浮宫博物馆收购的一件文物——戈贝尔?艾尔—阿拉克刀柄(Gebel el-Arak Knife Handle)——证实了埃及王朝的创立者,即所谓的王朝人种,来自两河流域南部的苏美尔地区和伊朗西南部的苏萨。该刀柄象牙材质,长约25.5厘米,宽约4.5厘米,可能属涅迦达文化III期(约公元前3200—前3100年)。皮特里基于这一刀柄对古埃及历史起源的阐释早已过时,但刀柄正面上方图案中一名成年男子双臂分别搏击两头狮子的主题毋庸置疑来自两河流域,男子连面蓄须、头系宽边发带、上身赤裸、下身着过膝长袍的形象,同样具有显著的两河流域苏美尔文明的特色。此图像的两河流域风格是如此之明显,以至于有学者提出刀柄乃两河流域的工匠在埃及所制。

    同样的驯兽者主题还出现于希拉孔波利斯第100号墓的彩绘壁画。虽然墓室的建筑特点指向涅迦达文化II期(约公元前3500—前3200年),但壁画风格属于III期。其中一幅画面有一位通体红褐色、腰系白带的男性用双臂分别与两头站立的狮子做搏击状。这被认为是古埃及艺术借用外来元素的最早案例。埃及同时期的本土图像艺术更具自然主义的风格,这类外来主题与之相比具有一定独特性。

    除搏击猛兽的驯兽者外,其他来自两河流域的图像主题还包括:长颈猫科动物、带翅膀的狮身鹰首兽、盘绕花朵的蛇以及行进中的动物行列。在那尔迈调色板这一埃及前王朝时期最著名的文物上,就出现了一对长颈猫科动物的形象:它们的长脖互相交缠,其间的凹槽形成一个调色碟,颈部靠头的位置各系着一根绳,都由一位男性拉住。一对高度相似的长颈缠绕的动物形象同样出现在两河流域乌鲁克时期的一枚碧玉滚印(cylinder seal)之上。那尔迈调色板也出土于希拉孔波利斯,在一处神庙遗址掩埋宝藏的地方被发现。传统解释认为它表现了上埃及国王那尔迈征服下埃及并最终统一埃及全境的进程,但更新的学说强调浮雕反映的并非上下埃及统一的特定历史事件,而是象征着国王率军打败敌人的一般化战斗过程。

    上述传播至埃及的图案在两河流域的重要载体是滚印,它是两河流域独创的印章形制,目的在于使印章图案及铭文在泥球或泥板表面的面积最大化。在埃及迄今为止已发现了20枚左右两河流域风格的滚印,其中部分可能是出自埃及工匠之手的仿制品。涅迦达遗址有两处墓葬,一处同时出土了滚印和青金石,滚印图案类似于两河流域南部拉格什和伊朗西南部苏萨所发现的滚印的图案;另一处墓葬中的滚印,其图案类似于两河流域南部的乌尔和北部的高拉(Tepe Gawra,摩苏尔东北20余公里)以及叙利亚境内布拉克遗址(Tell Brak)所出土滚印的图案。上述墓葬中的滚印年代都约为涅迦达文化II期的中晚阶段。

    青金石质地的印章是两河流域滚印中最昂贵的种类之一。在公元前4千纪的西亚和北非,青金石最可能的来源是阿富汗东部的巴达赫尚省(Badkhshan)。在古埃及前王朝15000处左右墓葬中,约167处出土了青金石(比例略高于1%)。考虑盗墓以及早期发掘报告欠完备的因素,以青金石为陪葬品的墓葬比例应该更高。但到了古埃及第一王朝时期(约公元前3000—前2890年),青金石的分布范围缩减,仅出现在最顶级的社会精英成员的墓葬中。在随后几个世纪,青金石甚至从古埃及墓葬中完全消失。它再度出现是在第四王朝法老斯奈夫鲁(约公元前2613—前2589年在位)的王后墓中。在本文涉及的时段内,阿富汗东部是唯一已知的青金石产地。因此,追踪考古发现中的青金石有助于重构中亚、伊朗、两河流域和埃及之间的贸易路线。

    在整个埃及前王朝时期,在阿拉伯半岛都未发现青金石的考古遗迹。因此现有模型推断,青金石首先从阿富汗经伊朗高原运抵两河流域,再转运至两河南部居民在叙利亚建立的殖民地(详见下文),最后经海路从黎凡特北部(如叙利亚遗址杰贝勒?阿鲁达(Jebel Aruda))到达埃及。两河流域最早的青金石实物则回溯至公元前6千纪晚期,是一些发现于北部亚明遗址(Tepe Yarim)的念珠。在公元前5千纪至前4千纪中期之间,两河流域北部垄断了上述陆上青金石之路,仅在高拉一处遗址就发现了500余件青金石的念珠、印章和镶嵌物。虽然青金石原料并非两河流域的特产,青金石文化却首先绽放于该地区并传播到古代近东其他区域。在两河流域,青金石被赋予了非同一般的象征意义,并与神祇和王权密切联系在一起。

    二、“乌鲁克扩张”

    古埃及前王朝对应于两河流域的乌鲁克时期(约公元前3900/3800—前3200/3100年)。此时的两河流域以南部城市乌鲁克为中心,经历了一个社会突变和飞速发展的阶段,学术界称之为“乌鲁克扩张”(Uruk Expansion)。在此之前,两河流域南部与北部、伊朗西南部以及地中海东岸的黎凡特地区,在定居点的绝对规模和定居点相互间的差异上并没有显著不同。但到了公元前4千纪下半期,两河流域南部冲积平原上的政治组织(polity)在整体规模、内部分化程度以及定居点的等级结构上都远超古代近东乃至世界其他地区。进入公元前4千纪晚期,两河流域南部已发展出相互竞争的若干城邦,它们政治上分裂但文化上彼此相近,同时还向外扩张。两河流域南部最早的苏美尔文明从此腾飞,在城市化、社会政治复杂化和经济差异化等方面均领先于世界。

    两河流域南部的经济增长始于公元前5—前4千纪早期,主要贸易品是地方性特产,如羊毛及其纺织品、皮革制品、奶制品、谷物、蔬菜瓜果、亚麻纺织品、各种熏鱼或咸鱼、禽类以及芦苇制品,这些物产分别来自游牧部落、农业定居人群以及生活在底格里斯河和幼发拉底河入海口沼泽地带的居民。第二阶段始于公元前4千纪中期,此时精英阶层的社会意识增强,贸易品生产的专业化程度降低,各群体都利用前一阶段积累的剩余产品和人力资源进行生产,以取代从周边乃至两河流域以外地区的进口。这一使用当地产品替代进口物品的机制促进了经济发展。第三阶段则是公元前4千纪下半期,特点是对外贸易的大幅度增长。此时两河流域的羊毛织物在周边地区大受欢迎,需求强劲。加之驴被驯化为驮兽,其负重至少是人的两倍,大幅提升了长途运输能力。这两个刺激因素在后来公元前2千纪早期两河流域与小亚细亚的古亚述贸易中都有明确证据。

    随着地方贸易和对外贸易的发展,南部乌鲁克的居民开始迁入两河流域周边地区。他们最初组成小规模的移民社群,生活在当地的资源开发或调运中心。他们进而控制了当地的这类中心,将其建设为跨地区的贸易枢纽。还有一种情况就是乌鲁克殖民者白手起家,在一片处女地上建立定居点并沿袭两河流域的社会和城市习俗。判断这些定居点与两河流域南部有关的主要依据是物质文化材料,包括建筑、陶器和雕刻作品的风格,以及陶筹和滚印的使用等。

    最后一类乌鲁克居民白手起家建立的定居点,在考古遗迹中最易辨认,主要坐落于土耳其东南部和叙利亚北部的幼发拉底河河畔,周围环绕着小规模的乌鲁克村落群,为其提供农业和畜牧业产品。最具代表性的遗址是叙利亚境内的哈布巴?卡比拉(Habuba Kabira-süd)和附近规模更小的杰贝勒?阿鲁达,两地距幼发拉底河的传统渡河点迈斯凯内(Meskene)都不远。杰贝勒?阿鲁达是一处乌鲁克晚期新建的定居点,占地略大于3公顷,所在的小山包俯瞰幼发拉底河河谷。它的主要建筑是规模较大的民居,显然是精英人士的住宅,说明该定居点可能是哈布巴?卡比拉的行政中心。杰贝勒?阿鲁达南面约8公里处就是同期大得多的定居点哈布巴?卡比拉。它同样是一处乌鲁克晚期新建的定居点,初期面积约为6公顷,后来扩张加倍,建有防御工事且规划整齐,居住区、产业区及行政区界限分明,显然是人为规划的成果。防御工事外还延伸出一大片居住区,面积最大时达22公顷。

    上述三类遗址——有乌鲁克居民居住的当地定居点、乌鲁克居民控制的当地中心、完全由乌鲁克居民建立的定居点——的功能引发了诸多讨论。它们可能是出于各种原因离开两河流域南部的居民所建立的住所,也可能位于两河流域南部与周边地区进出口产品的商业要道附近,同时还是各种信息和情报的汇聚点。前两类规模较小的定居点在个人或团体的投资范围内,但像杰贝勒?阿鲁达和哈布巴?卡比拉这类新建定居点应该得到了两河流域母邦的财力和人力资助。

    乌鲁克扩张过后,接踵而来的捷姆迭特?那色时期(Jemdet Nasr,约公元前3100—前2900年)是两河流域发展相对缓慢的年代。自公元前2900左右开始,两河流域进入了城邦争霸的早王朝时期。

    三、黎凡特地区的走廊作用

    近几十年间,研究两河流域与埃及早期文明交流的重点在于探讨“乌鲁克扩张”期间两河流域对埃及的影响,以及居间的叙利亚和土耳其的定居点、村落、市镇乃至城邦在两地间的物资流动和文化传播中所发挥的作用。阿拉伯半岛的角色还有待深入探讨,但也不容忽视。

    考古人员在南黎凡特考察或发掘了约40处关乎古埃及历史初期的遗址,它们基本分布在后世古埃及人称之为“荷鲁斯之路”的东地中海沿岸的狭长地带。埃及风格的建筑遗迹和来自埃及的陶器(那尔迈的名字在陶片上最常见)都有发现,还有类似堡垒的设施,其目的可能就是保障通往埃及的贸易路线畅通。位于尼罗河三角洲东部的遗址泰尔—法卡(Tell el-Farkha)被认为是上埃及权贵建立的物品中转站和管理中心,以保证和促进与西亚的贸易。这里出土了大量来自巴勒斯坦和两河流域的器物,还有作为驮兽的驴的遗骸。在阿拜多斯的代表性墓葬U—J墓中(约公元前3200—前3150年)发现了约2000件陶罐,其中仅少数产自埃及本地,大部分则来自今天的巴勒斯坦地区,用于盛装那里出产的葡萄酒;还有用黎巴嫩松木制作的冥器,以及可能来自埃塞俄比亚的黑曜石等坚硬宝石。

    两河流域对埃及的影响也表现在建筑材料和装饰风格上。在公元前3千纪的早王朝和古王朝时期,埃及盛行一种长方形的马斯塔巴大墓(mastaba),因其梯形体的地上建筑酷似阿拉伯板凳而得名。第一王朝的创建者阿哈(Aha)就在涅迦达为其母建造了巨大的马斯塔巴墓。这类王室大墓往往建有附属的祭庙,祭庙外墙由泥砖砌成且呈现出壁凹式的装饰(niched facade)。带有这类装饰的建筑通常被称为“宫殿正门”,因为它与王宫的出现相关且与王权的关系密切。作为建筑材料的泥砖和壁凹式的装饰风格都类似于两河流域乌鲁克遗址的神庙外墙。这类外墙先后发现于两河流域南部的乌尔和乌鲁克以及北部的高拉,后来又发现于叙利亚境内的乌鲁克扩张时代遗址哈布巴?卡比拉和杰贝勒?阿鲁达(见上文),其空间分布佐证了乌鲁克的扩张现象和叙利亚在埃及与两河之间的桥梁作用。

    四、两大文明交相辉映

    总体而言,两河流域对早期埃及的影响在物质文化上的表现主要包括青金石、滚印、雕刻艺术中的特定图案主题和建筑装饰上的壁凹式外墙,这些因素在前王朝时期下半段(公元前4千纪下半期)已经出现在埃及,在希拉孔波利斯、涅迦达和阿比多斯三地的墓葬遗存中均有发现,其背后的推动力可能与两河流域南部的“乌鲁克扩张”密切相关。这一时期乌鲁克文化正处在扩张的高潮阶段,两河流域南部的长途贸易发达,进出口产品丰富,居民们或私人筹资,或得到所在城市资助,得以远赴两河流域北部和黎凡特地区建立规模不等的定居点乃至商业殖民地,两河流域的文化也随之得以传播。通过黎凡特这一地理走廊以及从黎凡特到埃及的海路运输,上述特定文化元素到达了埃及。与此相反,在两河流域并未发现来自埃及的文化元素。

    但笔者以为,这一反差并没有强烈到足以宣称两河文明影响了埃及早期文明的进程。无论青金石、滚印还是泥砖,抑或雕刻图案和壁凹式装饰,其绝对数量和空间分布都相当有限,基本局限在精英阶层的墓葬中。与其说它们来自两河流域这点吸引了埃及受众,毋宁说它们的异域风情被埃及的社会上层所借用以传达王权至高的意识形态。此时埃及的社会精英或许在摸索建立权威、彰显等级秩序的不同路径,因而借用了这些来自两河流域的文化要素。

    随着上下埃及的统一和第一王朝的建立,来自两河流域的文化元素在公元前3千纪初从埃及本土一度消失了。埃及与西亚的贸易明显减少,原来派驻在加沙、以色列南部以及努比亚的人员撤回本土,尼罗河第一瀑布和尼罗河三角洲东北部分别成为埃及与努比亚以及西亚之间的界限。埃及文明从此步入自我形塑的时代,以象形文字、金字塔和木乃伊为基本特征的埃及文化其本土特色日益鲜明。同期的两河流域则在“乌鲁克扩张”后进入了发展相对缓慢和地方化倾向愈加突出的阶段,似乎丧失了文化输出的动力和能力。

    纵观古代埃及和两河流域的史前时代,两大文明既各有千秋,又交相辉映。在埃及,当农业和畜牧业发展后,一定规模的定居区域在公元前4000年后才逐步形成。埃及定居点的出现不仅比两河流域晚得多,而且过程缓慢,其城市化也没有后者那样普遍和彻底。它在史前的政治格局起初也朝着两河流域南部众多城邦共生并存的方向发展,但这一趋势不久后便中断,转而向上下埃及统一的国家迈进。而在两河流域,史前的多城邦共存竞争局面一直延续到长达五六百年的早王朝时期(约公元前2900—前2350年),之后才迎来首个统一南北两部的阿卡德王朝。正因为统治阶层建构权力的模式不同,尽管两个文明之间存在诸多相似之处,但埃及发展成了统一的、以王权为特征的领土国家,两河流域则保持了众多城邦竞争和并存的传统。

    本文转自《世界历史》2023年第1期

  • 许宏:考古学视角下的中国诞生史[辑编]

    从司马迁的记载开始,三代王朝夏、商、周是华夏族的成丁礼,再之前是悠长的婴儿和少年时期,从这个时候开始成熟起来,然后有了一个比较大的王朝国家。但是究竟是夏还是商,现在还有争议,我们看这个表就比较清楚。我们一直以来就存在着历史文献学和考古学两大话语系统,这两大话语系统最初是边界明显的:一边是历史文献上的伏羲、女娲、三皇五帝、夏商周王朝;一边是考古学上的前仰韶、仰韶、龙山、二里头、二里岗时代。这两大话语系统的合流是在殷墟。为什么是在殷墟?有一个绝对不可逾越的条件就是,当时有可以证明自己族属和王朝归属的文字材料出现,这才可以把这两大话语系统整合,以后的西周、东周、秦汉魏晋都可以证明,但在那之前没有文字材料,没有史证。在前殷墟时代,如果我们把考古学遗存跟它的族属、王朝归属相对应的话,都只能是推论和假说。就是因为它没有直接性的文字材料,所以在大的历史分期上,我们习惯于把它分成历史时期(history)——有明确文字记载的时期;原史时期(proto-history)——文字开始出现,但还不足以解决狭义的历史问题;史前时期(pre-history),基本上就是这样一个脉络。

    史前、原史、历史阶段划分与对应史料

    前中国时代与“中国”的初兴

    许宏:考古学视角下的中国诞生史
    华夏文明腹心地区的五颗明珠——五大都邑遗址,都背靠邙山,面向古洛河

    任何事物都有其从无到有,从小到大,发生发展的过程,国家起源以及中国文明的形成也不例外。考古学揭示出的距今五六千年以来的东亚大陆展现了这样的图景。大约距今六千年以前,广袤的东亚大陆上的史前人群,还都居住在不大的聚落中,以原始农业和渔猎为主,过着大体平等、自给自足的生活。各区域文化独立发展,同时又显现出一定的跨地域的共性。到了距今5500~3800年间,也就是考古学上的仰韶时代后期至龙山时代,被称为东亚“大两河流域”的黄河流域和长江流域的许多地区,进入了一个发生着深刻的社会变革的时期。随着人口的增长,这一时期开始出现了阶层分化和社会复杂化现象,区域之间的文化交流和摩擦冲突都日趋频繁。许多前所未见的文化现象集中出现,聚落形态上发生着根本的变化。如大型中心聚落及以其为核心形成的一个个大群落,城墙与壕沟、大型台基和殿堂建筑、大型祭坛、大型墓葬等耗工费时的工程,随葬品丰厚的大墓和一贫如洗的小墓所反映出的社会严重分化等等,都十分令人瞩目。

    众多相对独立的部族或古国并存且相互竞争。如中原及周边的仰韶文化、石峁文化、陶寺文化、王湾三期文化,西北地区的大地湾文化、齐家文化,辽西和内蒙东部的红山文化,山东地区的大汶口文化、龙山文化,江淮地区的薛家岗文化,长江下游的凌家滩文化、崧泽文化、良渚文化,长江中游的屈家岭文化、石家河文化,长江上游的宝墩文化等,在文化面貌上各具特色,异彩纷呈。

    红点是当时邦国中心所在地

    那是一个“满天星斗”的时代,邦国林立是那个时代最显著的特征。有的学者将其称为“古国时代”或“邦国时代”,有的则借用欧美学界的话语系统,将其称之为“酋邦时代”。无论如何,那是一个小国寡民的时代。整个东亚大陆的面积,与现在的欧洲差不多,而当时的这些星罗棋布的古国或部族,也和现在欧洲的样态差不多。那么,问题来了:它们都属于“中国”吗?

    要说清这件事,得先捋一捋相关的概念。关于“文明”的解说五花八门,这里无法详细展开,但说古代文明是人类文化发展的较高阶段或形态,而其标志是“国家”的出现,应会得到大多数人的认可。[……]

    显然,中国有5000年文明史的提法,是把这些都当成了中华文明史也即“中国”诞生史的一部分。其认知脉络是,这些人类群团在相互交流、碰撞的文化互动中,逐渐形成了一个松散的交互作用圈,这也就奠定了后世中华文明的基础。随着1970年代末期以来一系列重要发现的公布,中国在三代王朝文明之前即已出现了城市和国家,它们是探索中国文明起源的重要线索的观点得到了普遍认同。源远流长,单线进化,从未间断,成为中国学术界在中国文明起源问题上的主流看法。

    这当然是有道理的。[……]说中华文明可以上溯到新石器时代甚至旧石器时代的认识,显然出于这样的考虑。但这样无限制地追溯,意义何在?同时,其认知前提是百川归海的单线进化论,而事实果真如此吗?甚而,在不少人心目中,一个默认的前提是,现中华人民共和国境内的古代遗存,理所当然就是中华文明的源头。这样的认识,可以成立吗?

    首先,考古学家观察到的上述许多古国或部族,大都经历了发生、发展乃至最后消亡的全过程,也即它们各自谱写了完整的生命史的篇章,而只是给后起的中原王朝文明以程度不同的文化给养或影响。到公元前2000年前后,它们先后退出历史舞台,在这些人类共同体和后来崛起的中原文明之间,有一个“连续”中的“断裂”。这种断裂究竟是出于天灾还是人祸,原因想必多种多样,学术界还在探索之中。在某些区域,“大禹治水”传说中的大洪水,或许就是原因之一。考古学的研究对象是支离破碎的古代遗存,所以知其然不知其所以然的事,所在多有。

    如前所述,我们知道在现在的中国境内,上古时期曾有众多相互独立的国家并存。而顾名思义,在“国”前冠以“中”字,“中国”也就有了“中央之城”或“中央之邦”的意蕴。这同时也说明“中国”已并非初始阶段的国家,显然,它一定是一个在当时具有相当的影响力、具有排他性的核心。因而,我们也就不能说最初有多个“中国”,作为发达、复杂的政治实体的“中国”也是不能无限制地上溯的。

    许宏:考古学视角下的中国诞生史
    史前时代东亚城址的三大系统

    说到“中国”,还要捋捋这一概念的源起和演化。在出土文物中,“中国”一词最早见于西周初年的青铜器“何尊”的铭文。而在传世文献中,“中国”一词最早出现于东周时期成书的《尚书》和《诗经》等书中。“中国”一词出现后,仅在古代中国就衍生出多种含义,如王国都城及京畿地区、中原地区、国内或内地、诸夏族居地乃至华夏国家等。“中国”成为具有近代国家概念的正式名称,始于“中华民国”,是它的简称;现在也是“中华人民共和国”的简称。其中,最接近“中国”一词本来意义的是“王国都城及京畿地区”,那里是王权国家的权力中心之所在,已形成具有向心力和辐射性的强势文化“磁场”。其地理位置居中,有地利之便,因此又称为“国中”、“土中”或“中原”。

    那么,究竟是什么时候,后世“中国”的雏形或者说“最早的中国”崛起于世呢?

    按古代文献的说法,夏王朝是中国最早的王朝,是破坏了原始民主制的世袭“家天下”的开端。一般认为,夏王朝始建于公元前二十一世纪,国家级重大科研项目“夏商周断代工程”,把夏王朝建立的年代定为公元前2070年左右。在考古学上,那时仍属于龙山时代,在其后约200多年的时间里,中原地区仍然处于邦国林立,战乱频仍的时代,各人类群团不相统属,筑城以自守,外来文化因素明显。显然,“逐鹿中原”的战争正处于白热化的阶段,看不出跨地域的社会整合的迹象。也就是说,至少在所谓的夏王朝前期,考古学上看不到与文献相对应的“王朝气象”。

    与此同时,兴盛一时的中原周边地区的各支考古学文化先后走向衰落;到了公元前1800年前后,中原龙山文化系统的城址和大型中心聚落也纷纷退出历史舞台。代之而起的是,地处中原腹地嵩(山)洛(阳)地区的二里头文化在极短的时间内吸收了各区域的文明因素,以中原文化为依托最终崛起。二里头文化的分布范围首次突破了地理单元的制约,几乎遍布于整个黄河中游地区。二里头文化的因素向四围辐射的范围更远大于此。

    伴随着区域性文明中心的衰落,此期出现了超大型的都邑——二里头遗址。地处中原腹地洛阳盆地的二里头遗址,其现存面积达300万平方米。经半个多世纪的田野工作,在这里发现了中国最早的城市主干道网,最早的宫城,最早的多进院落大型宫殿建筑,最早的中轴线布局的宫殿建筑群,最早的封闭式官营手工业作坊区,最早的青铜礼乐器群、兵器群以及青铜器铸造作坊、最早的绿松石器作坊、最早的使用双轮车的证据,等等。这样的规模和内涵在当时的东亚大陆都是独一无二的,可以说,这里是中国乃至东亚地区最早的具有明确城市规划的大型都邑。

    二里头文化与二里头都邑的出现,表明当时的社会由若干相互竞争的政治实体并存的局面,进入到广域王权国家阶段。黄河和长江流域这一东亚文明的腹心地区开始由多元化的邦国文明走向一体化的王朝文明。作为广域王权国家概念的“中国”,在前一阶段还没有形成。

    要之,我们倾向于以公元前1700年前后东亚地区最早的核心文化——二里头文化,最早的广域王权国家——二里头国家的出现为界,把东亚大陆的早期文明史划分为两个大的阶段,即以中原为中心的“中原(中国)王朝时代”,和此前政治实体林立的“前中国时代”和“前王朝时代”。

    许宏:考古学视角下的中国诞生史
    郑洛地区龙山时代聚落分布(赵春青 2001)

    值得注意的是,这两大阶段也恰是东亚大陆青铜时代和前青铜时代的分野。

    在二里头时代之前的数百年时间里,东亚大陆的多数区域,早期铜器的使用呈现出红铜、砷铜、青铜并存的状况。铜制品多为器形简单的小件工具和装饰品等生活用具,锻、铸均有,制造工艺处于初级阶段,尚未熟练掌握合金比例。如多位学者已分析指出的那样,东亚大陆用铜遗存的出现,应与接受外来影响关系密切。至于东亚大陆部分区域进入青铜时代的时间,依据最新的年代学研究,要晚到公元前1700年前后了。

    考古学观察到的现象是,出土最早的青铜礼容器的中原地区,也是东亚大陆最早出现广域王权国家的地区。青铜礼器的出现和当时的中原社会,都经历了文化交流中的碰撞与裂变的历程。其同步性引人遐思。二者相互作用刺激,导致中原地区自公元前二千纪上半叶,进入了史上空前的大提速时代。早期中国,由此起步。那么,是青铜礼器及其铸造术,催生了最早的“中国”?

    随着二里头文化在中原的崛起,这支唯一使用复杂的合范技术生产青铜容器(礼器)的先进文化成为跃入中国青铜时代的一匹黑马。值得注意的是,这些青铜礼器只随葬于二里头都邑社会上层的墓葬中,在这个金字塔式的等级社会中,青铜礼器的使用成为处于塔尖的统治阶层身份地位的标志。这些最新问世的祭祀与宫廷礼仪用青铜酒器、乐器,仪仗用青铜武器,以及传统的玉礼器,构成独具中国特色的青铜礼乐文明。“国之大事,在祀与戎”(《左传•成公十三年》)。保有祭祀特权与强大的军力,自古以来就是一个国家立于不败之地的根本。从早期王朝流传下来的祭天崇祖的传统,几千年来一直是中国人宗教信仰和实践的主要内容。二里头都城规划中祭祀区的存在,以及以青铜为主的祭祀用礼仪用器,都与大型礼制建筑一样,是用来昭示早期王朝礼制传统的重要标志物。由于军事力量在立国上的重要性,青铜与玉石兵器也成为祭祀礼器和表现身份地位的仪仗用器的有机组成部分。二里头文化青铜礼器产品的使用范围主要限于二里头都邑的贵族。也就是说,二里头都邑不仅垄断了青铜礼器的生产,也独占了青铜礼器的“消费”即使用权。

    其中,酒器是具有中国特色的酒文化乃至它背后的礼仪制度的重要载体。作为统治阶层身份地位的象征,以酒器为中心的礼器群,成为中国最早的青铜礼器群。从这里,我们可以看出中国古代文明主要是建立在社会关系的巨变(在等级秩序下人际关系的大调整)而非人与自然关系巨变的基础上的。而铸造铜爵等造型复杂的酒器,至少需要精确地组合起内模和3件以上的外范,即当时已采用了先进的复合范工艺。克服其中的种种困难,最终铸造出青铜礼器的内在动力,应当就是这一时期新兴王权对宫廷礼仪的整饬。

    二里头遗址发现的青铜钺,是迄今所知中国最早的青铜钺。钺作为象征军事权威的仪仗用器,也是一种用于“大辟之刑”的刑具。甲骨文金文中“王”字的字形,像横置的钺,在最初应指代秉持斧钺之人即有军事统帅权的首领,随着早期国家的出现,逐渐成为握有最高权力的统治者的称号。早于甲骨文时代数百年的二里头都城中出土的玉石钺,和迄今所知中国最早的青铜钺,就应是已出现的“王权”的又一个重要象征。换言之,钺的礼仪化是中国王朝文明形成与早期发展的一个缩影。

    在早期王朝的礼器群中,爵、钺等器种持续兴盛于三代逾千年,甚至成为后世中国社会政治文化的重要符号,个中原因,颇具深意。

    二里头的聚落变迁

    另一个可资观察的角度是都邑的城郭形态。这一问题上的权威观点是,城墙是构成都城的基本政治要素,不存在没有城墙的都城。通过对以先秦至秦汉时期为中心的都城发展历程的初步考察,笔者认为整个中国古代都城史可以依城郭形态的不同,划分为两个大的阶段,即防御性城郭阶段和礼仪性城郭阶段。在自最早的广域王权国家都邑二里头至曹魏邺城前近两千年的时间里,庞大的都邑不设防,有宫城而无外郭城,是都城空间构造的主流,这一现象可以概括为“大都无城”。在二里头、殷墟、周原、丰镐、洛邑、秦咸阳、西汉长安和东汉洛阳等一系列都邑中有清晰的显现。这与广域王权国家强盛的国势及军事、外交优势,作为“移民城市”的居民成分复杂化,对都城所处自然条件的充分利用等,都有一定的关联。处于都城发展史早期阶段的防御性城郭的实用性,导致城郭的有无取决于政治、军事、地理等诸多因素,“大都无城”的聚落形态应即这一历史背景的产物;而后起的带有贯穿全城的大中轴线、实施里坊制的礼仪性城郭,因同时具有权力层级的象征意义,才开启了汉代以后城、郭兼备的都城发展的新纪元。

    在这一早期中国都邑布局的演变过程中,最令人瞩目的是二里头时代的到来,这是“大都无城”传统的肇始。如上所述,二里头遗址是迄今可以确认的中国最早的具有明确规划的都邑,其布局开中国古代都城规划制度的先河。但在逾半世纪的田野工作中,却一直没有发现圈围起整个二里头都邑聚落的防御设施,仅知在边缘地带分布着不相连属的沟状遗迹,应具有区划的作用。

    二里头遗址地理位置

    如果将二里头时代的聚落形态与更早的龙山时代作比较,可知前者最大的变化,一是中心聚落面积的大幅度提升,由龙山时代的10余至数十余万平方米,扩大至300万平方米;二是基本上摒弃了龙山时代普遍筑城的传统,代之而起的环壕成为这一时代的主流防御设施。

    由对考古材料的分析可知,进入二里头时代,聚落内部社会层级间的区隔得到强化,而与此同时,对外防御设施则相对弱化。从聚落形态的角度看,二里头都邑是“大都无城”的一个最早的典范。究其原因,不能不考虑到都邑内的居民。二里头可能是最早集聚了周边人口的中心城市,其人口由众多小规模的、彼此不相关连的血亲集团所组成,这种特征又与其后的殷墟和西周时代的都邑颇为相近。而广域王权国家则是从二里头时代至西周时代社会结构上的共性。以“大都无城”为主要特征的都邑聚落形态与早期王朝阶段社会结构上的关联性,值得进一步探究。显然,“大都无城”,是前中国时代终结、最早的“中国”初兴的一个重要的标志。

    要之,以二里头时代为界,东亚大陆的国家起源进程呈现出非连续性和多歧性。以良渚、陶寺、石峁文明为代表的龙山时代众多区域性邦国文明,各领风骚数百年,最终退出了历史舞台。它们走完了其生命史的全过程,而与后起的中原青铜文明仅有或多或少的间接关系,这就使东亚大陆的国家起源进程呈现出“连续”中的“断裂”的态势。这是我们把东亚大陆国家起源进程划分为两大阶段的重要依据。

    通观东南良渚的水城、中原陶寺的土城、西北石峁的石城,都是因地制宜、适应环境的产物,它们也的确都是区域性文明;这与“大都无城”的二里头形成了鲜明的对比。它们所拥有的“前铜礼器群”还看不到像以二里头为先导的中原王朝礼器群那样严格的礼仪规制尤其是重酒的礼器组合。而以软实力见长的二里头,显然通过社会与文化的整合具有了“普世”的魅力,在众多族群的膜拜与模仿中扩大了自身的影响,其范围远远超出了中原地区。更为重要的是,它的文明底蕴通过二里岗时代、殷墟时代乃至西周时代王朝间的传承扬弃,成为中国古代文明的主流。

  • 葛剑雄:九州的传说和现实

    虽然把“中国”确定为我们整个国家的名称是到19世纪后期才出现的事情,但中国统一的概念却已经存在了三千多年。甚至在中原的统一国家形成之前,政治家和学者已经纷纷推出了各自的统一蓝图。虽然当时还没有一个君主真正能够统治这片广袤的土地,但“溥(普)天之下,莫非王土”的颂歌却在西周时就已经普遍流传,并且被视为真理而接受。

    不过,这首颂歌的作者(或许不止一个)大概不会想到,这种统一观居然统治了中国二千多年,并且到今天还没有消除它的潜在影响。

    在中国儒家的经典著作《尚书》中有一篇《禹贡》,一开始就写道:“禹铺土,随山刊木,奠高山、大川。”意思是说,在洪水横流以后,大禹一面规划治水,一面根据名山大川的分布重新划定区域。接着列出的九个单位是:冀州、兖州、青州、徐州、扬州、荆州、豫州、梁州、雍州,这就是九州。

    葛剑雄 | 九州的传说和现实
    《禹贡》所描述的九州区域图

    在另一篇《舜典》中,又提到在尧、舜时,“肇十有二州”。“肇”是开始的意思。对这句话,西汉的学者谷永和东汉初的学者班固解释为:在尧的时候遭到洪水,全国被大水分割为十二部分。但东汉末年的马融的说法是:舜在大禹治水之后,从禹所划分的九州中又分出幽州、并州、和营州三个单位,因而总共有了十二个州。这一说法获得后世多数学者的赞同。

    从未实行过的九州制

    由于这些记载都出于儒家经典,又得到后世众多学者的肯定,所以从西汉以来就成为不可动摇的定论,几乎没有人表示怀疑。人们一般都认为,从大禹治水开始就有了九州这样的政区,以后又演变为十二州。直到现在,一些在叙述一个地方行政区域的历史时,往往还要从九州讲起,似乎这是历史事实。

    由于全国就分为九州,所以九州又一直被当作全国、“天下”的代名词。如南宋诗人陆游《示儿》诗中的名句“死去原知万事空,但悲不见九州同”,就是取这样的用意;晚清诗人龚自珍“九州生气恃风雷”一句也是如此。

    五四运动以后,学者们向儒家经典提出了挑战。经过反复的争论和研究,历史学界已经把这传统的九州说推翻了。原来,《禹贡》中的记载并不是历史事实,九州也不是中国最早的行政区划。

    《禹贡》虽然托名为大禹所作,其实却是战国后期人的作品。具体的证据很多,最主要的理由是《禹贡》中所记的不少地理状况都是战国时的现象,有的地名和水名甚至要到战国后期才出现,如果真是大禹所作,他岂能未卜先知?而且在《尚书》各篇中,《禹贡》的语言照理应比出现在它以后的《盘庚》(记录商朝中期的君主盘庚迁都事)等篇难懂,事实恰恰相反;这也只能说明《禹贡》问世的时间较晚。

    《禹贡》所讲的内容不符合历史事实,至多只有传说的价值。到目前为止的考古发掘和研究的成果,还只能证实商朝的历史。近年来在河南等地发现的一些文化址,一些学者认为就是属于夏朝。如果这一观点得到进一步的证明和普遍的承认,那末夏朝的主要统治区应该在今河南一带,与文献记载传说中的夏都不超出今山西南部、山东西部和河南的范围是一致的。而《禹贡》所叙述的九州的范围,北至燕山山脉和渤海湾,南至南岭一带,西至陇东高原;至于具体涉及的内容更广;当然不可能是夏朝的事实。

    现有的研究成果足以证明,不仅传说中的大禹时代还不可能有什么行政区划,就是商朝和更后的西周时代也还没有出现行政区划。既然《禹贡》是战国后期的产物,那么九州制是不是当时的制度呢?也不是。大家知道,到战国后期,周天子的权力早已荡然无存,而秦始皇还没有统一六国,七个主要的诸侯国各自为政,又有谁有这样的权威能制定并且实行包括各国的疆域在内的行政区划呢?

    可见,九州制只是当时学者对未来统一国家的一种规划,反映了他们一种政治理想。

    秦始皇在全国推行了郡县制,却没有在郡以上设立州。到了公元前二世末,也就是在《禹贡》问世的一二百年以后的西汉元封五年(前106年),汉武帝将全国除首都附近的七个郡级单位以外政区分属于十三部,即豫州、兖州、青州、徐州、冀州、幽州、并州、凉州、益州、荆州、扬州、交趾、朔方;每部设刺史一人,负责巡察境内的地方官和豪强地主;称为十三刺史部, 简称十三部或十三州。但那时的州还是一种监察区,而且这十一个以州命名的单位中没有《禹贡》九州中的梁州和雍州,增加了凉州、益州、并州和幽州。在公元1世纪后的东汉,州才成为最高一级的行政区域。朔方并入了并州,加上管辖首都一带的司隶校尉部,总数仍为十三。由于交趾改称交州,以州命名的单位就有了十二个,也不是九个。东汉末年曹操曾想按九州来重划政区,却没有成功;从此再也没有人作过这样的尝试。从这一角度来讲,九州从来没有成为中国的现实。

    胎死腹中的五服制

    在《禹贡》中还记载了一种“五服”制:五百里甸服,五百里侯服,五百里绥服,五百里要服,五百里荒服。

    根据这样一种国家模式,在王居住的京城往外,第一等是甸服(以农业为主的直接统治区),第二等是侯服(诸侯统治区),第三等是绥服(必须加以绥抚的地区),第四等是要服(边远地区),第五等是荒服(蛮荒地区)。

    葛剑雄 | 九州的传说和现实
    五服图

    如果说,九州制因为是以名山大川为主要界限,所以还能使人相信为实际行政区域的话,五服制这样四四方方二千五百里的划分就难以自圆其说了。连宋代的儒家学者蔡沈在给《尚书》作注释时也不得不指出:“尧的都城在冀州,冀州的北界在今河北北部和内蒙古南部,恐怕不会有二千五百里。即使算到这么远,也都是沙漠不毛之地了。而东南最富庶的地区反而被列入要服和荒服(离冀州一千五百至二千五百里),根据地势来考察,简直弄不明白是怎么回事!”

    但是五服制中有一点却反映了这样一个事实:在生产力低下、运输相当困难的情况下,王(天子)对臣民的贡品的征收不得不随距离的远近而改变。例如在天子直属区“五百里甸服”的范围内就规定了五种不同的纳贡标准:一百里内割下来的作物连穗带秆起交,二百里内只交谷穗,三百里内交谷子,四百里内交粗米,五百里内交精米。实际实行的制度虽不可能如此刻板,但运输能力显然是必须考虑的因素。

    九州制是对未来的设想,五服制却是对过去的理想化。因为在西周和以前虽然采用类似的分等级统治体制,却并没把每一等级固定为五百里,实际上也不存在这样的可能。所以五服制虽见于《禹贡》,却从来没有哪一个君主或政治家有意实行过,只能胎死腹中。

    大九州说

    正因为九州制仅仅是一种理想,所以在《禹贡》问世以后,还出现了另外几种九州的方案,如《周礼》(也是托名周朝制度的著作)中的《职方》、《尔雅》中的《释地》和《吕氏春秋》中的《有始览》都提出了自己的九州规划,各州名称与《禹贡》不尽相同,划分的范围也有所差异。

    战国时齐国学者邹衍又提出了他的大九州学说,大意是这样的(今译):儒家所谓的中国,不过只有天下的八十一分之一。中国的名称叫赤县神州,内部有九个州,就是大禹划定的,但这还不能算是真正的州。在中国之外像赤县神州这样的单位共有九个,这才是九州。在九州的周围有大海包围,人类和动物都无法来往。这样的九州合起来又是一个州,像这样的单位也有九个,在它们的周围有更大的海洋包围着,这就到了天地的边缘。

    这种学说与其说是对外部世界的了解,还不如说是出于臆想和推理。比起那种中国就等于天下,除了中国(实际上只是中原)之外就没有文明社会的观点来,大九州学说高明地承认了还存在着不止一个同样发达的人类社会。但恰恰在这一点上又作了实际上的自我否定:由于各州之间都由无边无际的大海阻隔,人民禽兽是无法来往的。所以这种存在只具有理论和思辨上的意义,而不是对中国有影响的现实。

    中原和华夏

    无论是九州的设想,还是大九州的学说,出现在战国后期都不是偶然的。

    《禹贡》所描述的地理范围已经相当广大,涉及今天中国内地的绝大部分。要具备这样丰富的地理知识,活动范围只限于黄河中下游的夏人、商人和西周人是办不到的。而在战国后期,秦、楚、齐、燕、韩、赵、魏这七个主要诸侯国的疆域已经达到了这样的范围,在互相的交流中,各国的学者就可能掌握这些地理知识。《禹贡》中还记录了各地的农业生产条件,如土壤的类型、土地的等级、水文状况等;应纳贡赋的等级和物产等;都是经济发展达到一定水准的反映。例如梁州的贡物中有铁和镂,镂就是钢。如果没有冶金技术的进步,学者的想像力再丰富,也不可能把这种品种载入著作中。

    在七国的竞争中,尽管鹿死谁手还没有最终明朗,但统一已是大势所趋。秦国变得越来越强大,在错综复杂的形势中明显处于主导地位。一些有远见的知识分子纷纷投向秦国,并为秦国战胜其他六国,完成统一事业出谋划策,也为统一后的未来规划蓝图。多数研究者认为《禹贡》是秦国学者的作品,就考虑到这个因素。

    在经过战争、吞并和融合之后,华夏族已经成为黄河流域乃至东亚大陆人数最多、经济文化最发达、实力最强的民族,占据了当时地理条件最优越的地区。而非华夏民族则被迫迁出了黄河流域,或者逐步融入了华夏族,或者接受了华夏文化并以华夏的一支自居。在蒙古高原、青藏高原、长江流域及其以南和大陆附近的茫茫海洋上,还不存在在总体上能与之匹敌的其他民族和政权,而对此范围之外的情况,虽然人们不至于一无所知(例如穿越河西走廊至中亚的陆上交通线和通向东南亚的海上交通线可能已经存在),但肯定相当有限。

    然而随着境外玉石、珠宝、香料等珍奇异物的流入和亲历者见闻的传播,以中原为中心的观念不能不有所动摇。根据九州的理论,中原是文明的中心,九州是文明的范围,但这些珍异并不产在九州,而是来自“非我族类”的夷狄之邦;莫非那里存在着比中原更高的文明?国君、贵族和上层人士享用着来自境外的珍奇,却从不承认会有文明程度超过自己的社会,于是西方的昆仑山、西王母、瑶池和东方的海上神山一类神话便合适地弥补了这一漏洞——原来在中国之外的确存在着一个可望而不可及的神灵世界。但这丝毫不会动摇中国的中心地位,因为西王母尽管伟大,昆仑山尽管崇高,蓬莱尽管奇妙,却都属于神仙的体系,而除了神仙之外,境外就只是一片早期愚昧落后的混沌世界。

    可以认为:在战国时期形成的统一观,是以华夏族(汉族的前身)为主干、以黄河中下游平原地区为中心的,是一种封闭的观念。

    本文摘自《昔日的天下观》《统一与分裂:中国历史的启示》商务印书馆2013年版

  • 《中华人民共和国黑龙江省牡丹江市中级人民法院公告》(2024.10.11)

    《人民法院报》2024年10月11日公告专版刊发黑龙江省牡丹江市中级人民法院公告,公告显示,曾任鸡西市副市长、鹤岗市副市长的李传良涉嫌贪污罪、受贿罪、挪用公款罪、滥用职权罪,案发后,扣押、冻结资金共计人民币140987.522529万元、查封1021处房产、查封土地、滩涂27宗、查封林地8宗、扣押汽车38辆、扣押机械设备10台(套),冻结18家公司股权。公告如下:

    中华人民共和国黑龙江省牡丹江市中级人民法院公告

    黑龙江省牡丹江市人民检察院没收犯罪嫌疑人李传良违法所得申请一案,本院经审查认为,有证据证明犯罪嫌疑人李传良实施了贪污、受贿、挪用公款、滥用职权犯罪,犯罪地在黑龙江省鸡西市,经黑龙江省高级人民法院、黑龙江省人民检察院指定,将没收犯罪嫌疑人李传良违法所得申请一案交由牡丹江市人民检察院申请,牡丹江市中级人民法院审判,依照《中华人民共和国刑事诉讼法》第二百九十九条之规定,于2024年9月29日立案受理。

    现予公告:

    一、犯罪嫌疑人的基本情况犯罪嫌疑人李传良,男,1963年9月27日出生于黑龙江省鸡西市,公民身份号码23030419630927423X,汉族,硕士研究生文化,鸡西市财政局原局长、鸡西市国有资产监督管理委员会办公室原主任、鸡西市原副市长、鹤岗市原副市长,户籍地黑龙江省哈尔滨市南岗区联部街47号2栋12层2号。因涉嫌犯贪污罪、受贿罪、挪用公款罪、滥用职权罪于2020年7月10日被黑龙江省监察委员会立案调查,同年9月20日被黑龙江省人民检察院批准逮捕。李传良于2018年11月15日逃匿境外,2020年12月1日黑龙江省公安厅对其发布通缉令。

    二、检察院申请内容牡丹江市人民检察院牡检没申〔2024〕1号没收违法所得申请书载明:犯罪嫌疑人李传良在担任鸡西市财政局局长、鸡西市国有资产管理委员会办公室主任、鸡西市副市长、鹤岗市副市长期间及辞去公职后,利用职务上的便利以及伙同其他国家工作人员,利用其他国家工作人员的职务便利,侵吞、骗取公共财物共计人民币292586.011967万元;利用职务上的便利,为他人谋取利益,以及利用职权或者地位形成的便利条件,通过其他国家工作人员职务上的行为,为他人谋取不正当利益,非法收受他人财物共计人民币4892.1128万元;利用职务上的便利,挪用公款共计人民币11000万元,进行营利活动;利用职务上的便利,擅自使用国有资金注册公司、擅自决定由其实际控制的公司承揽工程,违法所得及收益共计人民币7325.185136万元。犯罪嫌疑人李传良使用上述违法所得投入到其个人实际控制的公司、项目中,用于土地一级开发整理、房产开发、工程建设等以及购买房产、车辆、土地、设备等,案发后扣押、冻结资金共计人民币140987.522529万元、查封1021处房产、查封土地、滩涂27宗、查封林地8宗、扣押汽车38辆、扣押机械设备10台(套),冻结18家公司股权。(各类财产详细情况见附件清单)牡丹江市人民检察院认为,犯罪嫌疑人李传良涉嫌贪污罪、受贿罪、挪用公款罪、滥用职权罪,逃匿后被通缉一年不能到案。有证据证明前述在境内被查封、扣押、冻结的财产属于犯罪嫌疑人李传良的违法所得及收益,依法应予以追缴。依照《中华人民共和国刑事诉讼法》第二百九十八条之规定,提出没收违法所得的申请。

    三、利害关系人权利、义务犯罪嫌疑人李传良的近亲属和其他利害关系人在公告期间可以书面形式向本院申请参加诉讼,也可以委托诉讼代理人参加诉讼;李传良的近亲属申请参加诉讼,应当提供与李传良关系的证明材料;其他利害关系人申请参加诉讼,应当提供对申请没收的财产享有权利的证据材料。如不按规定申报权利,可能承担不利法律后果。

    四、公告期间本公告期间为六个月。公告期满后,本院将依法审理。联系人:杨柏苓蒋利龙通讯地址:中华人民共和国黑龙江省牡丹江市西安区西三条路339号黑龙江省牡丹江市中级人民法院。联系电话:0453-6377062邮编:157000

    附件:黑龙江省牡丹江市人民检察院申请没收财产清单

    一、资金

    1.张亚凤上交的扣押在案的资金及利息。

    2.姜伟上交的扣押在案的资金及利息。

    3.赵丽艳上交的扣押在案的资金及利息。

    4.于颖上交的扣押在案的资金及利息。

    5.鸡西市鸡煤专社保服务中心上交的扣押在案的资金及利息。

    6.鸡西市金源物业管理有限公司上交的扣押在案的资金及利息。

    7.李丽凡上交的扣押在案的资金及利息。

    8.宋雨微上交的扣押在案的资金及利息。

    9.鸡西业丰煤矿机械设备制造有限公司上交的扣押在案的资金及利息。

    10.黑龙江省三强建筑工程有限公司(鸡西市保障性安居工程基础配套设施项目部)上交的扣押在案的资金及利息。

    11.黑龙江创恒建筑工程有限公司(鸡西市保障性安居工程基础配套设施项目部)上交的扣押在案的资金及利息。

    12.黑龙江康程招标代理有限公司上交的扣押在案的资金及利息。

    13.鸡西市保障性安居工程建设中心在黑龙江鸡西农业商业银行股份有限公司账户的资金及利息。(冻结账号为730010122000104484)

    14.鸡西赫阳燃气有限公司上交的扣押在案的资金及利息。

    15.黑龙江省鹏通煤层气开发有限公司上交的扣押在案的资金及利息。

    16.沈阳焦煤鸡西盛隆矿业有限责任公司上交的扣押在案的资金及利息。

    17.黑龙江沈矿物流有限公司上交的扣押在案的资金及利息。

    18.黑龙江沈煤林木开发有限公司上交的扣押在案的资金及利息。

    19.黑龙江沈矿瓦斯发电有限公司上交的扣押在案的资金及利息。

    20.李克峰上交的扣押在案的资金及利息。

    21.崔立新上交的扣押在案的资金及利息。

    22.王明秋上交的扣押在案的资金及利息。

    23.董玉玲上交的扣押在案的资金及利息。

    24.卢井芳上交的扣押在案的资金及利息。

    25.孔令宝上交的扣押在案的资金及利息。

    26.杨君上交的扣押在案的资金及利息。

    27.刘智宏上交的扣押在案的资金及利息。

    28.解伟山上交的扣押在案的资金及利息。

    29.陈磊上交的扣押在案的资金及利息。

    30.刘洪生上交的扣押在案的资金及利息。

    31.刘德在中国银行股份有限公司账户的资金及利息。(冻结账号为170248459042)

    32.朱玉杰上交的扣押在案的资金及利息。

    33.鸡西金色农业科技有限公司上交的扣押在案的资金及利息。

    34.鸡西三元机械制造有限公司上交的扣押在案的资金及利息。

    35.鸡西元通城市燃气投资有限公司上交的扣押在案的资金及利息。

    36.黑龙江省华诚建筑安装工程有限公司上交的扣押在案的资金及利息。

    37.鸡西新能供热有限公司上交的扣押在案的资金及利息。

    38.鸡西市产权交易服务中心上交的扣押在案的资金及利息。

    39.黑龙江北唐煤矿量费监控系统工程开发有限公司上交的扣押在案的资金及利息。

    40.黑龙江绅港能源开发有限公司上交的扣押在案的资金及利息。

    41.刘立红上交的扣押在案的资金及利息。

    42.黑龙江亚润建筑工程有限公司上交的扣押在案的资金及利息。

    43.鸡西市鸿淦房地产开发有限公司上交的扣押在案的资金及利息。

    44.黑龙江安泰矿产开发有限公司上交的扣押在案的资金及利息。

    45.刘鸿雁上交的扣押在案的资金及利息。

    46.鸡西市国有企业留守处管理中心上交的扣押在案的资金及利息。

    47.鸡西市消防培训中心上交的扣押在案的的资金及利息。

    48.鸡西市矿山森林消防抢险救援大队上交的扣押在案的资金及利息。

    49.黑龙江省天源煤炭股份有限公司上交的扣押在案的资金及利息。

    50.黑龙江优丰农业开发有限公司上交的扣押在案的资金及利息。

    51.鸡西隆衡房地产开发有限公司上交的扣押在案的资金及利息。

    52.鸡西市矿山森林消防抢险救援训练中心上交的扣押在案的资金及利息。

    53.黑龙江农垦正基房地产开发有限公司上交的扣押在案的资金及利息。

    54.赵伟上交的扣押在案的资金及利息。

    55.鸡西华誉农工贸有限责任公司鸡西建筑分公司上交的扣押在案的资金及利息。

    56.孙德清上交的扣押在案的资金及利息。

    57.陈长文上交的扣押在案的资金及利息。

    58.徐玉国上交的扣押在案的资金及利息。

    59.哈尔滨市滨港投资有限公司在中国农业银行股份有限公司的资金及利息。(冻结账号为08064101040002809)

    60.北京泛华置业有限公司上交的扣押在案的资金及利息。

    61.李维上交的扣押在案的资金及利息。

    62.陈红博在中国交通银行股份有限公司账户、中国银行股份有限公司账户的资金及利息。(冻结账号为中国交通银行股份有限公司6222620910009410840、6222620910009792866账户;中国银行股份有限公司168990504834账户)

    63.吴亮靓上交的扣押在案的资金及利息。

    64.大庆百世环保科技开发有限公司上交的扣押在案的资金及利息。

    65.鸡西市宇晨房地产中介有限公司上交的扣押在案的资金及利息。

    66.徐艳华上交的扣押在案的资金及利息。

    67.黑龙江省华诚建筑安装工程有限公司鸡西分公司上交的扣押在案的资金及利息。

    68.黑龙江泛华物流产业园投资管理有限公司上交的扣押在案的资金及利息。

    69.哈尔滨市融达路桥工程有限公司鸡西分公司上交的扣押在案的资金及利息。

    70.鸡西百盛苗木繁育有限公司在龙江银行股份有限公司的资金及利息。(冻结账号为31090120000000026)

    71.罗云兵上交的扣押在案的资金及利息。

    72.鸡西泛华城市建设投资有限公司上交的扣押在案的资金及利息。

    73.黑龙江百世金融产业园管理有限公司上交的扣押在案的资金及利息。

    74.吴柏年上交的扣押在案的资金及利息。

    75.泛华北方投资管理(北京)有限公司上交的扣押在案的资金及利息。

    76.北京福瑞祥达建筑工程有限公司在中国建设银行股份有限公司账户的资金及利息。(冻结账号为1100107060005303147)

    77.黑龙江顺城投资有限公司上交的扣押在案的资金及利息。

    78.黑龙江同亨投资有限公司上交的扣押在案的资金及利息。

    79.黑龙江沈矿瓦斯发电有限公司梨树分公司上交的扣押在案的资金及利息。

    80.李明上交的扣押在案的资金及利息。

    81.刘玉松上交的扣押在案的资金及利息。

    82.任立恒上交的扣押在案的资金及利息。

    83.赵国英上交的扣押在案的资金及利息。

    84.黑龙江龙远房地产开发有限责任公司上交的扣押在案的资金及利息。

    85.董凤珍上交的扣押在案的资金及利息。

    86.黑龙江省业丰投资管理有限公司上交的扣押在案的资金及利息。

    87.鸡西阔远房地产开发有限公司上交的扣押在案的资金及利息。

    88.黑龙江正麒房地产开发公司上交的扣押在案的资金及利息。

    89.陶胜强上交的扣押在案的资金及利息。

    90.张欣上交的扣押在案的资金及利息。

    91.黑龙江省镝森房地产开发有限责任公司上交的扣押在案的资金及利息。

    92.黑龙江省龙城专用车有限公司在中国农业银行有限公司账户的资金及利息。(冻结账号为08700201040013218)

    93.鸡西市财务会计继续再教育中心上交的扣押在案的资金及利息。

    94.胡桂芝上交的扣押在案的资金及利息。

    二、房产

    1.鸡西市鸡煤机社保服务中心有限公司名下鸡西市中心塔小区一组团二期A﹢B栋转角楼-门市(12)-10门市、中心塔小区一组团-地下室-2门市、中心塔小区一组团二期A﹢B栋转角楼13号4层(办公室)、中心塔小区一组团二期工程C座-车库-7、中心塔小区一组团二期工程转角楼-门市-11、二期工程转角楼门市12,地址:鸡西市鸡冠区中心塔小区。其名下鸡西市南山办花园2-门市-1,地址:鸡西市鸡冠区电工路南山办。

    2.鸡西市鸡冠区伟沟净水设备经销处名下鸡西市鸡冠区南山一组团B座0-00201号门市、C座0-00101号门市、C座0-00102号门市、C座0-00103号门市、C座000104号门市、C座0-00105号门市、D座0-00101号门市、D座0-00102号门市、D座0-00103号门市,地址:鸡西市鸡冠区电台路南山小区一组团;其名下黄楼浴池,地址:鸡西市鸡冠区中心大街煤机厂幼儿园后黄楼浴池;其名下中心塔小区二组团1-门市(1-2)-20号、二期工程C-6号车库,地址:鸡西市鸡冠区中心塔小区;其名下鸡西市东山小区东山阳光家园17-1-9号门市,地址:鸡西市鸡冠区东山阳光家园安置小区;其名下鸡西市东山小区北山11-11号门市,地址:鸡西市鸡冠区向阳办。

    3.王宗健名下鸡西市中心塔小区二组团1号楼5单元71号住宅(产权证号S200813881)、1号楼5单元8-1号住宅(产权证号S200813952)、1号楼5单元4-2号住宅(产权证号S200813892)、1号楼5单元5-2号住宅(产权证号S200813880)、1号楼5单元6-2号住宅(产权证号S200813943)、1号楼5单元7-2号住宅(产权证号S200813885)、1号楼5单元8-2号住宅(产权证号S200813951),地址:鸡西市中心塔小区。

    4.马奎武名下鸡西市中心塔小区二组团1号楼4单元7-1号住宅(产权证号S200813886)、1号楼4单元8-1号住宅(产权证号S200813889)、1号楼4单元4-2号住宅(产权证号S200813882)、1号楼4单元5-2号住宅(产权证号S200813883)、1号楼4单元6-2号住宅(产权证号S200813884)、1号楼4单元7-2号住宅(产权证号S200813888)、1号楼4单元8-2号住宅(产权证号S200813891),地址:鸡西市鸡冠区中心塔小区。

    5.鸡西神龙煤矿机械有限公司名下鸡西市中心塔小区一组团一期(1-2)-2号门市、一组团一期(1-2)-3号门市、一组团一期(1-2)-6号门市、一组团一期(1-2)-14号门市、二组团1-13号门市、二组团1-14号门市、二组团1-15号门市、二组团1-16号门市、二组团1-17号门市、二组团1-18号门市、二组团1-19号门市、二组团1-10号门市,地址:鸡西市鸡冠区中心塔小区;其名下鸡西市向阳办黄楼13-3号门市、向阳办黄楼13-5号门市、向阳办黄楼13-6号门市、向阳办黄楼14-1号门市,地址:鸡西市鸡冠区向阳办黄楼;其名下鸡西市开元综合楼门市13号门市,地址:鸡西市鸡冠区开元综合楼;其名下鸡西市东风小区6号楼-11门市,地址:鸡西市鸡冠区向阳办。

    6.鸡西市湖泊湿地保护研究中心名下密山市兴凯湖乡湖岗的木屋2栋,产权证号:013030211(包含沐沁舍木屋2处、井房、锅炉房、宿舍),产权证号:013030212(包含湖边别墅、宿舍车库、门卫),地址:密山市兴凯湖湖西检查站后院。

    7.鸡西市兴凯湖大白鱼养殖繁育推广中心名下密山市兴凯湖水产养殖场鲤鱼港分场的木屋,鸡房权证密建字第017020040号,地址:密山市兴凯湖湖岗鲤鱼港东侧;其名下密山市兴凯湖水产养殖场鲤鱼港家属区房产,鸡房权证密建字第017020041号(包含库房、车库、办公楼、养殖房、门卫、一层别墅、二层别墅、三层别墅),地址:密山市兴凯湖湖岗鲤鱼港东侧。

    8.黑龙江省镝森房地产开发有限责任公司名下鸡西市福地洞天小区一期9号楼5号门市、7号楼17号车库、7号楼18号车库、7号楼19号车库、7号楼20号车库,地址:鸡西市鸡冠区福地洞天小区。

    9.鸡西泛华城市建设投资有限公司名下鸡西市鸡冠新城公共租赁住房项目3号楼1号车库、3号楼2号车库、3号楼3号车库、3号楼4号车库、3号楼5号车库、3号楼6号车库、3号楼7号车库、3号楼8号车库、3号楼9号车库、3号楼10号车库、3号楼11号车库、3号楼12号车库、3号楼13号车库、3号楼14号车库、3号楼15号车库、3号楼16号车库、4号楼1号车库、4号楼2号车库、4号楼3号车库、4号楼4号车库、4号楼5号车库、4号楼6号车库、4号楼7号车库、4号楼8号车库、4号楼9号车库、4号楼10号车库、4号楼11号车库、4号楼12号车库、4号楼13号车库、4号楼14号车库、4号楼15号车库、4号楼16号车库、5号楼1号车库、5号楼2号车库、5号楼3号车库、5号楼4号车库、5号楼5号车库、5号楼6号车库、5号楼7号车库、5号楼8号车库、5号楼9号车库、5号楼10号车库、5号楼11号车库、5号楼12号车库、5号楼13号车库、5号楼14号车库、5号楼15号车库、5号楼16号车库、6号楼1号车库、6号楼2号车库、6号楼3号车库、6号楼4号车库、6号楼5号车库、6号楼6号车库、6号楼7号车库、6号楼8号车库、6号楼9号车库、6号楼10号车库、6号楼11号车库、6号楼12号车库、6号楼13号车库、6号楼14号车库、7号楼1号车库、7号楼2号车库、7号楼3号车库、7号楼4号车库、7号楼5号车库、7号楼6号车库、7号楼7号车库、7号楼8号车库、7号楼9号车库、7号楼10号车库、7号楼11号车库、7号楼12号车库、7号楼13号车库、7号楼14号车库,地址:鸡冠区红星乡加油站西侧、南环路北侧;其名下鸡西泛华城市建设投资有限公司泛华创业大厦、4栋钢结构厂房,地址:鸡西市鸡冠区鸡恒路66号;其名下鸡西市鸡冠区红星乡朝阳村房产环境综合整治项目(环卫车库)1栋,地址:鸡西市鸡冠区红军办广益8-办公楼。

    10.北京泛华置业有限公司名下鸡西市鸡冠区松林小区一期1号楼1单元603住宅、一期2号楼1单元602住宅、一期2号楼2单元602住宅、一期2号楼3单元602住宅、一期2号楼5单元602住宅、一期3号楼2单元602住宅、一期3号楼1单元202住宅、一期4号楼2单元402住宅、一期4号楼2单元502住宅、一期5号楼1单元502住宅、一期5号楼2单元502住宅、一期5号楼2单元602住宅、一期6号楼1单元502住宅、一期6号楼2单元502住宅、一期6号楼3单元602住宅、一期6号楼4单元502住宅、一期7号楼2单元502住宅、一期7号楼2单元602住宅、二期9号楼1单元602住宅、二期10号楼1单元602住宅、一期1号楼2号门市、一期1号楼3号门市、一期1号楼4号门市、一期1号楼5号门市、一期1号楼6号门市、一期2号楼1号门市、一期2号楼2号门市、一期2号楼3号门市、一期2号楼4号门市、一期2号楼5号门市、一期2号楼6号门市、一期2号楼7号门市、一期3号楼2号门市、一期3号楼3号门市、一期3号楼4号门市、一期3号楼5号门市、一期4号楼1号门市、一期4号楼2号门市、一期4号楼3号门市、二期8号楼1号门市、二期8号楼2号门市、二期8号楼3号门市、二期8号楼4号门市、二期8号楼5号门市、二期8号楼6号门市、二期9号楼1号门市、二期9号楼2号门市、二期9号楼3号门市、二期9号楼4号门市、二期9号楼5号门市、二期9号楼6号门市、二期9号楼7号门市、二期9号楼8号门市、二期10号楼1号门市、二期10号楼2号门市、二期10号楼3号门市、二期10号楼4号门市、二期10号楼5号门市、二期10号楼6号门市、二期10号楼7号门市、二期11号楼1号门市、二期11号楼2号门市、二期11号楼3号门市、二期11号楼4号门市、一期1号楼1号车库、一期1号楼2号车库、一期1号楼3号车库、一期1号楼4号车库、一期1号楼5号车库、一期1号楼6号车库、一期1号楼7号车库、一期2号楼1号车库、一期2号楼2号车库、一期2号楼3号车库、一期2号楼4号车库、一期2号楼5号车库、一期2号楼6号车库、一期2号楼7号车库、一期2号楼8号车库、一期2号楼9号车库、一期2号楼10号车库、一期2号楼11号车库、一期2号楼12号车库、一期2号楼13号车库、一期2号楼14号车库、一期2号楼15号车库、一期2号楼16号车库、一期2号楼17号车库、一期2号楼18号车库、一期2号楼19号车库、一期2号楼20号车库、一期3号楼2号车库、一期3号楼3号车库、一期3号楼4号车库、一期3号楼5号车库、一期3号楼6号车库、一期3号楼7号车库、一期3号楼8号车库、一期4号楼1号车库、一期4号楼2号车库、一期4号楼3号车库、一期4号楼4号车库、一期4号楼5号车库、一期4号楼6号车库、一期4号楼7号车库、一期4号楼8号车库、一期5号楼1号车库、一期5号楼2号车库、一期5号楼3号车库、一期5号楼4号车库、一期5号楼5号车库、一期5号楼6号车库、一期5号楼7号车库、一期5号楼8号车库、一期5号楼10号车库、一期6号楼1号车库、一期6号楼2号车库、一期6号楼3号车库、一期6号楼5号车库、一期6号楼6号车库、一期6号楼7号车库、一期6号楼8号车库、一期6号楼9号车库、一期6号楼10号车库、二期8号楼1号车库、二期8号楼2号车库、二期8号楼3号车库、二期8号楼4号车库、二期8号楼5号车库、二期8号楼6号车库、二期8号楼7号车库、二期8号楼8号车库、二期8号楼9号车库、二期8号楼10号车库、二期8号楼11号车库、二期8号楼12号车库,地址:鸡西市鸡冠区松林小区。

    11.鸡西市鸿淦房地产开发有限公司名下鸡西市柳盛馨园小区9号楼2单元601住宅、8号楼5单元203住宅、1号楼4号车库、2号楼6号车库、2号楼7号车库、2号楼13号车库、2号楼16号车库、2号楼17号车库、3号楼17号车库、3号楼18号车库、3号楼19号车库、5号楼5号车库、5号楼7号车库、7号楼2号车库、7号楼3号车库、7号楼4号车库、7号楼5号车库、7号楼6号车库、7号楼7号车库、7号楼8号车库、8号楼9号车库、8号楼14号车库、8号楼15号车库、8号楼22号车库、9号楼7号车库、9号楼9号车库、10号楼1号车库、10号楼4号车库、11号楼3号车库、11号楼4号车库、11号楼5号车库、12号楼9号车库、12号楼21号车库、14号楼4号车库、14号楼5号车库、14号楼6号车库、14号楼7号车库、14号楼8号车库、14号楼9号车库、14号楼14号车库、14号楼15号车库、14号楼16号车库、14号楼17号车库、14号楼18号车库、14号楼19号车库、15号楼3号车库、15号楼4号车库、15号楼6号车库、15号楼7号车库、15号楼8号车库、15号楼9号车库、15号楼10号车库、15号楼12号车库、15号楼13号车库、15号楼16号车库、15号楼17号车库、16号楼18号车库、16号楼22号车库、16号楼24号车库、17号楼3号车库、17号楼7号车库、17号楼11号车库、17号楼12号车库、17号楼18号车库、17号楼19号车库、17号楼21号车库、17号楼20号车库、17号楼26号车库、17号楼29号车库、18号楼1号车库、18号楼11号车库、18号楼12号车库、20号楼14号车库、22号楼6号车库、22号楼8号车库、22号楼9号车库、22号楼10号车库、22号楼12号车库、22号楼17号车库、22号楼18号车库、22号楼19号车库、商务会馆11号车库、9-3车库、9-4车库,地址:鸡西市鸡冠区腾飞路北柳浪街东柳盛馨园小区。

    12.鸡西市兴凯湖国际大酒店有限公司名下鸡西市鸡冠区中心大街37号国际经贸大厦,产权证号S201501270,地址:鸡西市鸡冠区中心大街37号大厦。

    13.鸡西市产权交易服务中心名下鸡西市鸡冠区建安街东、技师学院北校区北侧残疾人综合服务中心综合楼1栋,鸡冠房字第S201408620,地址:鸡西市鸡冠区建安街东、技师学院北校区北侧残疾人综合服务中心综合楼,鸡西市残疾人联合会2号楼。

    14.王洋名下北京市昌平区定泗路88号北七家镇羊各庄世纪星城住宅小区二期(一区)0151号1层0101别墅,产权证号:X京房权证昌字第583662号,地址:北京市昌平区定泗路88号北七家镇羊各庄世纪星城住宅小区。

    15.杨桂芝名下三亚市鲁能三亚湾度假区高一区B14栋,产权证号三土房(2014)字第09996号,地址:三亚市鲁能三亚湾度假区高一别墅区B14栋。

    16.张亚杰名下三亚市凤翔路鲁能三亚湾美丽城1区1期2栋1单元1A号住宅,产权证号琼(2019)三亚市不动产权第0007112号,地址:海南省三亚市鲁能三亚湾美丽城区一期二栋一单元1A号房。

    17.金思江名下三亚市凤翔路鲁能三亚湾美丽城1区1期2栋1单元2A号住宅,产权证号三土房(2014)第09783号,地址:海南省三亚市鲁能三亚湾美丽城区一期二栋一单元2A号房。

    18.鸡西元通城市燃气投资有限公司名下鸡西市红胜花园小区B栋1101、B栋1-102、B栋1-202。地址:鸡西市鸡冠区学府街西,涌新路南;其名下鸡西市红胜花园小区C4号楼-7号车库、C4号楼-8号车库、C4号楼-9号车库、C4号楼-10号车库、C4号楼-11号车库。地址:鸡西市鸡冠区学府街西,涌新路南;其名下鸡西市唯美新城一期2-4-202室、一期3-1-201室、唯美新城二期2-3-201室、二期2-3301室、二期3-1-401室、唯美新城11-3-402室、8-11403室,鸡西市唯美新城三期16号楼1号门市,地址:鸡冠区新区建工街与涌新路交汇处唯美新城小区。

    19.黑龙江泛华物流产业园投资管理有限公司名下鸡西泛华物流园区B、C厂房,地址:鸡西市鸡冠区腾飞路北、柳浪街东;其名下鸡西泛华物流园区信息交易综合楼、发电机房、门卫室、零担用房、快递分拣中心、零担库房、仓储库房等23处房产和厂房,地址:鸡西市鸡冠区腾飞路418号物流园区。

    20.梁焕名下哈尔滨市南岗区大成街140号龙电花园H栋11层2号,产权证号:1401083332,地址:哈尔滨市南岗区大成街140号。

    21.赵成芳(又名赵成方)名下鸡西市城子河区永丰乡新兴村房产1处,二层楼房1栋,产权证号S160843号、S160844号,地址:鸡西市城子河区永丰乡新兴村。

    22.鸡西金色农业科技有限公司名下5个日光棚、1个生态餐厅、7个温室大棚以及8处房屋(产权证号C201400561号至C201400568号);地址:鸡西市城子河区长青乡良种场。鸡西市东风办教育学院住宅-6-(1-5)层,产权证号S201506763,地址:鸡西市东风办教育学院住宅-6-(1-5)层。

    23.黑龙江北唐煤矿量费监控系统工程开发有限公司名下鸡西市北唐煤矿量费监控系统工程开发有限公司厂房及办公楼,地址:鸡西市鸡恒路东太村南。

    24.鸡西隆衡房地产开发有限公司名下鸡西市信合大厦二单元1101、二单元1102、二单元2401、二单元2402、三单元1501、三单元1502、三单元1901、三单元1902、三单元2001、四单元1102、一单元1102、一单元2502、一单元2503、一单元2602、二单元1001、二单元1002、二单元1401、二单元1402、二单元1801、二单元1802、二单元2601、二单元2602、二单元2701、二单元2702、三单元1001、三单元1002、三单元1101、三单元1202、三单元1301、三单元1302、三单元1401、三单元1402、三单元1801、三单元1802、三单元2002、三单元2101、三单元2102、三单元2201、三单元2202、三单元2301、三单元2401、三单元2402、三单元2501、三单元2502、三单元2601、三单元2602、三单元2701、三单元2702、四单元1001、四单元1002、四单元1101、四单元1401、四单元1402、四单元1801、四单元1802、四单元2601、四单元2602、四单元2701、四单元2702、五单元1001、五单元1002、五单元1003、五单元1102、五单元1302、五单元1401、五单元1402、五单元1801、五单元1802、五单元2601、五单元2701、五单元2702、五单元2703、一单元1001、一单元1002、一单元1003、一单元1401、一单元1402、一单元1403、一单元1801、一单元1802、一单元1803、一单元2701、一单元2702、一单元2703号住宅。地址:鸡西市鸡冠区文化路南、西山路西信合大厦。信合大厦1层1号、1层2号、2层1号、2层2号、3层1号、3层2号、4层1号、4层2号、5层1号、5层2号、6层1号、6层2号、7层1号、7层2号、-1层1号门市,地址:鸡西市鸡冠区文化路南。信合大厦车位2-1-04号、车位-2-2-09号、车位-2-2-10号、车位2-2-12号、车位-2-2-13号、车位-2-2-14号、车位2-2-15号、车位-2-2-16号、车位-2-3-01号、车位2-3-02号、车位-2-3-03号、车位-2-3-04号、车位2-4-01号、车位-2-4-02号、车位-2-4-03号、车位2-5-01号、车位-2-5-02号、车位-2-5-03号、车位2-5-04号、车位-2-5-05号、车位-2-5-06号、车位2-6-01号、车位-2-6-05号,地址:鸡西市鸡冠区文化路南、西山路西。

    25.鸡西市嘉盈沥青搅拌有限公司名下门卫房、锅炉房、煤仓、料场及混凝土地、办公楼,地址:鸡西市鸡冠鸡密南路1号朝阳村村口沥青搅拌站,鸡西市北钢烧砖厂西侧、冷家路北。

    26.鸡西市国有资产经营管理有限公司名下天马特种耐火材料厂房产,产权证号S201104631、产权证号S201104632、产权证号S201104633、产权证号S201104634、产权证号S201104635、产权证号S201104636、产权证号S201104637、产权证号S201104638、产权证号S201104639、产权证号S201104640、产权证号S201104641、产权证号S201104642、产权证号S201104643、产权证号S201104644,地址:鸡西市鸡冠区201国道特耐厂院内。

    27.鸡西煤矿专用设备厂名下厂房,产权证号S201001968、产权证号S201001974、产权证号S201001971、产权证号G888、产权证号G875、产权证号G879、产权证号G887、产权证号G886、产权证号S201001965、产权证号G884、产权证号032960、产权证号S201001967、产权证号S201001973、产权证号S201001970、产权证号S201001969、产权证号031774、产权证号031770、产权证号031768、产权证号031782、产权证号031772、产权证号031777、产权证号031781、产权证号031765、产权证号031783、产权证号031776、产权证号031771、产权证号031769、产权证号031780、产权证号031778、产权证号031766、产权证号031775、产权证号017007,产权证号024920、产权证号S201001966、产权证号S201001972、产权证号026304、产权证号G871、产权证号031779、产权证号031767、产权证号031773,地址:鸡西市鸡冠区南山路59号。

    28.李传纲(鸡西市产权交易服务中心)名下熙雅寓C座7单元502室,地址:鸡西市鸡冠区文化路熙雅寓C座7单元502室。

    29.李继明名下勤奋二组团4单元302室,产权证号黑(2017)鸡西市不动产权第000634号,地址:鸡西市鸡冠区文化路勤奋二组团4-00302。

    30.蒋一赫名下勤奋二组团4单元402室,产权证号黑(2017)鸡西市不动产权第000635号,地址:鸡西市鸡冠区文化路勤奋二组团4-00402。

    31.刘立红名下勤奋二组团4单元1002室,地址:鸡西市鸡冠区文化路勤奋二组团4-01002,产权证号S201508613;其名下鸡西市先锋小区5号楼3-6-3住宅,产权证号S200900862,地址:鸡西市鸡冠区先锋小区。

    32.黑龙江龙远房地产开发有限公司名下勤奋二组团2单元2201住宅、勤奋二组团2单元2202住宅、勤奋二组团2单元2301住宅、勤奋二组团3单元2201住宅、勤奋二组团3单元2202住宅、勤奋二组团4单元201住宅、勤奋二组团4单元301住宅、勤奋二组团4单元401住宅、勤奋二组团4单元801住宅、勤奋二组团4单元901住宅、勤奋二组团4单元1902住宅、勤奋二组团4单元2001住宅、勤奋二组团4单元2002住宅、勤奋二组团4单元1202住宅,勤奋二组团负1层7号门市、勤奋二组团负1层8号门市、勤奋二组团负1层9号门市、勤奋二组团负1层10号门市、勤奋二组团1层1号门市、勤奋二组团1层8号门市、勤奋二组团1层9号门市,勤奋二组团地下车位A区101车位、勤奋二组团地下车位A区102车位、勤奋二组团地下车位A区103车位、勤奋二组团地下车位A区104车位、勤奋二组团地下车位A区201车位、勤奋二组团地下车位A区203车位、勤奋二组团地下车位A区205车位、勤奋二组团地下车位A区206车位、勤奋二组团地下车位B区201车位、勤奋二组团地下车位B区203车位、勤奋二组团地下车位B区204车位、勤奋二组团地下车位B区206车位、勤奋二组团地下车位C区101车位、勤奋二组团地下车位C区102车位、勤奋二组团地下车位C区103车位、勤奋二组团地下车位C区104车位、勤奋二组团地下车位C区201车位、勤奋二组团地下车位C区202车位、勤奋二组团地下车位C区203车位、勤奋二组团地下车位C区206车位、勤奋二组团地下车位D区102车位、勤奋二组团地下车位D区201车位、勤奋二组团地下车位D区202车位、勤奋二组团地下车位D区203车位、勤奋二组团地下车位D区204车位、勤奋二组团地下车位D区205车位、勤奋二组团地下车位D区206车位、勤奋二组团地下车位E区101车位、勤奋二组团地下车位E区201车位、勤奋二组团地下车位E区202车位、勤奋二组团地下车位E区203车位、勤奋二组团地下车位H区201车位、勤奋二组团地下车位H区202车位、勤奋二组团地下车位H区203车位、勤奋二组团地下车位H区204车位、勤奋二组团地下车位H区空车位、勤奋二组团地下车位H区空车位、勤奋二组团地下车位I区201车位、勤奋二组团地下车位I区203车位、勤奋二组团地下车位I区空车位、勤奋二组团地下车位I区空车位、勤奋二组团地下车位J区201车位、勤奋二组团地下车位J区203车位、勤奋二组团地下车位J区204车位、勤奋二组团地下车位J区206车位、勤奋二组团地下车位K区201车位、勤奋二组团地下车位K区202车位、勤奋二组团地下车位K区203车位、勤奋二组团地下车位K区204车位、勤奋二组团地下车位K区205车位、勤奋二组团地下车位K区206车位,1单元东侧三层独栋建筑,地址:鸡西市勤奋二组团;其名下鸡西市鸡冠区电台路南山小区一组团A栋2单元702室住宅,南山小区一组团车位2#、南山小区一组团车位3#、南山小区一组团车位4#、南山小区一组团车位7#、南山小区一组团车位8#、南山小区一组团车位10#、南山小区一组团车位11#、南山小区一组团车位12#、南山小区一组团车位13#、南山小区一组团车位14#、南山小区一组团车位17#、南山小区一组团车位18#、南山小区一组团车位19#、南山小区一组团车位20#、南山小区一组团车位21#、南山小区一组团车位22#、南山小区一组团车位24#、南山小区一组团车位25#、南山小区一组团车位26#、南山小区一组团车位27#、南山小区一组团车位28#、南山小区一组团车位29#、南山小区一组团车位30#、南山小区一组团车位31#、南山小区一组团车位34#、南山小区一组团车位35#、南山小区一组团车位36#、南山小区一组团车位37#、南山小区一组团车位38#、南山小区一组团车位39#、南山小区一组团车位40#、南山小区一组团车位41#、南山小区一组团车位42#、南山小区一组团车位43#、南山小区一组团车位44#、南山小区一组团车位45#、南山小区一组团车位46#、南山小区一组团车位47#、南山小区一组团车位48#、南山小区一组团车位49#、南山小区一组团车位50#、南山小区一组团车位51#、南山小区一组团车位52#、南山小区一组团车位53#、南山小区一组团车位54#、南山小区一组团车位55#、南山小区一组团车位56#、南山小区一组团车位57#、南山小区一组团车位59#、南山小区一组团车位60#、南山小区一组团车位61#、南山小区一组团车位62#、南山小区一组团车位63#、南山小区一组团车位64#、南山小区一组团车位69#、南山小区一组团车位70#、南山小区一组团车位71#、南山小区一组团车位72#、南山小区一组团车位73#、南山小区一组团车位74#、南山小区一组团车位75#、南山小区一组团车位76#、南山小区一组团车位77#、南山小区一组团车位78#、南山小区一组团车位79#、南山小区一组团车位80#、南山小区一组团车位81#、南山小区一组团车位82#、南山小区一组团车位83#、南山小区一组团车位84#、南山小区一组团车位85#、南山小区一组团车位86#、南山小区一组团车位87#、南山小区一组团车位88#、南山小区一组团车位89#、南山小区一组团车位90#、南山小区一组团车位91#、南山小区一组团车位92#、南山小区一组团车位93#、南山小区一组团车位94#、南山小区一组团车位95#、南山小区一组团车位96#、南山小区一组团车位97#、南山小区一组团车位98#、南山小区一组团车位101#、南山小区一组团车位103#、南山小区一组团车位104#、南山小区一组团车位105#、南山小区一组团车位106#、南山小区一组团车位107#、南山小区一组团车位108#、南山小区一组团车位109#、南山小区一组团车位110#、南山小区一组团车位111#、南山小区一组团车位112#、南山小区一组团车位113#、南山小区一组团车位114#、南山小区一组团车位116#、南山小区一组团车位117#、南山小区一组团车位118#、南山小区一组团车位119#、南山小区一组团车位123#、南山小区一组团车位124#、南山小区一组团车位125#、南山小区一组团车位126#、南山小区一组团车位127#、南山小区一组团车位128#、南山小区一组团车位129#、南山小区一组团车位130#、南山小区一组团车位131#、南山小区一组团车位132#、南山小区一组团车位133#、南山小区一组团车位134#、南山小区一组团车位135#、南山小区一组团车位136#、南山小区一组团车位137#、南山小区一组团车位138#、南山小区一组团车位139#、南山小区一组团车位140#、南山小区一组团车位141#、南山小区一组团车位142#、南山小区一组团车位143#、南山小区一组团车位144#、南山小区一组团车位145#、南山小区一组团车位146#、南山小区一组团车位147#、南山小区一组团车位148#、南山小区一组团车位149#、南山小区一组团车位150#、南山小区一组团车位151#、南山小区一组团车位152#、南山小区一组团车位153#、南山小区一组团车位154#、南山小区一组团车位155#、南山小区一组团车位156#、南山小区一组团车位157#、南山小区一组团车位158#、南山小区一组团车位159#、南山小区一组团车位160#、南山小区一组团车位161#、南山小区一组团车位162#、南山小区一组团车位163#、南山小区一组团车位15#,南山小区一组团A区3单元901住宅、南山小区一组团A区3单元902住宅、南山小区一组团B区5单元17-01住宅、南山小区一组团B区4单元1701住宅、南山小区一组团B区4单元17-02住宅、南山小区一组团B区1单元17-01住宅、南山小区一组团B区4单元16-02住宅、南山小区一组团B区2单元16-01住宅、南山小区一组团B区4单元10-01住宅、南山小区一组团B区4单元15-01住宅、南山小区一组团B区4单元14-01住宅、南山小区一组团B区4单元14-02住宅、南山小区一组团B区4单元13-01住宅、南山小区一组团B区4单元12-01住宅、南山小区一组团B区4单元11-01住宅、南山小区一组团B区4单元9-01住宅、南山小区一组团B区4单元8-01住宅、南山小区一组团B区4单元8-02住宅、南山小区一组团B区4单元7-02住宅、南山小区一组团B区4单元6-01住宅、南山小区一组团B区4单元6-02住宅、南山小区一组团B区4单元5-01住宅、南山小区一组团B区4单元5-02住宅、南山小区一组团B区4单元401住宅、南山小区一组团B区4单元4-02住宅、南山小区一组团B区4单元3-01住宅、南山小区一组团B区4单元3-02住宅、南山小区一组团C区2单元13-03住宅、南山小区一组团C区2单元5-03住宅、南山小区一组团C区2单元4-03住宅、南山小区一组团D区2单元6-02住宅、南山小区一组团D区1单元603住宅、南山小区一组团D区1单元303住宅,南山小区一组团A座104门市、南山小区一组团A座105门市、南山小区一组团A座106门市、南山小区一组团A座107门市、南山小区一组团B座101门市、南山小区一组团B座102门市、南山小区一组团B座103门市、南山小区一组团B座105门市、南山小区一组团B座106门市、南山小区一组团B座107门市、南山小区一组团B座108门市、南山小区一组团B座109门市,地址:鸡西市鸡冠区电台路南山小区一组团。

    33.鸡西阔远房地产开发有限公司名下鸡西市城子河区长青综合楼1-201室住宅、鸡西市城子河区长青综合楼1-202室住宅、鸡西市城子河区长青综合楼1-301室住宅、鸡西市城子河区长青综合楼1-401室住宅、鸡西市城子河区长青综合楼1402室住宅、鸡西市城子河区长青综合楼1-502室住宅、鸡西市城子河区长青综合楼1-601室住宅、鸡西市城子河区长青综合楼1-602室住宅、鸡西市城子河区长青综合楼2-302室住宅、鸡西市城子河区长青综合楼2-401室住宅、鸡西市城子河区长青综合楼2-402室住宅、鸡西市城子河区长青综合楼2-501室住宅、鸡西市城子河区长青综合楼2-502室住宅、鸡西市城子河区长青综合楼2-601室住宅、鸡西市城子河区长青综合楼2-602室住宅、鸡西市城子河区长青综合楼3-201室住宅、鸡西市城子河区长青综合楼3-202室住宅、鸡西市城子河区长青综合楼3-301室住宅、鸡西市城子河区长青综合楼3-302室住宅、鸡西市城子河区长青综合楼3-401室住宅、鸡西市城子河区长青综合楼3-402室住宅、鸡西市城子河区长青综合楼3501室住宅、鸡西市城子河区长青综合楼3-502室住宅、鸡西市城子河区长青综合楼3-601室住宅、鸡西市城子河区长青综合楼3-602室住宅、鸡西市城子河区长青综合楼4-201室住宅、鸡西市城子河区长青综合楼4-202室住宅、鸡西市城子河区长青综合楼4-301室住宅、鸡西市城子河区长青综合楼4-302室住宅、鸡西市城子河区长青综合楼4-401室住宅、鸡西市城子河区长青综合楼4-402室住宅、鸡西市城子河区长青综合楼4-501室住宅、鸡西市城子河区长青综合楼4-502室住宅、鸡西市城子河区长青综合楼4-601室住宅、鸡西市城子河区长青综合楼4-602室住宅、鸡西市城子河区长青综合楼5-201室住宅、鸡西市城子河区长青综合楼5-202室住宅、鸡西市城子河区长青综合楼5-301室住宅、鸡西市城子河区长青综合楼5302室住宅、鸡西市城子河区长青综合楼5-401室住宅、鸡西市城子河区长青综合楼5-402室住宅、鸡西市城子河区长青综合楼5-501室住宅、鸡西市城子河区长青综合楼5-502室住宅、鸡西市城子河区长青综合楼5-601室住宅、鸡西市城子河区长青综合楼5-602室住宅、鸡西市城子河区长青综合楼6-201室住宅、鸡西市城子河区长青综合楼6-202室住宅、鸡西市城子河区长青综合楼6-301室住宅、鸡西市城子河区长青综合楼6-302室住宅、鸡西市城子河区长青综合楼6-401室住宅、鸡西市城子河区长青综合楼6-402室住宅、鸡西市城子河区长青综合楼6-501室住宅、鸡西市城子河区长青综合楼6-502室住宅、鸡西市城子河区长青综合楼6-601室住宅、鸡西市城子河区长青综合楼6-602室住宅,鸡西市城子河区长青综合楼4号门市、鸡西市城子河区长青综合楼8号门市、鸡西市城子河区长青综合楼9号门市,地址:鸡西市城子河区长青综合楼。

    34.鸡西市中城建房地产开发有限公司名下鸡西市滴道区同乐六组团3号楼5号门市、鸡西市滴道区同乐六组团3号楼6号门市、鸡西市滴道区同乐六组团3号楼7号门市、鸡西市滴道区同乐六组团3号楼8号门市、鸡西市滴道区同乐六组团4号楼4号门市、鸡西市滴道区同乐六组团4号楼5号门市、鸡西市滴道区同乐六组团9号楼113号门市、鸡西市滴道区同乐六组团9号楼114号门市、鸡西市滴道区同乐六组团9号楼115号门市、鸡西市滴道区同乐六组团9号楼116号门市、鸡西市滴道区同乐六组团7号楼4号门市、鸡西市滴道区同乐六组团7号楼6号门市、鸡西市滴道区同乐六组团7号楼14号门市、鸡西市滴道区同乐六组团7号楼15号门市,地址:鸡西市滴道区同乐六组团(金街花园)。

    35.陈彦彬名下北京市通州区八里桥京铁潞园1号楼3单元2504室住宅,地址:北京市通州区八里桥京铁潞园1号楼3单元2504室。

    36.董凤珍名下鸡西市鸡冠区向阳办东风委4035-1-2-4住宅,地址:鸡西市鸡冠区向阳办东风委。

    37.王立明名下鸡西市鸡冠区南山办跃进鸡西大学1-1-1-103住宅,产权证号S200707622,地址:鸡西市鸡冠区南山办跃进委鸡西大学。

    38.刘泓弢名下鸡西市电工路商住楼-2-7-1住宅,产权证号S200702951,地址:鸡西市鸡冠区电工路商住楼。

    39.黑龙江正麒房地产开发有限公司名下鸡西市鸡兴东路北中国银行西鸡西市消防培训中心综合楼及附属设施,产权证号S201408904,地址:鸡西市鸡冠区消防培训中心综合楼西侧。其名下尼斯花园A1栋4处住宅、A2栋4处住宅、A3栋4处住宅、B1栋4处住宅、B2栋4处住宅、B3栋4处住宅、B4栋6处住宅、C1栋4处住宅、C2栋4处住宅、C3栋4处住宅、C4栋6处住宅,地址:鸡西市鸡冠区鸡兴东路、鸡西气象局东侧。

    40.黑龙江沈矿瓦斯发电有限公司名下独栋鸡西市鸡冠区广益城农贸市场北侧办公楼1栋(鸡西市交通运输局原办公楼),产权证号G5235,地址:鸡西市鸡兴东路北、中国人民银行西。

    41.陈红博名下河北省固安县大卫城三期孔雀城大卫城乐园4栋1单元1层0107门市,地址:河北省固安县大卫城三期孔雀城大卫城乐园4栋1单元1层0107门市。

    42.黑龙江省华诚建筑安装工程有限公司名下兴凯湖新开流景区5处木屋餐厅,地址:密山市兴凯湖新开流观景台东侧。

    三、车辆

    1.鸡西滨港特种车有限公司名下车辆,品牌型号夏工XG951,牌照号CXG00951C0L1A9165。

    2.鸡西滨港特种车有限公司名下车辆,品牌型号夏工XG955,牌照号XG955CXG00955P0L1C2231。

    3.鸡西滨港特种车有限公司名下车辆,品牌型号夏工XG955,牌照号CXG00955K0L1C2232。

    4.鸡西滨港特种车有限公司名下车辆,品牌型号夏工XG955,牌照号CXG00955T0L1B3379。

    5.鸡西滨港特种车有限公司名下车辆,品牌型号钩机,牌照号SH350SMT350A5P00BH3036。

    6.鸡西滨港特种车有限公司名下车辆,品牌型号山推SD16,牌照号AA126720。

    7.鸡西滨港特种车有限公司名下车辆,品牌型号山推SD16,牌照号AA126998。

    8.李克峰名下车辆,品牌型号白色雷克萨斯5700,牌照号黑A8570F。

    9.董国政名下车辆,品牌型号白色丰田兰德酷路泽5700,牌照号黑G7868E。

    10.李彬名下车辆,品牌型号福特嘉年华,牌照号黑G8K267。

    11.杨贵春名下车辆,品牌型号福特嘉年华,牌照号黑G6K700。

    12.郑鑫名下车辆,品牌型号大众辉腾,牌照号黑A923DZ。

    13.黑龙江省鹏通煤层气开发有限公司名下车辆,品牌型号东风皮卡,牌照号黑G12180。

    14.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号本田雅阁,牌照号黑G06729。

    15.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号依维柯,牌照号黑G08078。

    16.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号依维柯,牌照号黑G06829。

    17.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号依维柯,牌照号黑G06866。

    18.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号别克轿车,牌照号黑G81090。

    19.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号长城汽车,牌照号黑G08106。

    20.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号沈阳金杯,牌照号黑GB5017。

    21.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号长城汽车,牌照号黑G08137。

    22.沈阳焦煤鸡西盛隆矿业有限责任公司名下车辆,品牌型号沈阳金杯,牌照号黑GG2396。

    23.黑龙江沈矿瓦斯发电有限公司名下车辆,品牌型号大众辉腾,牌照号黑GG5507。

    24.鸡西泛华城市建设投资有限公司名下车辆,品牌型号庆铃皮卡,牌照号黑G13736。

    25.鸡西泛华城市建设投资有限公司名下车辆,品牌型号本田汉兰达,牌照号黑G07969。

    26.鸡西泛华城市建设投资有限公司名下车辆,品牌型号大众桑塔纳,牌照号黑G07976。

    27.鸡西泛华城市建设投资有限公司名下车辆,品牌型号大众桑塔纳,牌照号黑G09692。

    28.鸡西泛华城市建设投资有限公司名下车辆,品牌型号大众桑塔纳,牌照号黑G09597。

    29.鸡西泛华城市建设投资有限公司名下车辆,品牌型号大众桑塔纳,牌照号黑G04711。

    30.鸡西泛华城市建设投资有限公司名下车辆,品牌型号大众桑塔纳,牌照号黑G04448。

    31.鸡西泛华城市建设投资有限公司名下车辆,品牌型号别克GL8,牌照号黑G08757。

    32.鸡西泛华城市建设投资有限公司名下车辆,品牌型号别克昂科威,牌照号黑G04447。

    33.鸡西泛华城市建设投资有限公司名下车辆,品牌型号别克昂科威,牌照号黑G04066。

    34.鸡西泛华城市建设投资有限公司名下车辆,品牌型号现代格瑞,牌照号黑G08878。

    35.鸡西泛华城市建设投资有限公司名下车辆,品牌型号五菱宏光,牌照号黑GU4966。

    36.鸡西泛华城市建设投资有限公司名下车辆,品牌型号五菱宏光,牌照号黑G08796。

    37.鸡西泛华城市建设投资有限公司名下车辆,品牌型号五菱宏光,牌照号黑G09708。

    38.鸡西泛华城市建设投资有限公司名下车辆,品牌型号铲车,牌照号柳工50。

    四、土地、滩涂

    1.鸡西市阔远房地产开发有限公司名下鸡西市鸡冠区财政局培训中心北、学府街东,学府街09-01号地块,面积27858平方米。

    2.鸡西泛华城市建设投资有限公司名下鸡西市鸡冠区文成街东、规划路南,A-60-01-b号净地,面积10015平方米。

    3.鸡西市德帮物贸有限公司名下鸡西市污水处理厂西侧2012-12号地块A-01号土地,面积14154平方米。

    4.黑龙江北唐煤矿量费监控系统工程开发有限公司名下鸡恒路东、鸡西永金液化气有限公司南侧,鸡冠国用(2013)第200052号土地,面积38100平方米。

    5.黑龙江省北方建成汽车贸易有限公司名下鸡西市鸡冠西端南侧,宗地号3-23-47号土地,面积154198平方米。

    6.黑龙江优丰农业开发有限公司名下租赁鸡西市城子河区永丰乡永平村土地,面积21571平方米。

    7.郭立星名下租赁鸡西市城子河区永丰乡永平村水域滩涂,面积44800平方米。

    8.鸡西金色农业科技有限公司名下租赁鸡西市良种场地块800亩(面积533336平方米)。

    9.黑龙江正麒房地产开发有限公司名下鸡西市鸡兴东路北中国银行西106/18/166地块(A-50),面积34985平方米。

    10.鸡西市中城建房地产开发有限公司名下鸡西市鸡冠区长征街东、涌新路北,鸡冠新区二期A-64-2号地块净地,面积61497平方米。

    11.鸡西市中城建房地产开发有限公司名下鸡西市鸡冠区长征街东、涌新路北,鸡冠新区二期A-66-2号地块净地,面积66724平方米。

    12.鸡西市阔远房地产开发有限公司名下鸡西市鸡冠区兴国东路北、冉昭街东,鸡冠区东A-02、东A-03、东A-04号A02-B-4-3地块,面积25681平方米。

    13.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北,鸡冠区2015-08号A-01(腾飞北路)地块,面积130162平方米。

    14.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北,鸡冠区2015-08号F-03(腾飞北路)地块,面积12911平方米。

    15.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北,柳浪街(鸡冠区2015-08号地块A-04地块)道路工程,面积25493平方米。

    16.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北,鸡冠区2015-08号A02(腾飞北路)地块,面积43992平方米。

    17.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北,鸡冠区2015-08号E-05-a(腾飞北路)地块,面积16531平方米。

    18.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北,鸡冠区2015-08号E05-b(腾飞北路)地块,面积2375平方米。

    19.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北、柳浪街东,鸡冠区2015-08号D-01地块,面积91296平方米。

    20.黑龙江泛华物流产业园投资管理有限公司名下鸡西市鸡冠区腾飞路北、柳浪街西,挂2016-9号,面积15423平方米。

    21.鸡西泛华城市建设投资有限公司名下鸡西市沈阳煤业(集团)鸡西盛隆矿业有限责任公司西侧,2011-30号A-07-5号地块,面积41118平方米。

    22.鸡西泛华城市建设投资有限公司名下鸡西市文成街西前进路北,鸡冠新区二期A-68-2号地块,面积3655平方米。

    23.鸡西泛华城市建设投资有限公司名下鸡西市鸡恒路西汽车产业园内,鸡西(鸡冠)工业新城B-03-4号地块,面积4579平方米。

    24.张亚凤名下租赁鸡西市和平林场沈家沟土地(120.69亩),面积80460平方米。

    25.鸡西滨港特种汽车有限公司名下鸡西市鸡冠区2011-30A-11号地块、鸡西市鸡冠区2011-30A-01号地块、鸡西市鸡冠区2011-30A-03号地块,三块土地共计面积137006平方米。

    五、林地

    1.哈尔滨市滨港投资有限公司名下鸡西市梨树区碱场矿林地18557亩(12371395.19平方米)。

    2.黑龙江优丰农业开发有限公司名下租赁鸡西市团山子水库西岸林地980亩(653336.6平方米)。

    3.黑龙江沈煤林木开发有限公司名下租赁鸡西市鸡冠区原立新矿樟子松、落叶松林地699亩(466002.33平方米)。

    4.黑龙江沈煤林木开发有限公司名下租赁鸡西市鸡冠区原立新矿樟子松林地144亩(96000.48平方米)。

    5.黑龙江沈煤林木开发有限公司名下租赁鸡西市鸡冠区原立新矿落叶松林地105亩(70000.35平方米)。

    6.鸡西盛隆矿业有限责任公司鸡东林场名下租赁小和平林场II区永和施业区、软阔叶混交林24825亩(16550082.75平方米)。

    7.黑龙江省青山煤矿林场名下租赁黑龙江省林口县(市)亚河公社(镇)青山煤矿林场26220亩(17480087.4平方米)。

    8.鸡西市园林绿化中心名下鸡西市鸡冠区太阳升村苗圃用地及表面栽种树木246860平方米。

    六、设备黑龙江绅港能源开发公司名下的2000KW太阳能光伏发电设备、蒸汽型吸收式热泵(水源热泵)、冷渣机。

    七、股权

    1.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江沈矿瓦斯发电有限公司100%股权。

    2.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江沈矿物流有限公司100%股权。

    3.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西泛华城市建设投资有限公司100%股权。

    4.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江泛华物流产业园投资管理有限公司100%股权。

    5.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江绅港能源开发有限公司100%股权。

    6.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西市湖泊湿地保护研究中心100%股权。

    7.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西兴凯湖大白鱼养殖繁育推广中心100%股权。

    8.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西隆衡房地产开发有限公司100%股权。

    9.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西阔远房地产开发有限公司100%股权。

    10.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江龙远房地产开发有限责任公司100%股权。

    11.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江北唐煤矿量费监控系统工程开发有限公司100%股权。

    12.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江优丰农业开发有限公司100%股权。

    13.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西金色农业科技有限公司100%股权。

    14.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江安泰矿产开发有限公司100%股权。

    15.犯罪嫌疑人李传良名下个人实际持有、控制的黑龙江省鹏通煤层气开发有限公司100%股权。

    16.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西元通城市燃气投资有限公司100%股权。

    17.犯罪嫌疑人李传良名下个人实际持有、控制的鸡西滨港特种汽车有限公司100%股权。

    18.犯罪嫌疑人李传良名下个人实际持有、控制的沈煤鸡西隆丰矿山机械制造有限公司鸡东林场100%股权。

  • 钱穆:中国文化之地理背景

    中国是一个文化发展很早的同家,他与埃及、巴比仑、印度,在世界史上上古部分里,同应占到很重要的篇幅。但中国因其环境关系,他的文化,自始即走上独自发展的路径。在有史以前,更渺茫的时代里,中国是否与西方文化有所接触,及其相互间影响何如,现在尚无从深论。但就大体言,中国文化开始,较之埃及、巴比仑、印度诸国,特别见为是一种孤立的,则已成为一种明显的事实。

    中国文化不仅比较孤立,而且亦比较特殊,这里面有些可从地理背景上来说明。埃及、巴比仑、印度的文化,比较上皆在一个小地面上产生。独有中国文化,产生在特别大的地面上。这是双方最相异的一点。人类文化的最先开始,他们的居地,均赖有河水灌溉,好使农业易于产生。而此灌溉区域,又须不很广大,四围有天然的屏障,好让这区域里的居民,一则易于集中而到达相当的密度,一则易于安居乐业而不受外围敌人之侵扰。在此环境下,人类文化始易萌芽。埃及尼罗河流域,巴比仑美索不达米亚平原,印度印度河流域,莫不如此。印度文化进展到恒河流域,较为扩大,但仍不能与中国相比。中国的地理背景,显然与上述诸国不同。

    普通都说,中国文化发生在黄河流域。其实黄河本身并不适于灌溉与交通。中国文化发生,精密言之,并不赖藉于黄河本身,他所依凭的是黄河的各条支流。每一支流之两岸和其流进黄河时雨水相交的那一个角里,却是古代中国文化之摇篮。那一种两水相交而形成的三角地带,这是一个水桠杈,中国古书里称之曰“汭”,汭是在两水环抱之内的意思。中国古书里常称渭汭、泾汭、洛汭,即指此等三角地带而言。我们若把中国古史上各个朝代的发源地和根据地分配在上述的地理形势上,则大略可作如下之推测。唐、虞文化是发生在现在山西省之西南部,黄河大曲的东岸及北岸,汾水两岸及其流入黄河的桠杈地带。夏文化则发生在现在河南省之西部,黄河大曲之南岸,伊水、洛水两岸,及其流入黄河的桠杈地带。

    周文化则发生在现在陕西省之东部,黄河大曲之西岸,渭水两岸,及其流入黄河的桠杈地带。这一个黄河的大隈曲,两岸流着泾、渭、伊、洛、汾、涑几条支流,每一条支流的两岸,及其流进黄河的三角桠杈地带里面,都合宜于古代农业之发展。而这一些支流之上游,又莫不有高山叠岭为其天然的屏蔽,故每一支流实自成为一小区域,宛如埃及、巴比仑般,合宜于人类文化之生长。而黄河的几个渡口,在今山西省河津、临晋、平陆诸县的,则为他们当时相互交通的孔道。

    据中国古史传说,虞、夏文化极相密接,大概夏部族便从洛水流域向北渡过黄河,而与汾水流域的虞部族相接触。其主要的渡口为平陆的茅津渡,稍东的有孟津。周部族之原始居地,据旧说乃自今陕西渭河上流逐步东移。但据本书作者之意见,颇似有从山西汾河下流西渡黄河转到陕西渭河下流之可能。无论如何,周部族在其定居渭河下游之后,常与黄河东岸汾水流域居民交通接触,则为断无可疑之事。因此上述虞夏周三氏族的文化,很早便能融成一体,很难再分辨的了。这可以说是中国古代较为西部的一个文化系统。

    中国古代的黄河,流到今河南省东部,一到郑县境,即折向北,经今河南浚县大伾山下,直向北流,靠近太行山麓,到今天津附近之渤海湾入海。在今安阳县(旧彰德府)附近,便有漳水、洹水流入黄河,这里是古代殷、商氏族的政府所在地。他们本由黄河南岸迁来,在此建都,达二百八十年之久。最近五十年内,在那里发掘到许多牛胛骨与龟版,上刻贞卜文字,正为此时代殷商王室之遗物,因此我们对于此一时期中在此地域的商文化,增多了不少新智识。

    原来的商族,则在今河南省归德附近,那里并非黄河流经之地,但在古代则此一带地面保存很多的湖泽,最有名的如孟诸泽、蒙泽之类。也有许多水流,如睢水、濊水(即涣水)之类。自此(归德)稍向北,到河南中部,则有荥泽、圃田泽等。自此稍东北,山东西部,则有菏泽、雷夏、大野等泽。大抵商部族的文化,即在此等沼泽地带产生。那一带正是古代淮水、济水包裹下的大平原,商代文化由此渐渐渡河向北伸展而至今河南之安阳,此即所谓殷墟的,这可以说是中国古代较为东部的一个文化系统。这一个文化系统,再溯上去,或可发生在中国之极东,燕、齐滨海一带,现在也无从详说了。

    但在有史以前很早时期,似乎上述的中国东西两大系统的文化,早已有不断的接触与往来,因此也就很难分辨说他们是两个系统。更难说这两大系统的文化,孰先孰后。

    现在再从古代商族的文化地域说起。因为有新出土的甲骨文为证,比较更可信据。那时商王室的政治势力,似乎向西直达渭水流域,早与周部族相接触,而向东则达今山东、河北两省沿海,中间包有济水流域的低洼地带。向东北则直至辽河流域,向南则到淮水流域,向西南则到汉水流域之中游,说不定古代商族的文化势力尚可跨越淮、汉以南,而抵达长江北岸。这些地带,严格言之,早已在黄河流域外,而远在商代早已在中国文化区域里。及到周代兴起,则长江流域、汉水、淮水、济水、辽河诸流域,都成为中国文化区域之一部分,其事更属显明。

    我们只根据上文约略所谈,便可见古代中国文化环境,实与埃及、巴比仑、印度诸邦绝然不同。埃及、巴比仑、印度诸邦,有的只籍一个河流和一个水系,如埃及的尼罗河。有的是两条小水合成一流,如巴比仑之底格里斯与阿付腊底河,但其实仍只好算一个水系,而且又都是很小的。只有印度算有印度河与恒河两流域,但两河均不算甚大,其水系亦甚简单,没有许多支流。只有中国,同时有许多河流与许多水系,而且都是极大和极复杂的。那些水系,可照大小分成许多等级。如黄河、长江为第一级,汉水、淮水、济水、辽河等可为第二级,渭水、泾水、洛水、汾水、漳水等则为第三级,此下还有第四级第五级等诸水系,如汾水相近冇涑水,漳水相近有淇水、濮水,入洛水者有伊水,入渭水者有沣水、滈水等。此等小水,在中国古代史上皆极著名。中国古代的农业文化,似乎先在此诸小水系上开始发展,渐渐扩大蔓延,弥漫及于整个大水系。我们只要把埃及、巴比仑、印度及中国的地图仔细对看,便知其间的不同。

    埃及和巴比仑的地形,是单一性的一个水系与单一性的一个平原。印度地形较复杂,但其最早发展,亦只在印度北部的印度河流域与恒河流域,他的地形仍是比较单纯。只有中国文化,开始便在一个复杂而广大的地面上展开。有复杂的大水系,到处有堪作农耕凭藉的灌溉区域,诸区域相互间都可隔离独立,使在这一个区域里面的居民,一面密集到理想适合的浓度,再一面又得四围的天然屏障而满足其安全要求。如此则极适合于古代社会文化之酝酿与成长。但一到其小区域内的文化发展到相当限度,又可藉着小水系进到大水系,而相互间有亲密频繁的接触。因此中国文化开始便易走进一个大局面,与埃及、巴比仑、印度,始终限制在小面积里的情形大大不同。若把家庭作譬喻,埃及、巴比仑、印度是一个小家庭,他们只备一个摇篮,只能长育一个孩子。中国是一个大家庭,他能具备好几个摇篮,同时抚养好几个孩子。这些孩子成长起来,其性情习惯自与小家庭中的独养子不同。这是中国文化与埃及、巴比仑、印度相异原于地理背景之最大的一点。

    其次再有一点,则关于气候方面。埃及、巴比仑、印度全都近在热带,全在北纬三十度左右,物产比较丰足,衣食易给,他们的文化,大抵从多量的闲暇时间里产生。只有中国已在北温带的较北地带,在北纬三十五度左右。黄河流域的气候,是不能和埃及、印度相比的,论其雨量,也远不如埃及、印度诸地之丰富。古代中国北部应该和现在的情形相差不远,我们只看周初时代《豳风·七月》诗里所描写那时的节令物产以及一般农民生活,便知那时情形实与现在山西、陕西一带黄河、渭水附近甚相类似。因此中国人开始便在一种勤奋耐劳的情况下创造他的文化,较之埃及、巴比仑、印度之闲暇与富足的社会,又是绝不相似了。

    根据上述,古代中国因其天然环境之特殊,影响其文化之形成,因有许多独特之点,自亦不难想像而知。兹再约举其大者言之。

    第一:古代文化发展,皆在小环境里开始,其缺点在于不易形成伟大的国家组织。独有中国文化,自始即在一大环境下展开,因此易于养成并促进其对于政治、社会凡属人事方面的种种团结与处理之方法与才能。遂使中国人能迅速完成为一内部统一的大国家,为世界同时任何民族所不及。

    第二:在小环境里产生的文化社会,每易遭受外围文化较低的异族之侵凌,而打断或阻碍其发展。独有中国文化,因在大环境下展开,又能迅速完成同家内部之团结与统一,因此对于外来异族之抵抗力量特别强大,得以不受摧残,而保持其文化进展之前程,逐渐发展。直至现在成为世界上文化绵历最悠久的国家,又为世界任何民族所不及。

    第三:古代文明多在小地面的肥沃区域里产生,因此易于到达其顶点,很早便失却另一新鲜向前的刺激,使其活力无地使用,易于趋向过度的奢侈生活,而招致社会内部之安逸与退化。独有中国文化,因在较苦瘠而较广大的地面产生,因此不断有新刺激与新发展的前途。而在其文化生长过程下,社会内部亦始终能保持一种勤奋与朴素的美德,使其文化常有新精力,不易腐化。直到现在,只有中国民族在世界史上仍见其有虽若陷于老朽,而仍有其内在尚新之气概,此又为并世诸民族所不逮。

    因于上述三点,所以中国文化经过二三千年的发展,完成了他的上古史之后,一到秦、汉统一时代,正为中国文化开始走上新环境、新气象之另一进程,渐渐由黄河流域扩展至长江流域的时代。而与他同时的几个文明古国,如埃及、巴比仑、印度等,皆已在世界文化史上开始退出他们重要的地位,而让给其他的新兴民族来扮演另一幕的主角了。

    若照全世界人类文化已往成绩而论,便只有西方欧洲文化和东方中国文化两大系统,算得源远流长,直到现在,成为人类文化之两大主干。我们不妨乘便再将此两大文化约略作一简单的比较。

    欧洲文化的远祖是希腊,希腊文化灿烂时期,正和中国西周乃至春秋、战国时代相平行。但双方有一极大的不同。希腊诸邦,虽则有他们共同的文化,却从没有他们共同的政治组织。希腊永远是一种互相独立的市府政治,每一市府,各成一单位。中国西周乃至春秋时代,虽亦同样有许多国家,每一国家虽则几乎亦同样以一个城市,即中国古书中称为“国”的为中心,但这些国家,论其创始,大体都由一个中央政府,即西周王室所分封,或经西周王室之正式承认。因此西周时代的中国,理论上已是一个统一国家,不过只是一种“封建式的统一”,而非后代郡县式的统一而已。

    中国此时之所谓“封建”,亦和欧洲中世纪的封建不同。惟其如此,所以一到春秋时代,虽则西周王室东迁,他为中原诸侯共主的尊严早已失去,但还可以有齐桓公、晋文公一辈在列国诸侯中称霸为盟主的起来,代替王室,继续联合和好与统一的工作。这是西方希腊政治所不能完成的。因此西方希腊诸市府,一到中国秦、汉时代,便不免完全为罗马所吞灭,从此西方文化又要走入一新境界。但中国秦、汉时代,却并非如西方般,由外面来了一个新势力,把旧有的中国吞灭,中国秦、汉时代,只是在旧中国的内部,自身有一种改进,由封建式的统一,转变而成“郡县式的统一”,使其统一之性质与功能,益增完密与强固而已。

    我们继此可以说到西方罗马与汉代之不同。罗马政府的性质,论其原始也和希腊市府一般。后来逐步向外伸张,始造成一个伟大的帝国。这一个帝国之组织,有他的中心即罗马城,与其四围之征服地。这是在帝国内部显然对立的两个部分。至于中国汉代,其开始并没有一个像希腊市府般的基本中心,汉代的中国,大体上依然承袭春秋、战国时代来,只在其内部组织上,起了一种新变化。这一种变化,即如上节所说,由封建式的统一转变成为郡县式的统一。

    因此汉代中国,我们只可说他有了一种新组织,却不能说他遇到一个新的征服者。罗马帝国由征服而完成,汉代中国则不然。那时的中国,早已有他二三千年以上的历史,在商、周时代,国家体制早已逐渐完成了。一到汉代,在他内部,另有一番新的政治组织之酝酿与转化。因此在罗马帝国里面,显然有“征服者”与“被征服者”两部分之对立,而在汉代中国,则浑然整然,只是一体相承,并没有征服者与被征服者之区分。西方习惯称罗马为帝国(Empire),汉代中国决不然,只可称为一国家(Nation)。照西方历史讲,由希腊到罗马,不仅当时的政治形态变了,由市府到帝国,而且整个的国家和人民的大传统也全都变了,由希腊人及希腊诸市府变到罗马人与罗马帝国。而那时的中国,则人民和国家的大传统,一些也没有变,依然是中国人和中国,只变了他内部的政治形态,由封建到郡县。

    我们再由此说到罗马覆亡后的西方中古时期,和中国汉代覆亡后之魏晋南北朝时期,两者中间仍有显著的不同。罗马覆亡,依然和希腊覆亡一样,是遇到了一个新的征服者,北方蛮族。此后的欧洲史,不仅政治形态上发生变动,由帝国到封建,而且在整个的人民和国家的大传统上也一样的发生变动,由南方罗马人转变到北方日耳曼人,又由罗马帝国转变到中世纪封建诸王国。中国汉代的覆灭,并不是在中国以外,另来了一个新的征服者,而仍然是在中国内部起了一种政治形态之动荡。东汉以后,魏、蜀、吴三国分裂,下及西晋统一,依然可以说是一种政治变动,而非整个民族和国家传统之转移。此后五胡乱华,虽有不少当时称为胡人的乘机起乱,但此等胡人,早已归化中国,多数居在中国内地,已经同样受到中国的教育。他们的动乱,严格言之,仍可看作当时中国内部的一种政治问题和社会问题,而非在中国人民与中国国家之外,另来一个新的征服者。

    若依当时人口比数论,不仅南方中国,全以中国汉人为主体,即在北方中国,除却少数胡族外,百分之八九十以上的主要户口依然是中国的汉人。当时南方政治系统,固然沿着汉代以来的旧传统与旧规模,即在北朝,除却王室由胡族为之,其一部分主要的军队由胡人充任以外,全个政府,还是胡、汉合作。中国许多故家大族,没有南迁而留在北方的,依然形成当时政治上的中坚势力,而社会下层农、工、商、贾各色人等,则全以汉人为主干。因此当时北朝的政治传统,社会生活,文化信仰,可以说一样承袭着汉代而仍然为中国式的旧传统。虽不免有少许变动,但这种变动,乃历史上任何一个时代所不免。若单论到民族和国家的大传统,文化上的大趋向,则根本并无摇移。

    因此西方的中古时代,北方蛮族完全以一种新的民族出现而为此下西方历史之主干,旧的罗马人则在数量上已成被压倒的劣势而逐渐消失。反之,在中国史上,魏晋南北朝时代,依然以旧的中国人为当时政治、社会、文化各部门各方面之主干与中坚。至于新的胡人,只以比较的少数加入活动,如以许多小支流浸灌入一条大河中,当时虽有一些激动,不久即全部混化而失其存在了。这一层是中国魏晋南北朝时代和欧洲中古时期的绝大不同处。

    因此西方的中古时期,可以说是一个转变,亦可说是一个脱节,那时的事物,主要的全是新兴的。北方日耳曼民族成为将来历史和文化之主干,这是新兴的。当时所行的封建制度,亦是新兴的。西方的封建,乃罗马政治崩溃后,自然形成的一种社会现象,根本与中国史上西周时代所谓的封建不同。中国的封建制度,乃古代中国统一政治进展中之一步骤、一动象;西方封建,则为罗马政治解消以后一种暂时脱节的现象。那时在西方主持联合与统一工作的,主要者并非封建制度,而为基督教的教会组织。这种教会组织又是新兴的。

    希腊、罗马和基督教会之三者,成为近代西方文化之三主源。在中国魏晋南北朝时代,虽同样有印度佛教之流入,并亦一时称盛,但在历史影响上,复与西方中古时期的基督教绝然不同。基督教是在罗马文化烂熟腐败以后,完全以新的姿态出现而完成其感化北方蛮族的功能的。但魏晋南北朝时代的中国,则以往传统文化并未全部衰歇。孔子的教训,依然为社会人生之最大信仰与最大归趋,只在那时又新增了一个由印度传来的佛教,而一到唐代以后,佛教也到底与儒教思想相合流相混化。因此我们可以说,在欧洲中古时期,论其民族,是旧的罗马民族衰歇而新的日耳曼民族兴起。在中国则只在旧的中国汉民族里面增加了一些新民族新分子,胡人。

    论政治,在欧洲中古时期,是旧的罗马统治崩溃,而新的封建社会兴起。在中国则依然是秦、汉的政治制度之沿续,根本上并无多少转换。论文化与信仰,在欧洲中古时期,则由旧的罗马文化转变到新的基督教文化。在中国,则依然是一个孔子传统,只另外又加进一些佛教的成分。却不能说那时的中国,由旧的孔教而变成为新的佛教了。

    由此言之,西方的中古时期,全是一个新的转变,而魏晋南北朝时代的中国,则大体还是一个旧的沿袭。那些王朝的起灭和政权之转移,只是上面说的一种政治形态之动荡。若论民族和国家的大传统,中国依然还是一个承续,根本没有摇移。

    根据上述,来看近代西方新兴的民族国家,他们在两洋史上,又都是以全新的姿态而出现的。论其民族和国家的大传统,他们复和古代的希腊、罗马不同。但中国史则以一贯的民族传统与国家传统而绵延着,可说从商、周以来,四千年没有变动。所有中国史上的变动,伤害不到民族和同家的大传统。因此中国历史只有层层团结和步步扩展的一种绵延,很少彻底推翻与重新建立的像近代两方人所谓的革命。这是中西两方历史形态一个大不同处,因此而影响到双方对于历史观念之分歧。

    西方人看历史,根本是一个“变动”,常由这一阶段变动到那一阶段。若再从这个变动观念上加进时间观念,则谓历史是“进步”的,人类历史常由这一时代的这一阶段,进展到另一时代的另一阶段。但中国人看历史,则永远在一个“根本”上,与其说是变动,不如说是“转化”。与其说是进步,不如说是“绵延”。中国人的看法,人类历史的运行,不是一种变动,而是一种转化。不是一种进步,而是一种绵延,并不是从这一阶段变动、进步而达另一阶段,只是依然在这一阶段上逐渐转化、绵延。

    变动、进步是“异体的”,转化、绵延则是“同体的”。变动、进步则由这个变成了那个。转化、绵延则永远还是这一个。因此两方人看历史,常偏向于“空间”的与“权力”的“向外伸展”;中网人看历史,常偏向于“时间”的与“生长”的“自我绵延”。西方人的看法,常是“我”与“非我”两个对立。中国人的看法,只有自我一体浑然存在。双方历史形态之不同,以及双方对于历史观念之不同,其后面便透露出双方文化意识上之不同。这一种不同,若推寻根柢,我们依然可以说中西双方全都受着一些地理背景的影响。中国在很早时期,便已凝成一个统一的大国家。在西方则直到近代,由中国人眼光看来,依然如在我们的春秋、战国时代,列国纷争,还没有走上统一的路。

    中国历史正因为数千年来常在一个大一统的和平局面之下,因此他的对外问题常没有像他对内问题那般的重要。中国人的态度,常常是反身向着内看的。所谓向内看,是指看一切东西都在他向己的里面。这样便成为自我一体浑然存在。西方历史则永远在列国纷争、此起彼仆的斗争状态之下,因此他们的对内问题常没有像他们对外问题那般的重要,西方人的态度,则常常是向外看的。所谓向外看,是指看一切东西都在他自己的外面,所以成为我与非我屹然对立。惟其常向外看,认为有两体对立,所以特別注意在空间的“扩张”,以及“权力”和“征服”上。惟其常向内看,认为只有一体浑然,所以特别注意到时间的“绵延”以及“生长”和“根本”上。

    其次说到双方经济形态,中国文化是自始到今建筑在农业上面的,西方则自希腊、罗马以来,大体上可以说是建筑在商业上面。一个是彻头彻尾的农业文化,一个是彻头彻尾的商业文化,这是双方很显著的不同点。

    依西方人看法,人类文化的进展,必然由农业文化进一步变成商业文化。但中国人看法,则并不如此。中国人认为人类生活,永远仰赖农业为基础,因此人类文化也永远应该不脱离农业文化的境界,只有在农业文化的根本上再加绵延展扩而附上一个工业,更加绵延展扩而又附上一个商业,但文化还是一线相承,他的根本却依然是一个农业。

    照西方人看,文化是变动的,进步的,由农到商截然不同。照中国人看,则文化还是根本的与生长的,一切以农为主。这里自然也有地理背景的影响。因为西方文化开始如埃及、巴比仑等,他们本只有一个狹小的农业区,他们的农业文化不久便要达到饱和点,使他们不得不转换方向改进到商业经济的路上去。希腊、罗马乃至近代西方国家莫不如此。在中国则有无限的农耕区域可资发展,因此全世界人类的农业文化,只有在中国得到一个继长增荣不断发展的机会。

    中国历史,在很早时期里,便已有很繁荣的商业了。但因中国开始便成为一个统一的大国,因此他的商业常是对内之重要性超过了对外。若西方各国,则常是对外通商的重要性超过了对内。因此双方对商业的看法,也便有异。西方常常运用国家力量来保护和推进其国外商业。中国则常常以政府法令来裁制国内商业势力之过分旺盛,使其不能远驾于农、工之上。因此在西方国家很早便带有一种近代所谓“资本帝国主义”的姿态,在中国则自始到今常采用一种近代所谓“民主社会主义”的政策。

    再换辞言之,农业文化是自给自足的,商业文化是内外依存的。他是要吸收外面来营养自己的。因此农业文化常觉得内外一体,只求安足。商业文化则常觉彼我对立,惟求富强。结果富而不足,强而不安,因此常要变动,常望进步。农业文化是不求富强但求安足的,因此能自本自根一线绵延。

    我们继此讲到科学和工业,科学知识和机械工业在现世界的中国是远为落后的。但中国已往历史上,也不断有科学思想与机械创作之发现,只因中国人常采用的是民主社会主义的经济政策,“不患寡而患不均”。对于机械生产,不仅不加奖励,抑且时时加以禁止与阻抑,因此中国在机械工业一方面,得不到一个活泼的发展。在中国的机械和工业,是专走上精美的艺术和灵巧的玩具方面去了。科学思想在中国之不发达,当然不止此一因,但科学没有实际应用的机会,自为中国科学不发达的最要原因之一。

    其次我们再说到中西双方对于人生观念和人生理想的异同。“自由”(Liberty & Freedom)一词是西方人向来最重视的。西方全部历史,他们说,即是一部人类自由的发展史。西方全部文化,他们说,即是一部人类发展自由的文化。“人生”、“历史”和“文化”,本来只是一事,在西方只要说到“自由”,便把这三方面都提纲挈领的总会在一处了。在中国则似乎始终并不注重“自由”这个字。西方用来和自由针对的,还有“组织”和“联合”(Organization & Unity)。希腊代表着自由,罗马和基督教会则代表着组织和联合。这是西方历史和西方文化的两大流,亦是西方人生之两大干。我们只把握这两个概念来看两方史,便可一一看出隐藏在两方历史后面的一切意义和价值。

    但中国人向来既不注重自由,因此也便不注重组织和联合,因为自由和联合的后面,还有一个概念存在的,这便是“两体对立”。因有两体对立,所以要求自由,同时又要求联合。但两体对立,是西方人注重向外看,注重在空间方面看的结果。是由西方商业文化内不足的经济状态下产生的现象。中国人一向在农业文化中生长,自我安定,不须向外寻求,因此中国人一向注重向内看,注重在时间方面看,便不见有严重的两体对立,因此中国人也不很重视自由,又不重视联合了。中国人因为常偏于向内看的缘故,看人生和社会只是浑然整然的一体。

    这个浑然整然的一体之根本,大言之是自然、是天;小言之,则是各自的小我。“小我”与“大自然”混然一体,这便是中国人所谓的“天人合一”。小我并不和此大自然体对立,只成为此体之一种根荄,渐渐生长扩大而圆成,则此小我便与大自然融和而浑化了。此即到达天人合一的境界。中国《大学》一书上所说的修身、齐家、治国、平天下,一层一层的扩大,即是一层一层的生长,又是一层一层的圆成,最后融和而化,此身与家、国、天下并不成为对立。这是中国人的人生观。

    我们若把希腊的自由观念和罗马帝国以及基督教会的一种组织和联合的力量来看中国史,便得不到隐藏在中国史内面深处的意义与价值。我们必先了解中国人的人生观念和其文化精神,再来看中国历史,自可认识和评判其特殊的意义和价值了。但反过来说,我们也正要在中国的文化大流里来认识中同人的人生观念和其文化精神。

    继此我们再讲到中两双方的宗教信仰。西方人常看世界是两体对立的,在宗教上也有一个“天国”和“人世”的对立。在中国人观念里,则世界只有一个。中国人不看重并亦不信有另外的一个天国,因此中国人要求永生,也只想永生在这个世界上。中国人要求不朽,也只想不朽在这个[让界上。中同古代所传诵的立德、立功、立言三不朽,便从这种观念下产生。中国人只想把他的德行、事业、教训永远留存在这个世界这个社会上。中国人不想超世界超社会之外,还有一个天国。因此在西方发展为宗教的,在中国只发展成“伦理”。中国人对世界对人生的“义务”观念,反更重于“自由”观念。在西方常以义务与权利相对立,在中国则常以义务与白由相融和。义务与自由之融和,在中国便是“性”(自由)与“命”(义务)之合一,也便是“天人合一”。

    西方人不仅看世界常是两体对立,即其看自己个人,亦常是两体对立的。西方古代观念,认人有“灵魂”“肉体”两部分,灵魂部分接触的是理性的“精神世界”,肉体部分接触的是感官的“物质世界”。从此推衍,便有西方传统的“二元论”的哲学思想。而同时因为西方人认为物质世界是超然独立的,因此他们才能用纯客观的态度来探究宇宙而走上科学思想的大园地。中国人则较为倾向“身心一致”的观念,并不信有灵肉对立。他看世界,亦不认为对我而超然独立,他依然不是向外看,而是向内看。他认为我与世界还是息息相通,融为一体。

    儒家思想完全以“伦理观”来融化了“宇宙观”,这种态度是最为明显了。即在道家,他们是要摆脱儒家的人本主义,而从宇宙万物的更广大的立场来观察真理的,但他们也依然保留中国人天人合一的观点,他们并不曾从纯客观的心情上来考察宇宙。因此在中国道家思想里,虽有许多接近西方科学精神的端倪,但到底还发展不出严格的西方科学来。

    以上所述,只在指出中西双方的人生观念、文化精神和历史大流,有些处是完全各走了一条不同的路。我们要想了解中国文化和中国历史,我们先应该习得中国人的观点,再循之推寻。否则若从另一观点来观察和批评中国史和中国文化,则终必有搔不着痛痒之苦。

    本文原载《中国文化史导论》,商务印书馆,2023年版

  • 三星堆

    三星堆遗址位于四川省广汉市三星堆镇鸭子河南岸。
    根据文物分析得出三星堆文化遗存的年代是距今3600年到距今3000年之间。
    目前发现的三星堆遗址近12平方公里,明代挖掘的运河马牧河穿过三星堆的西城墙。遗址三面环墙,北临鸭子河。沿着马牧河,依次分布有祭祀区、宫殿区、居民区、手工业作坊区。河北岸有一片台地,像一轮弯月,是三星堆遗址的核心区域,约3.6平方公里。

    1929年当地农民燕道诚家院子旁发现过一个埋葬有400余件玉石器的长方形土坑,为三星堆考古之开始。

    青关山1号大房子,。
    青关山1号建筑平面结构图:面积超过1000平方米,8个埋葬坑出土的器物与建筑中各功能区分布契合,建筑内通道两侧126个柱洞可以用于安装青铜人像和青铜面具
    八号坑神坛:方形基座上为献祭平台,共有13个青铜人像。第一组人像4人,各自面向斜外侧,跪于献祭平台四角。第二组人像4人,可见獠牙,坐在献祭平台各侧边中部的镂空小凳上。第三组人像4人,体型大于其他小人,各自跪在一个小型柱状台基上,4人共同扛起一个由铜杆构成的抬架。第四组人像1人,跪在献祭平台中心的一座山形台基上,并背负一件有盖圆罍
    金面罩铜人头像
    戴金面罩青铜人头像
    戴尖脊帽铜小立人像
    铜扭身跪坐人像
    青铜眼形器
    金面罩
    铜太阳形器
    铜鸟
    铜神树
  • 下岗记述

    一、概念、数字与由来

    1990年代初期,有些地方出现“停薪留职”,有些地方“厂内待业”,有些地方出现“放长假”“两不找”等现象。90年代中后期,下岗职工问题作为一种社会经济现象开始突显。

    据2000年、2003年《中国统计年鉴》,1995-1997年国有单位职工人数变化不大,分别为10955万人、10949万人,10766万人。从1998年开始,人数就开始暴跌:1998年,8809万人;1999年,8336万人;到2002年,这项数据已落至6924万人。 6年间,国有单位职工减少将近4000万。与此同时,城镇集体单位职工人数也呈大幅下降之势,从1995年的3076万人变为2002年的1071万人,减少2000万。下岗人员增多,城镇登记失业率攀升,1995年2.9%,2002年4.0%。

    1996年,中国社会科学院研究院冯兰瑞认为:如果再加上城镇新增长劳动力、农民工等,“九五期间”,失业率可能达到21.4%。

    据2002年2月劳动和社会保障部发布的全国下岗职工报告:1998-2000年,全国国有企业共发生下岗职工2137万人。其中1998年年初为691.8万人,当年新增562.2万人;1999年上年结转610万人,当年新增618.6万人;2000年上年结转652万人,当年新增444.6万人。

    从总量上看,三年间年度下岗职工总量呈下降趋势。从地域分布看,下岗职工主要集中在老工业基地和经济欠发达地区,东北三省占25%;从行业分布看,主要集中在煤炭、纺织、机械、军工等行业。

    2001年初,国有企业(含国有联营企业、国有独资公司)下岗职工为657.3万人,当年新增234.3万人,减少376.2万人,增减相抵,2001年末实有下岗职工515.4万人。2001年底,国有企业再就业服务中心实有下岗职工463.6万人,进再就业服务中心比例为89.9%。

    1996年中央经济工作会议,朱镕基做总结讲话分析了三大行业的冗员情况。首先是煤矿行业: “我记得统配煤矿是360万人,顶多有120万人就足够啦,多了240万人,人工成本占吨煤成本的三分之一。” 然后是铁路系统: “铁路系统现在也是亏损的一塌糊涂,去年亏损100亿元,也是因为300多万人有100多万人就够了。” 最后是粮食系统:“粮食系统更不得了,现在有400多万人。前几天,我请了国家粮食局的一些老同志来座谈当前的粮食问题。大家都感到过去300万人,现在400万人,减一半都没有问题。”

    改制“三转”(政府转换产权,职工转换身份,企业转换机制)中,国企职工转换为失业人员,通过“买断工龄”下岗,签订《解除劳动合同书》后领取当地上一年平均工资三倍安置费(1~2万元)。

    二、社会影响

    1997年10月-11月,江西省社科院社会学所对南昌市下岗职工进行调查发现,490多人中,45.7%月收入低于120元,30.4%在120-200元之间,14.9%在201-400元之间,大部分属于低收入人群。

    1999年,一份对多地下岗职工的调查显示:“(下岗工人)80%-90% 是进入社会职业声望较低的传统零售、服务业,以及过去是以农民工为主体的苦脏累岗位或临时就业岗位。”

    2003年《中国改革》刊载了一篇李培林对东北抚顺、本溪等四座城市下岗职工的调查报告,其中42.6%的人认为当前社会很不公平,37.7%的人认为太不公平,至于家中的主要困难,除了吃饭,依次是:子女教育费太重,有病看不起,养老没着落,以及住房、冬天取暖、穿衣等问题。

    2006年,吉林大学一个团队走访了200多位东北下岗职工,以下是部分受访者的回答:
    “我们现在要求不高,能买米能买油,粗茶淡饭饿不死,就行了。”
    “能实实在在给点钱就啥也不说了。”
    “我们这好多两人都下岗,孩子都像你们这么大都上学呢。……我,晚上出去捡破烂去,白天卖了五块钱,够吗?这孩子生活费在哪出?我们家姑娘上大学呢,爷们还有病。白天出去捡怕人笑话,晚上出去,…欠水费,卡我电。……这法律应该欠水卡水,欠电卡电,现在卡着你老百姓,一个礼拜没有电了。”
    “……现在富的太富了,轻轻松松好几百万,穷的太穷了。以前大米八毛一斤,现在一块多一斤,什么买时都得寻思寻思。买菜一个月真是啥菜便宜买啥菜,肉是半个月吃一回……。”
    “有时都不想活了啊。没有生活来源呀……你就等着饿死,……这下岗那下岗,饭都没有吃的。孩子大了上学你能不愁吗?现在供个大学多少钱,挣多钱的孩子能上学,当父母的没能耐的,孩子大了将来上学不就完了吗?现在孩子上学交学费咱父母都累,别说上大学了。对不起孩子啊,孩子以后咋整?……”
    “现在招工,要去一个月400到500块钱,真都不如低保,采暖啥的还能给你免了,农民工干这活行,他不用考虑(采暖费,水电费,子女教育费等)……现在老板太黑,人有的是,400一个月你干不干?你不干,农民工干。就这价,农民工干也不容易。”
    “现在吧,有门道不下岗,一个月好几千,咱这样一分钱没有,你有本事,有技术没有用,老子英雄儿好汉,你爸爸厉害,你啥也不用干。”

    三、重点区域

    铁西区密布着沈阳市80%以上国有工业企业,是著名的核心工业区,拥有各类钢厂、水泵厂、电缆厂、新华印刷厂、东北制药厂等,绝对数超过1000家。保工街、卫工街、轻工街、重工街……老铁西区的大部分街道,都以“工”命名。1995年,铁西区停产、半停产企业增加到三分之一, 30多万工人中有13万人下岗。工厂因为没有太多加工订单,为保持运作,只能放一部分工人回家“休假”。

    据不完全统计,从二十世纪九十年代开始,上海先后有100多万国营工人加入下岗潮行列,庞大的工人群体因为所在工厂的关闭、转产和改制,纷纷下岗。2003年,上海市劳动和社会保障局宣布:上海市已没有下岗职工,“再就业服务中心”已全部关闭,成为首个“下岗工人”消失的城市。

    四、其他表现

    电视剧《抉择》:以山西中阳市纺织印染厂工人下岗为背景

    沈阳有一路公交车是202路,从和平区沙山发车,穿越和平、铁西、进入皇姑区段的塔湾站。据当地人讲,司机师傅都不爱跑这条线,因为这条线附近的老百姓穷,所谓“素质低”,有上车不给钱的,有骂人的,有抢座位的,有小偷小摸的……这条线路跑的是铁西区与皇姑区交界的地方,这里的下岗工人最多,那个时候有人还给202线公交命名为“下岗车”

    铁西区一时间变成了“休假”职工的“工人度假村”,铁西的应昌市场每天都有挂着牌子的下岗工人在找零工,擦玻璃、刷油漆等,人才市场最大的特点就是众多求职者都戴着白色的口罩。这些曾经的国有工人觉得丢人,不愿被人认出来。
    当时在铁西颇为流行一首诗歌——《下岗工人》:“习惯了接订单的手,今天的指间流出彷徨,装工资的口袋,今天写满空荡……”

    2011年,电影《钢的琴》上映,男主角陈桂林是位钢铁工人,下岗后组建了一支婚丧乐队,奔波于各红白喜事现场。

    期间,中国大量引进白羽鸡,养鸡业快速成长,一跃成为了世界三大白羽肉鸡生产国之一。市场上的鲜鸡供应由此达到一个可观的规模,便宜的鸡架成为下岗工人热爱的食物——由此创造了一道独特的地方菜肴,“沈阳鸡架”。

  • 杨阳:从“受命于天”到“内圣外王”——先秦时期君权合法性的理论建构

    一、正统与合法性:概念解读与问题意识

    中国历史上的王朝,有些“其兴也勃,其亡也速”,有些则江山牢固,国祚绵长,还有些在遭遇重大危机后,仍能长期偏安一隅,甚至再度“中兴”。对这种现象,古人论之,多以君主特别是开国之君的“德行”“恩泽”“政绩”等加以解释 。武力强大如秦朝,之所以“一夫作难而七庙堕,身死人手,为天下笑者”,根本原因就是“仁义不施,而攻守之势异也”(《贾谊集·过秦论上》)。周王朝国祚长达800年,是因为它以仁德立国,汉王朝“君天下四百年”,是因为它政绩斐然、“恩泽深渥” 。总之, “帝王之受命,其上以德,商周是也;其次以功,汉唐是已”,而“德”“功”之所以重要,是因为“德足以绥万邦,功足以勘大乱,皆足以莫民也”。也就是说,在他们看来,能否安抚民心、赢得民众的支持,是决定国祚长短的关键。

    罗素在讨论统治和权力的时候,曾敏锐地发现“作为有效力的法律,它依赖舆论甚至要比依赖警察的权力为多”,指出“不以传统或赞同为根据”的“赤裸裸的权力”,不仅缺乏效力,且难以长久 。国家意志和政策目标只有与人们自我实现的需要高度耦合,才能将实现和实施过程的阻力降低到最低程度——这一点,即便是高度迷信权力的法家也曾清晰洞见 ,而政治统治需要得到被统治者的认可和自愿服从才能稳定长久,也是中国古代政治思想家们的共识。

    这就是对合法性的认识,尽管他们都没有使用合法性这一用语。在中国历史上,人们更习惯使用“正统”或“正闰”来指称具有合法性的政权,而将“夷狄行中国之事”和“人臣篡君之位”称为“僭”和“逆”(《郑思肖集·杂文》),也将“贼后篡政”建立的政权排除在正统之外,认为“有天下不可比于正统者三:篡臣也,贼后也,夷狄也”(方孝孺《逊志集》卷2)。这显然都是基于儒家“华夷”“君臣”“夫妇”等道德原则形成的判断,也表明儒家伦理原则在古人的合法性认知中占据着支配地位。正如饶宗颐所言:“正统”一词,“正之为义尤重于统”,“得国之正”的道德性要求优先于世袭制下对血缘统序的要求 。

    合法性认知以儒家伦理为基础,当然属于道德判断。若撇开这种判断所依托的具体价值信念,就能发现这与当代学界对合法性的认识存在明显的契合之处。合法性(Legitimacy)概念,自马克斯·韦伯(Max Weber)提出以来,已成为“现代政治生活的核心问题”和“政治思考与政治实践的中心议题”。按通常理解,它是指一种政治统治能够被被统治者认为是正当的、合乎道义的,从而自愿服从或认可的能力与属性 。这一定义大体反映了韦伯的本义,只是韦伯更强调“任何统治都企图唤起并维持对它的‘合法性’信仰” 。

    统治者怎样才能唤起并维持合法性信仰呢?恩格斯注意到提供公共产品的重要性,指出:“政治统治到处都是以执行某种社会职能为基础,而且政治统治只有在它执行了它的这种社会职能时才能持续下去。” 李普塞特(Seymour Martin Lipset)强调统治的“长期持续的有效性”,并认为这种有效性主要表现为“持续不断的经济发展” 。公共服务供给和政绩当然有助于赢得被统治者的支持,但似乎都还不足以“唤起”社会的“信仰”。越来越多的学者将合法性视为一种道德判断,如伊斯顿(David Easton)就将人们的支持区分为“特定支持”和“散布性支持”,认为散布性支持不是来自个人对自身利益的考量,而是基于内心的“道义原则和是非感” 。

    个人的道义原则和是非感是在社会价值系统的平台上孕育生长出来的,会体现出与该系统相同或相近的价值倾向。在马克思看来,这个价值系统本身就是意识形态的核心部分,它在国家公开或隐形暴力的支持下,通过系统的制度化机制,控制着人们的精神世界,形塑了人们的世界观、价值观和政治情感取向,以至于“通过传统和教育承受了这些情感和观点的个人,会以为这些情感和观点就是他的行为的真实动机和出发点”。

    社会价值系统、认知模式和情感偏好的形成,是一个复杂而漫长的过程。中国传统合法性观念的生成,也是持续的理论建构、特定思想的意识形态化、上升为意识形态的特定思想通过制度激励机制充分社会化这三种历史进程累积叠加的结果。对这个问题的深度发掘和系统清理,远非一人之力可以完成。1990年代初,笔者与友人合作完成了《顺天应人:圣人革命》 ,以“汤武革命”为中心,对先秦儒家的君权合法性理论进行了初步探讨;此后的多篇论文,也曾涉及相关问题,特别是2017年的《“受命于天”与中国古代施政观念的形成——商周政治思想述论》,对中国传统合法性观念的起源和早期形态做了较为系统的梳理 。2018年,张星久推出的《“圣王”的想象与实践——古代中国的君权合法性研究》 ,堪称是迄今为止关于中国传统合法性问题最有分量的研究著作,但其探讨偏重于“实践”,更像是政治文化传统的研究之作,对前文所言三个历史过程中的“理论建构”用力较少,这也为本文的探讨留下了空间。

    张星久认为,虽然合法性问题的讨论会涉及政体、政权和统治者(权威人物)三个层次,但由于古人大多怀有“普遍王权”的信念,从未对君主专制政体的合理性提出过真正的质疑,所以中国历史上的合法性问题就只能“是对一个具体政权的认同与支持与否的问题” 。这是很有见地的认识,在坚信以王权为中心的政治秩序是唯一合理的秩序模式的情况下,这种秩序形式本身就是思想立论和逻辑推理的前提,又怎能成为思想界质疑和讨论的对象 ?因此,中国古代对合法性问题的讨论,就只能集中在政体以外的其他两个层次,尤其是王朝政权上。由于古人更习惯用君主指代“王朝政权”,本文也将使用“君权合法性”的概念。

    如前文所述,本文讨论的是君权合法性的理论建构,故采取政治思想史的研究路径。就个人研究所见,在中国历史上,对君权合法性的理论建构大致始于殷商中后期,经过春秋时期的发展,到战国确定了主要内容和整体框架,至西汉引入天人感应和五德始终之说后大体完成。其间历时近1500余年,涉及众多的思想家和不同的思想流派,时间跨度之大,包含内容之多,绝非一篇论文所能详述。好在商周时期有关合法性观念的萌生、发展和基本形态,笔者已有专文讨论,故本文可以将讨论的重点集中于春秋和战国时期,至于对秦汉时期的进一步发展与定型的讨论,只能留待以后了。

    二、受命于天与生民立君:上帝与受命王者的神秘契约

    “受命于天”观念的起源很早,其产生和存在的基础是“交感思维”和“宇宙生命一体化”信念。远古时期,人们笃信巫术,相信万物有灵,以为万物之灵可以相互流转,故认为人可以通过特定的方式与神灵沟通,了解神灵的意志。在普遍相信神灵主宰人类命运、决定人类生死祸福的社会里,掌握和垄断与神灵沟通、传达神灵意志的权力,就成为实现和巩固政治统治的先决条件,也是统治者证成其政权合法性的主要途径。

    在中国早期国家起源阶段,颛顼就以“绝地天通”的方式结束了“家有巫史”的传统,以政治首领兼领了宗教领袖的权力。那些传说中的圣王——尧、舜、禹等,也大多是身兼大巫师的政治领袖。这表明章学诚所说的“官师治教合”(《文史通义·原道中》)——政权与教权一元化的权威结构,在中华文明的起源阶段业已形成 。既然在远古时代,“许多酋长和国王所拥有的权力,在很大程度上归之于他们兼任巫师所获得的声誉” ,也就不难理解自颛顼以后的早期政治领袖为何要把持巫术权力了。殷商时期,“王者虽为政治领袖,同时仍是群巫之长” 。商王通过垄断最重要的巫术法器(青铜器)、亲自主持重要的祭祀仪式、掌握占卜结果的最终解释权等方式,向他的臣民反复宣示对神—人沟通权力的独占。为更好地昭示自己作为“众王之王”的特殊性 ,商王还修改了作为斧钺象形的“王”字的字形,这与西周初年出现的“天子”称谓一样,都是为了更好地宣示“王”才是代天施政的唯一代表 。

    一些商王也意识到,上帝的授命不是无条件的,需要受命者承担起相应的政治责任。正如盘庚公开申明的那样:“予迓续乃命于天,予岂汝威,用奉畜汝众”(《尚书·盘庚中》)。虽然并非所有的商王都有这样的认识 ,但在盘庚、祖尹等看来,芸芸众生,“罔非天胤”(《尚书·高宗彤日》),都是上帝所生,上帝将他们交付给商王,就是希望这些民众能够更好地生息繁衍,受命的商王必须担当起“畜众”的责任,否则就会失去天命。

    虽然“畜众”责任与商王拥有的巨大权力相比,显得不成比例,但将政治责任的担当视为王者受命的条件,不论是对当时的政治实践,还是对合法性等政治观念的创新和发展,都具有重要意义。“受命于天”的信念起源于巫术时代,本有王命天定之意,将“畜众”责任引入其中,不仅淡化了其原有的命定论色彩,也厘定了这一观念的基本框架,使其内含的神—王之间的契约特征开始清晰可见。“受命于天”,就是上帝与王者之间的一个神秘约定,王者以担当某些特定政治责任的承诺,赢得了上帝的授命与庇护。

    商灭夏和周克商两次天命转移的历史经验,使周公产生了强烈的“忧患意识” ,形成了“惟命不于常”(《尚书·康诰》)的认识。这使周公对“受命于天”观念做出了创造性的发展。首先,周公将天(上帝)设想为公正无私的存在,授命与否或天命是否转移,完全取决于受命者的作为,所谓“天非虐,惟民自速辜”(《尚书·酒诰》)。这极大地凸显了受命者的主体责任,在一定程度上弱化了天的主宰能力,使“受命于天”观念框架中的授命和受命的双方关系发生了一定的变化。其次,周公将商人的“畜众”责任,拓展为一整套以“敬天保民”为纲领的政治行为和政策规范体系。从“往敬用治”(《尚书·君奭》)、“治民袛惧,不敢荒宁”(《尚书·无逸》)的施政态度,到“明德慎罚”(《尚书·康诰》)、“尚克用文王教”(《尚书·酒诰》)等施政原则,再到“无淫于观,于逸、于游,于田”(《尚书·无逸》)等生活自律,直至“爽惟民迪吉康”“作新民”(《尚书·康诰》)等施政目标,差不多涵盖了君主治国理政的所有方面,极大地强化了合法性理论对现实政治的指导和规范作用。最后,周公突破了天命只能通过巫术活动才能探知的传统认识,提出“天畏棐忱,民情大可见”(《尚书·康诰》),通过观察民情就可以了解天命的思想。

    徐复观认为,“周人建立了一个由‘敬’所贯注的‘敬德’‘明德’的观念世界,来照察、指导自己的行为,对自己的行为负责,这正是中国人文精神的最早出现。” 这一说法虽然没有错误,却有轻视天命信仰和巫术传统影响力的嫌疑。周公的确极大地凸显了受命者的主体责任,但这并不足以撼动天的主导地位,他提出了因民情而知天命的新途径,却无法改变西周社会遇事则行卜筮的行为习惯。事实上,当武庚和三监之乱爆发引起西周社会的普遍恐慌时,周公为了说服贵族,仍不得不搬出“宁王遗我大宝龟” (《尚书·大诰》)——用传统的占卜之法来证明周人的天命并没有转移 。周公只是祛除了“受命于天”观念中的命定论因素,通过强调受命者的主体责任,将对该问题的认识由神秘主义导向理性主义,并通过对这种责任的系统归纳,为后世施政确立了基本规范。周公的思想和实践所带来的人文之光,即便曾经耀动一时,也不足以照亮整个西周社会;它只是将中华早期文明推向了辉煌的顶峰,并没有推开新时代的大门。

    三、生民立君到立君为民:春秋时期“受命于天”观念的演进

    商周宗教是依托巫术信仰建立起来的,不可避免地带有巫术的“信仰的肉身化”和“信仰的仪式化”这两大特征,先天就内含向世俗世界挪移和转变的动能。这种转变,在殷商中后期已经出现,到西周初年达到了高峰。周公制定了大量的规制王室和贵族的行为规范,并将其塞入“礼”的范畴,使原本主要作为宗教仪式规范的“礼”,转变为融“神法”和“世俗法”于一体的制度规范体系。宗教规范和政治规范的互渗,一方面,会强调在政治生活中要保持在宗教活动中才有的“敬惧”态度,另一方面,也会降低宗教活动的神圣性,宗教活动最终流变成为形式主义的例行公事,只是时间的问题。

    西周末年,持续的政治衰败和社会动荡,引发了强烈的不满情绪,诗人的吟唱,由最初的“念我独兮,忧心殷殷”(《小雅·正月》)和“何辜于天,我罪伊何”(《小雅·小弁》)的自怨自艾,演变成“先祖匪人,胡宁忍予”(《小雅·四月》)“家父作诵,以究王訩”(《小雅·节南山》)的对祖先和周王的强烈质疑。虽然仍有诗人坚持“下民之孽,匪降自天”(《小雅·十月之交》),但既然天主宰一切,自然灾害、人间不公、“我王不宁”等不幸的发生,它就难辞其咎,最终得出“昊天不惠”“昊天不平”(《小雅·节南山》)和“浩浩昊天,不骏其德”(《小雅·雨无正》)等结论就是顺理成章的。这些被后人称为“怨天尤人”的情绪汇聚成对天的善意和公正性的质疑,虽没有摧毁天命信仰,却足以促使人们重新思考自己与神灵的关系。

    许倬云分析了《左传》中20多例春秋人对天的看法,认为大致可分为两类:“一类是人事決于天命的畏天论,一类是天命因于人事的修德论。” 将一切委诸天命的畏天论,在中国历史上从未绝迹,出现在“祛魅”大幕徐徐拉开的春秋时期,更是不足为怪,值得注意的是后面这类认识。虽然当时人们仍习惯将祭祀等活动作为国家的头等大事(《左传·鲁成公十三年》:“国之大事,在祀与戎”),保留着“天事恒象”(《国语·周语》)的信念,每遇重大行动,仍要借助卜筮做出决策,但是复杂的政治环境、日趋激烈的列国竞争,都促使人们越来越多地依据现实需要采取行动。他们循着周公因民情而知天命的认识,发展出了“民为神主”的思想,将长期隐身在“受命于天”观念中的“民”拉出了水面,以重塑天、王、民三者关系的方式,开启了君权合法性理论建构的新阶段。

    公元前706年,楚国入侵随国。随侯认为自己一向恭谨侍神,神灵一定会保佑随国。贵族季梁指出:“夫民,神之主也。是以圣王先成民而后致力于神”,随侯虽恭谨侍神,却无视民众的疾苦,造成了“民各有心,而鬼神乏主”(《左传·鲁桓公六年》)的局面,想获得神灵佑护是不可能的。公元前644年,宋襄公要用活人祭祀,司马子鱼反对,理由是:“民,神之主也。用人,其谁飨之?”(《左传·鲁僖公十六年》)这里的“民为神主”,当然不是在说民是神的主人,“主”是“主使”之意,表达的意思是神意由民意所决定。

    这种认识,当时也表述为神“依人而行”“惟德是依”和“惟德是辅”。公元前672年,虢公命祝史祈求神灵赐给自己土地,史嚣由此断定虢国必将灭亡。在他看来,“神聪明正直而一也,依人而行”,虢公一味迷信神灵,不修内政,民心离散,又怎能不亡?接着,他说出了那句后世习惯引用的名言:“国将兴,听于民;将亡,听于神。” (《左传·鲁庄公二十二年》)公元前655年,晋借道虞国伐虢,宫之奇以唇亡齿寒为由建议拒绝。虞侯以为“吾享祀丰洁,神必据我”。宫之奇反驳说:“鬼神非人实亲,惟德是依。故《周书》曰:‘皇天无亲,惟德是辅。’……如是,则非德,民不和,神不享矣。神之凭依,将在德矣。”(《左传·鲁僖公五年》)

    神明察秋毫、公正无私,其意志完全由民众的意愿所决定,民众意愿又完全取决于统治者的现实作为,而神所嘉许的现实作为就是符合“德”的行为。据张铉根考察,“德”字起源很早,在《诗经》中出现70多次,在《尚书》中出现200次以上 。虽然剔除普遍被怀疑是伪作的《古文尚书》的相关篇章,“德”字在《尚书》中出现的次数会打些折扣,但其在商周时已被广泛使用应是没有问题的。在当时,“德”字或单独使用,或与“行”连用 ,主要是用来概括和赞美统治者的正确行为,多与政策、政绩相联系。周公在这一意义上大量使用过“德”字,春秋人大多也是这样理解“德”的含义 ,所以诸如“抚民”“亲民”“恤民”“安民”“利民”“惠民”“和民”,乃至“视民如子,辛苦同之”,都属于史嚣、宫之奇所说的“德”的范畴。于是,“受命于天”的观念在春秋人的解读中呈现为统治者施行德政—获得民意支持—上帝授命的理论架构。

    春秋时期,人们对上述问题的探讨大多还属于如何争取民意支持的策略性考量,但也有人开始将目光转向了更根本的问题:既然上帝公正无私、不偏不倚,那么它到底会把君主与民众哪一方的利益放在首位?公元前614年,邾文公要迁都,命史官占卜,结果是“利于民而不利于君”,邾文公仍坚持迁都,理由是:“苟利于民,君之利也。天生民而树之君,以利之也。民既利也,孤必与焉。”(《左传·鲁文公十三年》)设立君主,目的就是“利民”,民众的利益就是君主制度存在的目的 。在中国历史上,这堪称是石破天惊的论断!然而,春秋人的思想突破并未就此止步。公元前559年,卫国贵族发动政变,追杀出逃的卫献公,晋侯认为这种行为实在过分。师旷不以为然,反驳说:“天生民而立之君,使司牧之”,目的是让民众更好地生息繁衍,倘若君主无道,致使“困民之主(生),匮神乏祀,百姓绝望,社稷无主”,这样的君主,“将安用之,弗去何为”?师旷进一步解释说:“天之爱民甚矣,岂其使一人肆于民上,以从其淫,而弃天地之性?必不然矣。”(《左传·鲁襄公十四年》)

    春秋时期,贵族发动政变杀掉或驱逐君主,是很常见的事情,时人在讨论原因时,也不乏“民弃其上,不亡何待”(《左传·鲁昭公二十三年》)之类的看法。但这些看法多是对某位君主丧失权位原因的客观分析,并没有表达出对这种行为正当性的认可。但是师旷上述之论则完全不同,他已明确反抗暴君行为的正当性。他不再将天看作是不偏不倚、全无情感偏向的客观存在,而是将天想象为具有“爱民”情怀的慈父。在民众与君主之间,师旷让天站在了民众一边,结果是在“受命于天”的合法性理论中,嵌入了有权反抗暴政的内容,尽管他并没有明确将这种反抗的权利交付给民众 。

    四、天志与明鬼:墨家学派对君权合法性理论的建构

    如前文所揭,春秋时期,“受命于天”的合法性理论出现了三个重要变化。首先,主宰之天被明显虚化,变成了“依人而行”“惟德是辅”的被动之天。其次,民意决定和支配天的意志,“人民同意”成为君权赢得合法性的决定性因素。第三,暴力推翻严重悖逆君主行为规范的暴君,是上天爱民意志的呈现。

    徐复观认为,春秋时期是“礼的世纪,也是人文的世纪” ,天命信仰的消解、礼的隆起、重民论的滥觞,都支持这一看法。然而,春秋毕竟只是中国轴心时代理性革命的起始阶段,刚刚走出传统的观念茧房,面对剧烈变动的世界,人们的思考不免表现出明显的过渡性特征;创新性观念很多,时常火花四射,但缺乏逻辑连贯性和理论系统性。经由他们的重新阐释,“受命于天”的观念结构发生了重大变化,也留下了许多理论盲点。将民意作为君主受命的根本依据,但民众的意愿要通过什么方式才能准确表达出来?将君主政绩(德)看作是决定民意的根本力量,政绩与君主内在品质又是何种关系?将主宰之天转变为虚化之天,传统的“天谴”“天罚”信念弱化之后,又以何种力量约制君权使之规范运转?反抗和推翻暴政是正义之举,那么谁有权成为这种行为的主体?

    承接春秋的战国时期,“礼仪中的亲亲精神一天天地稀薄,并演变向权谋术数、凌弱暴寡的方向” 。当道德考量被逐出政治生活,合法性问题便很难进入政治思想的论域。法家认为政治角力实力为王,道德不过是权力意志的呈现 ,他们和沉醉于权谋诈术的纵横家一样,关心的是通过何种策略和方法获取权力,并不在意这种策略和方法的正当与否。杨朱、庄子一系的道家将国家和文明看作是“匡人之形”的大牢笼,大体秉持无政府立场,当然也不会讨论这一问题。故而战国时期,虽是“百家蜂起”,但真正参与君权合法性理论建构的主要是儒家和墨家。

    墨子基本接受了“受命于天”的理论框架。他驳斥命定论,讥讽那些失去政权的“暴王”将亡国的原因归于命运的做法 ,这使他特别强调统治者政治责任的重要性,认定政权存亡,完全“存乎上之为政也”(《墨子·非命下》)。在他看来,人本生来“自爱”“自利”(《墨子·兼爱上》),人之间的相处,必定会出现“人是其义,以非人之义”的“交相非”的局面,甚至会发生“天下之百姓,皆以水火毒药相亏害”的惨剧(《墨子·尚同上》)。面对这种情况,孔子等儒者认为通过道德感召式的教化可以使人“有耻且格”,但墨子认为这是缘木求鱼,因为人世间,父母、读书人、君主众多,但“仁者寡”,提倡效法他们,以他们为榜样,与其说是让人们学好,毋宁说是让人们学坏(《墨子·法仪》)。在这一情况下,也就只有“天”作为样板可以让人们去效法了。为此,他重拾了天的权威。

    在墨子这里,“天”是“欲义而恶不义”(《墨子·天志上》)的爱憎分明的情感之天,是“爱天下之百姓”的“至仁”之天,是带有“兼相爱”“交相利”等特定意志取向的天。在一个天逐渐被虚化、圣人日益凸显的时代,墨子逆流而动,从破败的西周观念殿堂中抬出神性之天,填充进自己的政治主张,当然是希望放大“兼相爱”“交相利”等主张的权威性。正如杨庆堃所言,把“道德意义注入原本和道德无关的自然力之中”,可以“赋予道德的秩序以先天的公正性”。但是思想主张对现实的干预能力,往往需要以现世的赏罚为基础。《诗经·大雅·抑》中原有“投我以桃,报之以李”的诗句,墨子从中引申出“爱人者必见爱也,而恶人者必见恶也”的人际交往互动逻辑(《墨子·兼爱下》),并将这种逻辑推演至天人之间。这虽然使天看上去像是“根据明确的道德计算行动的神” ,但却能很自然地得出从天所欲必得“天赏”,逆天而行定受“天罚”的结论。

    墨子要求统治者“动作有为,必度于天,天之所欲则为之,天所不欲则止”(《墨子·法仪》),称之为“顺天意”。在他看来,“顺天意者,兼相爱,交相利,必得赏。反天意者,别相恶,交相贼,必得罚”(《墨子·天志上》)。然而,谁来执行赏罚呢?墨子搬出了鬼神充当天志的执行者。伍非百以为“墨子以鬼神二灵,皆能作祸福于人间,而助天行志” 。当时社会普遍相信鬼神可以控制人的生死祸福,墨子选择鬼神来承担代天赏罚的责任,就是想利用这种普遍信仰来增强“天赏”和“天罚”的现实感和威慑力。他批评儒家扬圣抑天而黜鬼神的理性态度 ,以极其夸张的修辞赋予鬼神以圣人无法企及的无限能力。他说:“故鬼神之明,不可为幽闲广泽,山林深谷,鬼神之明必知之。鬼神之罚,不可为富贵众强,勇力强武,坚甲利兵,鬼神之罚必胜之。”(《墨子·明鬼下》)

    正如赖伯然所言:“墨家将逐渐褪去神性的‘天’拉回圣坛,于超验世界中建构起一个具有无限权威的神圣意志;‘天志’的所欲所恶,决定了人间的所赏所罚。” 在政治生活日益走向现实主义,观念世界日趋理性主义的战国时代,墨子上述尊天明鬼借助神秘的外在权威做支撑的思想主张 ,即便能盛极一时,也不免昙花一现的命运。时代没有站在墨子一边,但时间可以终结这个时代。当理性主义的思想潮流逐渐消退之后,以董仲舒为代表的汉儒终于重回墨子的思维路径,再度将充分儒家化的“天”抬升到至尊地位,并以天人感应理论充分吸纳了“天赏”“天罚”观念。这也清楚地表明,在君权合法性理论建构中,墨家学派虽处配角位置,但其思想贡献仍是不应被埋没的。

    五、天与人归与内圣外王:儒家学派对君权合法性理论的再发展

    据张铉根统计,“天命”一词在《论语》中共出现三次 ,但都与“受命于天”无关。《论语》所录孔子言论,直接与君权合法性相关的大概也只有《八佾》中的一条 。孔子将政治理解为正己正人的道德感召过程(《论语·颜渊》:“政者,正也”),认为统治者“其身正,不令而行;其身不正,虽令不从”(《论语·子路》),遂更多地聚焦为政者的道德品质,强调“克己复礼”(《论语·颜渊》)“学礼”“知礼”“约之以礼”,做到“自讼”“自省”,要求为政者通过“修己”,以榜样的力量,达到“安人”“安百姓”的目的。这些思想看似与君权合法性无关,但孔子对为政者内在品质首要性的强调,却有力地推动了外在的功德之“德”向内在的品德之“德”的转变。孔子之后,上天授命凭依的“德”,主要所指逐渐由外在的政绩转变为统治者的内在品质,这为“内圣外王”命题的最终出现奠定了基础。

    孟子将“受命于天”的观念表述为“天与人归”。所谓“天与”,其背后的逻辑是天子之位是上天授命所得,在位天子可以向上天推荐自己的继承人,但需要得到上天的认可。因为“天不言”,它表达意见的方式是“以行与事示之”,所以天的意见如何,必须通过观察研判方能得知,方法是:“使之主祭而百神享之,是天受之;使之主事而事治,百姓安之,是民受之。”(《孟子·万章上》)在孟子看来,天向人们传达的意见,会在被举荐人职务活动中反映出来,若被举荐人主持各种祭祀活动,都取得了“百神享之”的效果 ,就说明天已经接受;在主持政务过程中,他处理得井井有条,百姓都安居乐业,就说明百姓也已接受。

    本来万章所问是尧是否曾将天子之位传授给舜,孟子最初的回答也是“天子不能以天下与人”,舜继天子位是“天与之”,但在进一步解释天的意见如何呈现时,却引出了民意是否接受的问题。这看起来似乎答非所问,但背后自有其逻辑。孟子的“天与之”,本来就内含“民受之”。所以天意固然需要以“百神享之”的方式呈现,也需要诉诸民意的表达,孟子最后引《泰誓》中“天视自我民视,天听自我民听”来结束本次问答也说明了这一点。

    “民安之”只是“民受之”的被动表达,孟子还提到了多种民意的主动表达形式,如以歌谣等表达支持意见,“讴歌者,不讴歌尧之子而讴歌舜”(《孟子·万章上》);出现各种纠纷争讼,双方会不约而同地找其裁决,“讼狱者,不之尧之子而之舜”(《孟子·万章上》);民众携其子弟投奔归附,“民归之,犹水之就下”(《孟子·梁惠王上》);在举兵征伐过程中,每到一地,民众都自发走上街头,“其君子实玄黄于篚以迎其君子,其小人箪食壶浆以迎其小人”(《孟子·梁惠王上》)。若将民意扩展理解为人心,将社会上层纳入,还包括“天下诸侯朝觐者,不之尧之子而之舜”(《孟子·万章上》)。就这样,孟子通过对民意表达形式的归纳,将春秋以来反复强调的“人民同意”主张具体化了。

    在孟子这里,民意也表述为民心。“桀纣之失天下也,失其民也;失其民者,失其心也。得天下有道,得其民,斯得天下矣。” (《孟子·离娄上》)那又如何争取民心呢?孟子说:“得其心有道,所欲与之聚之,所恶勿施,尔也。”(同上)关键要“与民同之”“乐民之乐,民亦乐其乐;忧民之忧,民亦忧其忧”(《孟子·梁惠王下》)。为争取民心,孟子还设计出系统的制度方案,这就是他的仁政之说。

    但是仁政等举措能否实施,则取决于君主是否有“不忍人之心”。“有不忍之心,斯有不忍人之政矣。”(《孟子·公孙丑上》)所谓“不忍人之心”,又称“恻隐之心”,是“四端”之一,是人特有的趋向于善的内在因素或能力。孟子认为,人只要在生活中努力养护和扩展这种善端,就能实现做人成圣的使命 。强调君主要具备“不忍人之心”,也就是对君主德性提出了要求。孟子认为,德性是“天爵” ,有德之人理应身在高位,所谓“唯有德者宜在高位”。这不仅是因为“不仁而在高位,是播其恶于众也”,更重要的是还会导致国家政权的倾覆,“三代其得天下也以仁,其失天下也以不仁”(《孟子·离娄上》)。

    得民心须行仁政,行仁政又要求君主必须有“不忍人之心”,“天与之”的前置条件最终聚集于君主的德性,“圣”与“王”终于走到了一起。正如我曾多次指出的,儒家以“人必须为尧舜”赋芸芸众生以做人成圣的宗教性义务,再以“人皆可以为尧舜”将“成圣”大门向全社会开放,最终将个体做人成圣的业绩作为政治分层的依据,而圣人自然会被放置在权力的顶层 。但是这种“内圣外王”的想象与实际经验存在明显反差,孟子还需要作出相应的解释。

    首先,孔子何以不王?孟子的解释是:“匹夫而有天下者,德必若尧舜,而又有天子荐之者,故仲尼不有天下。”(《孟子·万章上》)在另一处,孟子又说,孔子、伯夷、伊尹这样的圣人,“得百里之地而君之,皆能以朝诸侯有天下”(《孟子·公孙丑上》)。在这里,内圣外王被设置了前提条件,若是有“百里之地”的实力派,命题自然成立,若是无权无势的平民,就需要现任天子的举荐。

    其次,商汤文武固然都是内圣而外王,但他们的子孙却显然达不到内圣的标准。对这种情况,孟子解释说:“苟为善,后世子孙必有王者矣。君子创业垂统,为可继也,若夫成功,则天也。”(《孟子·梁惠王下》)在儒家的“亲亲”原则下,先祖功业所带来的恩泽在血脉相传中传递给子孙,子孙即便达不到内圣,也可以合法地君临天下。因此,内圣外王的逻辑,按孟子本意,仅适用于创业之君。

    基于内圣外王的想象,孟子创造出儒家特色的“圣王—暴君”结构,为革命、变置、反抗等理论的提出奠定了基础。如前文所揭,春秋时已出现了肯定推翻暴君统治正当性的观念,孟子通过对这种行为较为仔细地分析,发展出了儒家的反抗暴政的理论。

    孟子以“出乎尔者,反乎尔者也”的理由,肯定了民众背弃君主和官员的行为。邹国与鲁国发生边境冲突,邹国官吏33人被杀,邹穆公抱怨民众见死不救。孟子指出,边境发生灾荒的时候,百姓流离失所,这些官吏却无动于衷,瞒报灾情,现在这些官吏被杀,民众见死不救,岂不是合情合理的行为(《孟子·梁惠王下》)?

    本着“民为贵,社稷次之,君为轻”的原则,孟子提出“诸侯危社稷,则变置”的主张(《孟子·尽心下》)。所谓“变置”,是指同姓重臣以政变形式废立君主。他提出当君主“有大过”时,“贵戚之卿”要反复进谏,若君主拒不纳谏,“贵戚之卿”可以行废立之事。这里的“贵戚之卿”,是指君主的同姓重臣。至于其他大臣,面对“有大过”的君主,在反复进谏君主不听的情况下,可以选择辞职,但无权擅行废立(《孟子·万章下》)。

    尧舜以禅让得位,汤武则以革命登基,孔子更推崇前者,认为不仅“尽美”,而且“尽善”(《论语·八佾》)。这种认识,在孟子这里已经消失。自夏以后,改朝换代都是通过暴力手段完成的,赋予暴力革命的正当性,是成功构建合法性理论、构造圣王—暴君理论解释框架的关键。因此,在孟子这里,汤武革命被赋予更崇高的道德意义。他认为失去民心的桀纣已失去了君主身份,变成了“一夫”,而汤武因“海内归心”已成为实际的君主,他们的革命行为不过是行使天子的诛杀之权,根本不存在“弑君”的问题(《孟子·梁惠王下》)。

    当然,手段也不是不重要。尧舜以禅让得位,孔子之所认为更可取,是因为这种方式不会造成生命损失。孟子认为因为作为革命主体的圣人具有“行一不义,杀一不辜而得天下,皆不为也”的悲悯情怀,革命目的也只是“诛一夫纣”(《孟子·公孙丑上》),其过程也不会造成更多的流血,而“解民倒悬”“救民水火”的最终结果,反而更能突显暴力革命的正义性。

    变置、禅让和革命,不论采取哪一种方式取得政权,其合法与否关键在于行为动机。在解释舜继尧位为何属于“天与之”时,孟子指出舜并无代替尧之子成为天子的主观动机。他说尧三年丧期结束后,舜躲避到了南河之南,真诚地希望尧之子继位,只是在社会的一致吁求之下,才最终继位为天子。若尧一死,舜就“居尧之宫,逼尧之子”,那就不再是“天与”,而是“篡”了(《孟子·万章上》)。在谈到伊尹放太甲的故事时,他说:“有伊尹之志则可,无伊尹之志,则篡也。”(《孟子·尽心下》)所谓“伊尹之志”,朱熹解释就是“公天下无一毫之私者”。在谈到汤武革命的动机时,孟子也一再强调不是为了“富有天下”,而是“为匹夫匹妇复仇”“救民于水火之中”(《孟子·滕文公下》),是“一怒而安天下之民”(《孟子·梁惠王下》)。

    荀子是孟子之后又一位儒学大师。他反对孟子的性善论,认为承认人先天具有为善的道德资源,在逻辑上会将做人成圣看作是不假外求的过程,会否定外在权威存在的必要性。为此,他以性恶论在人性与人本质之间制造出对立关系,将做人成圣看作是通过改造人性而不断“积伪”的过程。因人的内在道德资源已被性恶说所褫夺,则礼仪、君师、圣王等外在权威就成为帮助人们实现存在意义的根本保障 。虽然荀子通过上述理论有力地论证了君主专制制度存在的必然性和合理性,但他很少直接论及君权合法性问题。在偶尔谈到相关问题时,也多是在重申孟子天与人归的观点。

    《荀子·大略》曾言:“天之生民,非为君也。天之立君,以为民也。故古者列地建国,非以贵诸侯而已,列官职,差爵禄,非以尊大夫而已。”但并未进一步展开论述。荀子将“常有天下之籍”“亲有天下之籍”、能否号令民众作为判断是否具有君主身份的标准,提出在汤武革命前,汤武与桀纣的身份已经逆转,汤武因“海内之民莫不愿得以为君师”——已成为实际上君主,桀纣因已“不足以执天下之籍”,无法再号令民众,已经变成事实上的臣民。在这种情况下,汤武革命的本质就是君主诛杀独夫的行为,根本不存在“篡逆”的问题。他明确提出“天下归之之谓王”和“能用天下之谓王”的结论(《荀子·正论》),前者是对孟子“得人心者得天下”的重申,后者则是从“得民心”的结果(能用天下)所做出的反推。从结果反推原因,存在着将道德评判庸俗化的风险——真正的道德评判总是着眼动机而不是结果。不过,对荀子来说,强调革命者动机的纯洁性是困难的。因为在他看来,有欲望,甚至想成为天子都是“人情之所同欲也” ,所以除了强调过程中的“必不伤害无辜之民”外,也只能更多地将对君权合法性评判交给最终的结果了。

    六、结语

    来自远古的天命信仰,在商周时演化出“受命于天”的观念,它以上帝与王者之间要约的形式,将畜众和保民的责任作为统治合法性的承诺,深深嵌入了中国早期君主政治的运行过程。春秋时期,出现了“民为神主”“惟德是辅”的社会共识,伴随天主宰能力的下降,民由周公思想中传递上帝意愿的“灵媒”,转变为决定上帝意志的最终力量。邾文公、师旷等提出“立君为民”说,重置了“受命于天”理论框架中的君民关系,是春秋时期最具现代意义的创见,并由此引导出反抗暴政行为正当性问题的讨论。

    神一旦只能“依人而行”,其权威性就会显著弱化。墨子将自己的政治主张抬升为“天志”,作为评判王权合法与否的基本准则,又祭出鬼神去执行“天罚”,可谓用心良苦,但在战国理性主义大潮面前,也只能昙花一现。仅就君权合法性理论建构而言,孟子是先秦用力最多、贡献最突出的思想家。他以“天与人归”的命题整合了春秋以来的天意即民意、立君为民、有权反抗暴政等思想,归纳出合法获得君权的多种形式。他将天意所属和民心所归引向了君主的德性,强调行为背后的动机,最终以内圣外王之说,确定了儒家君权合法性理论在本质上属于道德判断这一主体特征。

    政绩或功德之“德”转变为品德之“德”,孔子以来周公的外在之“德”的内在化,是孟子提出内圣外王说的前提。与孟子的其他主张一样,内圣外王之说,在实际政治生活中,也未免“迂远而阔于事情”。首要的问题是,在失去天的威慑之后,儒家倡导的纲常规范如何能为社会,特别是大权在握的君主所遵守?君主政治会不会变成脱缰的野马?数百年后,以董仲舒为代表的汉儒通过再造天的权威,将内圣外王理论重新纳入天的主宰之下,最终完成对儒家合法性理论的再构建。其次的问题是可操作性。内在圣性本难测量,主观动机,外人也无法知晓。荀子尝试抛开动机,采取结果主义,将内在品德之德再度理解为政绩或功德之“德”。虽然“能用天下之谓王”,不过是成王败寇的另一种说法,但却成为后世最常见的评判方式。饶宗颐注意到,宋朝人讨论正统问题,就多偏重以是否实现江山一统作为判断依据。在现实生活中,动机和圣性,都远不如结果和功德那样清晰可见,“统一天下的功业就具有了反证统治者‘有德’与‘得民心’‘得天命’的功能” 。但是这样一来,儒家君权合法性理论道德判断的属性就会被严重削弱,以至于在政治实践中蜕变成对当权王朝和当今天子的道德粉饰。

    然而,对专制政治残酷现实的粉饰,虽不是孟子的本意,却是统治者制造自身合法性的重要手段。内圣外王的政治想象,随着儒学的意识形态化,逐渐转变为一种普遍的政治信念和认知模式。它不仅有通过追认王朝存续的正当性进而唤起整个社会服从的功能,对王朝政治也有规范和矫正的作用。它内含的君德要求和理政原则,制约和规范着君主的个人行为和朝廷的政策选择,也赋予精英社会评点朝政、裁量是非、批评君主的道德勇气和理论依据。此外,孟子论述的尧舜禅让、汤武革命和贵戚之卿变置等政权更迭方式,都成为合法取得政权的典型范式,在他身后中国历史上一再出现。

    本文来源:《政治学研究》2024年第2期

  • 张功耀:朱砂与垃圾科学[节编]

    文章首发于2013年,原名《朱砂、垃圾科学与年薪》

    2013年,网上爆出某仁堂生产的40多种中成药含朱砂,被香港卫生当局责令召回,禁止销售。

    朱砂入药始于晋代。最初是道家把它入药的。据葛洪写的《抱朴子》记载,有人发现临沅县(今湖南省常德市)境内一户姓廖的人家,世世代代都长寿。究其原因,居然是这户人家的井里埋了十斛朱砂。由此,道家突发奇想,把朱砂炼出来,一定比水泡出来,更加有效。从那以后,道家就开始用朱砂炼丹,以图帮助人们延年益寿了。

    朱砂入药之后,出现了朱砂到底有没有毒的分歧。魏朝的吴普说,朱砂无毒。梁朝的甄权说,朱砂有大毒。与李时珍同时代的何孟春则说,“朱砂性寒、无毒,入火则热而有毒,能杀人,物性随火而变”。何孟春的这个话,北宋的沈括在《梦溪笔谈》(卷24)中也说过。沈括的一位表兄的徒弟,曾经将一块冲洗过的朱砂服进了肚里,当晚就出现了“懵冒”,终于“一夕而毙”。对此,沈括很不理解。一种婴儿都可以服用的养心安神之药,何以会把一个大男人给毒死了?于是,沈括便把它记在了《梦溪笔谈》里。

    现在的人比沈括进步多了。沈括当年不理解的,现在的人已经能够理解了。这就是汞中毒。有人说,天然的朱砂无毒。这是错误的。事实上,已经有人将朱砂列为三大最毒中药(朱砂、雄黄、乌药)之首,也有人将朱砂列为十大最毒矿物(朱砂、雌黄、辉锑矿、铜铀云母、雄黄、石棉、方铅矿、硫砷铊铅矿、胆矾、碲化汞)之首。足见朱砂的毒性是如何了得。

    朱砂,在说英语的国家叫cinnabar。如果用古汉语直译,它也可以被翻译成朱砂。因为这个词的基本含义就是“砖红色”。

    据查,在世界范围内,只有印度和中国将朱砂入药了。古代南美洲人民曾经用朱砂做过颜料,但没有入过药。有趣的是,他们对朱砂有毒可谓了若指掌,如数家珍。他们发现的朱砂中毒症状包括,肌无力、身体颤抖、意识不清,严重的可以致死。这些认识与沈括的记载不谋而合。

    除南美洲之外,在西班牙离马德里225千米的一个地方,叫阿尔马登(Almadén),那里的朱砂矿曾经裸露在外。古罗马人占领那个地方的时候,发现那里的人普遍容易患上与呼吸系统或神经系统相关的怪病,寿年也特别短。当地人非常害怕,都纷纷迁走了。罗马人占领西班牙以后,竟然把那个地方用来处死奴隶和死囚。足见,古代西班牙人和罗马人知道朱砂有毒。

    在知识爆炸的时代,要特别警惕“垃圾科学”。

    “垃圾科学”这个术语不是科学哲学家最早提出来的,而是美国司法部1985年第一次使用的。它原指那些违背基本的科学常识、实验规范和逻辑要求,却以科学的名义流行起来的虚假的科学。大约这样的科学经常进入法庭,惹得美国的司法部门生气了,所以就得了这样一个诨名。这个概念提出来以后,震动了科学哲学界。进入90年代以后,科学哲学家把“垃圾科学”的概念扩大了不少。凡是用虚假实验数据做出来的科学,社会学家认为可能危害人类生存环境和生命安全与健康的科学,还有在历史的进步中遭遇淘汰的科学,都被列入了垃圾科学。

    为了让更多的人从垃圾科学的欺骗当中觉醒起来,下面我再一次地对某些专家鼓吹的陈词滥调做些分析。

    第一个陈词滥调:游离汞有毒,但朱砂入药自古有之,如果炮制好,配伍好,遵照医嘱合理用药,不仅无毒无害,更可以治愈疾病

    刚才我已经列举了甄权、沈括和何孟春这三位古中国人对朱砂有毒的论述,也列举了南美洲和欧洲历史上关于朱砂有毒的记载。它表明,历史上对朱砂有毒早就有了一定的认识。所谓“自古有之”,只能作为古人认识水平落后的标志,却不能作为可以继续愚昧下去的理由。至于“炮制好”和“配伍好”以什么做标准,用什么方法来评价,到目前为止,中毉药界连“混账的糊涂话”都还没有说出来。实实在在是一句毫无辩解价值的空话。

    事实上,朱砂是不可能被“炮制”和“配伍”到无毒状态的。

    朱砂的基本化学成分是硫化汞。由于朱砂含汞,西方国家早就禁止在内服药当中掺入汞元素。加上在他们国家的历史上从来没有过朱砂入药的记载,所以,他们对朱砂入药的安全性和有效性问题的讨论,完全没有兴趣。这种状况,给中毉药界“填补空白”和取得“领先于世界的科研成果”创造了不可多得的机会和条件。

    当今世界,汞中毒已为全世界所重视。这种重视是从化工工业延伸出来的,不是从朱砂入药延伸出来的。汞中毒与马兜铃科植物中毒相比,更加令人不安。马兜铃科植物中毒仅限于泌尿系统,或致膀胱癌,或致肾功能衰竭和肾癌,目前还没有发现影响其它系统的报道。汞中毒则不同,它可以侵害神经系统、呼吸系统、消化系统、循环系统、肝脏、肾脏、骨骼,等,多个系统和器官。而且,中毒途径也比马兜铃科植物更广。肠胃消化,呼吸,膜吸收,均可致汞中毒。其中,最严重的汞中毒莫过于水俣病(Minamata disease)。这是1956年在日本发现的一种汞中毒疾病。它的基本症状是肌无力、意识障碍、视野变窄(我国俗称“对子眼”)、失聪、语言表述不清,严重的甚至可致精神失常、昏迷,数周内即可死亡。今年的1月19日,全世界147个国家的医疗卫生专家联合起草了《预防汞水俣病公约》,期望在今年10月正式签署为国际性的约束文件。

    在如此严重的汞中毒流行的背景下,中国政府不重视朱砂入药的安全性问题已经不行了。于是,朱砂入药的安全性问题终于成了我国医学科学界的一个热门话题。

    遗憾的是,几乎所有中国人做出来的有关朱砂入药安全性问题的研究,全都是带着“弘扬中毉药文化”的理论义务做的。稍微有点科学哲学常识的人知道,任何带着预期的理论义务所做的研究,都可能陷入“围绕结论找证据”的哲学陷阱之中。所以,目前我们所能看到的有关朱砂入药安全性科研成果,无一例外地都是垃圾。

    在我的学术视野中,只有一篇有关朱砂入药的综述性论文勉强可读。这篇文章是三位中国人和两位外国人合写的。它发表在《皇家医学杂志》上,标题是《传统医学中的汞:朱砂的毒性与通常的汞一样吗》(Mercury in Traditional Medicines: Is Cinnabar Toxicologically Similar to Common Mercurials)。其中的三位中国作者来自贵州中毉药大学和遵义医学院,署名在前;两位外国人来自美国国家癌症研究中心,署名在后。他们声明自己的研究动机是,为朱砂入药提供科学的咨询意见。可是,他们所得出的结论却十分滑稽:朱砂入药比单质汞入药安全得多。

    在我看来,这个话等于没说!

    我曾经在一次学术会议上听到过类似的“高见”,作者经过洋洋洒洒“理论和实践相结合”的论述之后,得出一个结论说:“制约我国西部经济发展的最大问题是水的问题”。还有中国科学院的某个研究机构,“经过实地调查”之后也得出一个结论说,“北京市交通最通畅的时间段是凌晨两点到四点”。这样的科学研究成果,在我看来,都与没有说一样。

    值得注意的是,在这五位先生发表这篇综述性的论文之后不久,两位意大利药物学家,泡罗(Paolo D. Pigatto)和吉安泡罗(Gianpaolo Guzzi),联合发表了一篇“微评论”文章,题目是《朱砂的临床毒性》(The Clinical Toxicity of Cinnabar)。他们提出了五点批评意见。其中最值得我们重视的批评意见是,没有提醒读者注意,人体对朱砂中的汞元素存在与剂量无关的特异性非过敏反应;有关“依地酸钙钠可以解汞中毒”的结论,没有证据。

    记得我当年学习中毉的时候,医书上介绍使用朱砂的方法是“另包,水飞服”。其做法是,将朱砂用白布另外包好,不要放在其它药物一起去煎熬,而是放在备好的碗里,把熬好的汤药冲进碗里,待汤药冷却到一定程度之后,再把朱砂拿出来,喝那个药汤就行了。这样做,朱砂当然是无毒的。因为常温下朱砂的水解度小于0.001克/升。此时的朱砂根本就没有进到身体里边去,而是原原本本地从白布包里边被拿出来了。

    “现代中毉”使用朱砂的方法,仿佛不是我当年学中毉的那种做法了。倒像沈括的表兄所带的那位徒弟,直截了当地把朱砂吃进肚里去了。这就是找死了。事实上,朱砂在被加工成粉末的过程中,已经有部分汞元素变成游离汞了。汞在常温下是液态的,357℃即可气化。气化后的汞很容易被吸入人体,从而引起呼吸系统的汞中毒。此外,经过物理加工和加热煎煮之后产生出来的游离汞,一旦与盐酸或甲烷发生化学反应,就可以生成升汞,甲基汞或二甲基汞。这些物质所具有的毒性比单质汞高许多倍。其中,甲基汞的毒性比单质汞高1000倍以上。身体里边累积甲基汞达到10毫克,足以使一个70公斤的健壮男人在两周之内毙命。这就是明朝何孟春所讲的朱砂“入火则热而有毒,能杀人”的奥秘所在。值得注意的是,中毉药界至今蔑视这样的毒理,声称这个毒理属于“西方科学”。一群蔑视汞中毒病理的人,自我标榜说可以找到“合理的”配伍和炮制方法,除去朱砂的毒性,我实在想象不出,有哪些愚不可及的人会相信这样的鬼话。

    第二个陈词滥调:用毒药是中毉的一大特色。西药当中有许多药也是有毒的。所以,中毉用毒药也是正常的。

    这是比任何垃圾科学还要垃圾科学的垃圾科学。

    众所周知,在很长一段时间内,中毉中药一直标榜“纯中药制剂,无毒副作用”,现在他们终于承认“用毒药是中毉的一大特色”了。

    是的,西药当中也有许多是有毒的。但是,中毉用毒药和西医用毒药有着本质上的不同。这一点,我不知道说过多少遍了。概括地说,西医当中的毒药是可控制的,可代谢的,结果明了的,甚至是可预知的。中毉使用毒药全都是不可控制的,不可代谢的,结果不明了的,甚至,对中药到底有没有毒,哪些中药有毒,哪些中药没有毒,中毉界至今茫无所知的。正由于这样,鲁迅先生才把中毉叫做“有意或无意的骗子”。

    下面这个例子也是我多次举过的。

    发明抗生素是医学史上一次重大的进步。如果没有抗生素的发明,现在的人类恐怕依然平均活不过40岁。绝大部分人还会因为无法控制的细菌或寄生虫感染,而失去得老年病的机会。现在有那么多的人得“老年病”,从某种意义上说还是抗生素对人类做出的一大贡献。但是,抗生素是有毒的。并且,“抗生素有毒”是医生和药物学家从一开始就知道了的。

    西医药物学家明明知道抗生素有毒,为什么还要用呢?中毉界的人完全不懂医学史,也不懂西药研究与制造的药理,多次借这个话题给西医西药“扣屎盆子”。事实上,西医安全使用毒药的信心首先源自“艾尔利希魔弹实验”。20世纪初,德国药物学家艾尔利希通过605次的失败之后,找到了利用毒药治病而不严重伤害人体的方法,也就是,在使毒药毒死细菌和寄生虫的同时,可以确保人体自身的安全。这些药物仿佛可以分清敌我的“魔弹”,能够有选择性地杀伤敌人,而不杀伤自己人。药物代谢动力学兴起之后,人们又进一步地发现,可以通过药物的代谢动力学实验去判断药物的安全性。这样,使用毒药治病就已经有了“双保险”了。加上这些毒药的分子式在进行药物代谢实验之前就已经弄清楚了,它又为在极端情况下实现对药物安全性的人工控制,奠定了可靠的基础。如此,被批准入市的西医毒药,就有了第三副保险带了。遗憾的是,像西药研制中的“艾尔利希魔弹实验”,药物的代谢动力学实验和分子结构测定,在中药研究中是无法进行的。所以,到目前为止,还没有任何中药的毒副作用是事先可预知的,也没有任何有毒中药的毒理是事先就清楚了的,因而,没有任何中草药的毒副作用具有人工可控制性。一个是可以控制的,一个是不可控制的,二者怎么可以同日而语,相提并论呢?

    第三个陈词滥调:中毉用药讲药性,西医用药讲成分。用西医方法评价中毉方法是不尊重中毉。

    坦率地说,谁尊重我们的生命,我们就尊重谁;谁蔑视我们的生命,我们就蔑视谁。这是没有商量的余地的。

    至于“中药讲药性”,从来就不聪明,而今更显愚昧。对此,宋朝的寇宗奭早就批评过了。现在的中毉泰斗还拿它当宝贝,实属不智。比如说,寒、凉、温、热之间存在着递进关系。可是,有哪一位中毉药教授或中毉药泰斗可以把它们之间的递进关系界说清楚?中毉生连这个最基本的问题都说不清楚,凭什么要我们相信他们“依据药性用药”是合理的呢?

    中毉生只是朦朦胧胧地知道“西药讲成分”,却完全不知道西医如何讲成分。其实,“西药讲成分”,首先就要求进入人体的成分尽可能地单纯,以便于在极端情况下进行人工控制。其次,西药不但要讲药物成分如何进去,更重要的还要讲这些成分如何出来。按照“西医讲成分”的规则,凡是只能进去不能出来的东西,都不能被批准入药。可是,到目前为止,中药的绝大部分成分进出人体的关系至今不明。比如说,重金属进入人体之后,需要找到恰当的螯合剂,才能使重金属从人体当中被拿出来。可是,螯合剂并不是那么容易找的。没有找到螯合剂,这些重金属就是只进不出的东西。中毉生用药,只讲进去,不讲出来,这怎么可以与“西药讲成分”相比呢?

    “中毉讲药性”至今讲得含含糊糊,所以在临床上也是混乱不堪。假若一个人因为吃寒药而中毒,中毉生绝没有把握说,可以用热药急救。反之亦然。更何况中药的药性从来都是中毉生自己胡乱标榜出来的。它既没有确定性的经验描述,更没有丝毫的实验依据。木通曾经被标榜出来的药性是“辛、平、无毒”,中毉生一直拿它来为产育期妇女催奶。但它现在却被暴出可致肾癌和膀胱癌。如果“中毉讲药性”是有效的,则我们不禁要问,沈括的表兄手下的那位徒弟,在服用朱砂之后出现“懵冒”,是因为凉得太厉害呢,还是因为热的太厉害呢?假若是热得太厉害,中毉生可以用寒药拯救么?假若是凉得太厉害,中毉生又可以用热药去拯救么?千百年来,中毉生一直就是这样糊里糊涂地跟我们“讲药性”,一天到晚拿着我们的生命开玩笑,叫我们如何尊重他们呢?

    非常巧合,我在拿到2013年5月25日的《科技日报》的同时,也拿到了当天的《潇湘晨报》。后者在B1版刊出了一则《九芝堂前高管讨薪》的消息。

    它的原委是这样的:

    2013年4月17日,九芝堂召开了第五届董事会,接受了总经理程继忠的辞职。原因是,程继忠在2012年的任内,企业的净利润持续下滑。一季度下滑11.87%,半年报再下滑13.8%,三季度再下滑19.47%。四季度更邪门,继续下滑了73.12%。在程继忠的任上,程总聘用了两名高管负责销售。一个是王道东,负责非处方药销售;另一个是李玉荣,担任媒介总监。中药通过媒介吹牛扩大销售,这是大家都看见了的秘密。大约为了调动这两位高管的积极性,程继忠曾经口头答应他们分别60万元和50万元的年薪。其中的一半,以工资形式发放;另一半则以绩效奖励形式发放。由于绩效下滑,他们都没有拿全程继忠口头许诺过的年薪。于是,就有了“九芝堂前高管讨薪”的那一幕。

    我已经多次讲过,医药消费属于“不得已消费”。医药消费高,并不意味着生活质量高。这与其它生活消费是不同的。我在课堂上谈到医药经济的时候,也多次讲述过一个基本的医药经济学原理:药市场依赖于医市场,医市场依赖于病市场。

    遗憾的是,我国的病市场、医市场和药市场往往是相互背离的。在国际上,普通感冒是最小的医药市场。在我们中国,它竟然是最大的医药市场。在我看来,中药市场是最不正常的药市场。这个市场仿佛与疾病市场毫无关系。它似乎既不依赖于病,也不依赖于医。说白了,它只依赖于人民群众的愚昧程度和药品销售回扣不正之风的严重程度。民众越愚昧,药品回扣不正之风越严重,中药市场就越景气;民众越开化,中央打击药品销售回扣不正之风的力度越大,中药市场就越衰落。如果全中国有十分之一的人能够从被中毉药的愚弄中觉醒起来,恐怕所有中毉药企业都得关门。这就是那些中毉药高官和高管对我恨之入骨、咬牙蹙额的原因所在。所以,在我看来,2012年九芝堂的销售业绩下滑,实在怪不得程继忠,更怪不得王道东和李玉荣,而实在是社会的进步和人民群众的觉醒所使然。

    九芝堂是以陆九芝的名字命名的中成药生产企业。它与北京的某仁堂是同辈兄弟。某仁堂被爆出“朱砂门”,势必要影响到九芝堂。反之亦然。在我看来,九芝堂的高管讨薪在某仁堂上演不是不可能的。所以,为中毉药企业高官和高管几十万元的年薪计,才有中国中毉药协会的高官和高管粉墨登场,公开散布垃圾科学的闹剧出台。说得更展开一点,中毉药界的高管或高官频繁地活跃于政界、新闻界、科技界、教育界,都是为了那“每一个毛孔都滴着血和肮脏的东西”的年薪。所谓“优秀的中毉药文化”和“民族利益”,无非他们经常使用的遮羞布而已。

    朱砂是有毒的。垃圾科学是害人的。我们的生命是我们的。为避免受朱砂一类的毒药的危害,我们没有任何义务盲从于中毉药界的宣传伎俩,把我们的生命去做“弘扬中毉药文化”的牺牲品。

  • 马亮:政策实验室、政府创新与新文科建设

    文章略有节编。

    新文科实验室建设对于服务公共管理的教学、研究和实践,实现官产学研用的打通和融合意义重大。在公共管理实践中涌现出的政策实验室,与新文科实验室在很多方面有异曲同工之妙,二者有进一步融合的潜在空间。本文将从比较政策创新中政策与政策试点的差异切入,进而回顾政策实验室兴起和发展的历程,探讨新文科实验室建设对于政策实验和政府创新的价值,并提出进一步建设新文科实验室的对策建议。

    一、政府创新中的政策实验与政策试点

    (一)实验何以重要

    实验(experiment)往往被视为现代科学研究的黄金标准,不少人认为它是进行因果推断的不二法门。比如,2019年和2021年两届诺贝尔经济学奖都颁发给了在实地实验和因果推断方面作出贡献的经济学家。在实验中,随机对照实验(RCTs)被认为是进行政策评估的金科玉律,因为它满足了一些基本条件,使因果推断更为可靠。实验背后的思想反映在“反事实”逻辑,即我们可以通过实验来回答“假如……怎样”的问题。

    实验的主要特征包括随机化(randomization)、干预或处理(treatments)、实验环境的可控性(controllability)、“双盲”(double blind)、实验组与对照组的匹配与比较。为什么要这样设计?原因就在于要确保实验组与对照组的可比性,确保实验效应的一致性,排除环境和条件因素的干扰,避免实验人员的偏见,并确保结果归因的唯一性。

    首先,在控制组(即反事实)与实验组的分组方面要确保随机分组,即每个政策干预对象都有同等机会被选入实验或作为对照。其次,对实验组的各个政策干预对象都施加同样的干预或采取同样的处理,这样可以更好地比较其影响。再次,为了避免其他因素干扰,通常在完全受控的实验室中进行实验,而这在现实政策实验中往往难以达到,但是人们要尽可能地模拟实验室环境。复次,为了避免心理作用等带来的影响,需要确保实验对象之间独立且互不干扰,避免信息沟通带来的实验对象污染。最后,通过将实验组和对照组的前测与后测进行比较,就可以获得政策干预带来的净效应。

    不少发达国家开展的政策实验(policy experiment),就是基于行为科学与循证决策思想,通过随机对照实验和因果推断来进行政策实验,为政策决策提供有效支撑。在政策评估中,实验设计或准实验设计有助于排除其他干扰因素,评估某项政策的净效应。同时,循证实践的崛起,推动政府决策更加科学,特别是基于科学证据进行决策更加依赖实地实验提供的严谨证据。比如,行为公共管理学的发展使越来越多的公共管理研究使用实验设计,包括实验室实验、调查实验、实地实验等。

    (二)政策试点与政策实验的差异

    在政策创新中,一直存在政策试点、示范与政策实验的路径选择问题。实验主义更多强调大胆探索,而不限于科学严谨的实验法,包括实验性治理(experimentalist governance)、分级制实验(experimentation under hierarchy)、引导创变(directed improvisation)等概念。从中国改革开放来看,国家治理的优秀经验集中表现在公共政策的试点、试验与示范。

    政策试点(policy pilot)是对某项政策方案进行测试和评估,确定其是否可行有效和可以推广。政策试点既可以是试对(预设政策方案是合适的,通过试点来为其提供证据支持),也可以是试错(政策方案的可行性是不确定的,通过排除错误方案来确保正确方案脱颖而出),但其预期结果应是开放的。政策示范(policy model/demonstration)是通过树典型、立榜样、设标杆和遴选最佳实践,通过示范项目来引领其他地区和部门。一些地区被列为“先行先试区”,更多是带有探索色彩的试验;一些地区则被冠以“先行示范区”,意味着其在引领改革方面扮演重要角色。

    政策试点、示范和实验之间有何不同?不少学者对此进行了探讨,明确了试点、示范与实验的异同。从方法论角度分析,差别在于分组与抽样、干预、环境三个方面,这使政策试点和示范都是准实验,缺失严格实验的某些关键要素,进而导致对其效果进行评估时面临的方法论挑战(如表1所示)。

    1.分组与抽样

    实验设计需要设立实验组和控制组,一般采取随机分组方式,确保二者是可比的,避免选择偏差带来的实验结果偏误。实验分组需要遵循随机抽样原则,确保分组的随机性,但在代表性和典型性方面未必合理。相对来说,政策试点强调能够代表各类地区,使试点在未来推广时有更强的普适性。为了使政策试点达到预期效果,往往会采取试点地区差异最大化的原则,有目的地选择基础条件不同的若干个地方或部门,而地域多样性可以尽可能保证试点在环境不同的地区都能够同步检验。

    实验是实验人员确定谁进入何组,“被试”是被动接受实验、处理或干预的。但在选择试点单位方面,则是双向选择而非单方面分组。试点单位的选择往往是多轮次开展而非一次性进行的,一般先是上级政府部门公开征集,然后是地方政府部门自愿申报,经过上级评审遴选后公示试点单位名单。这样一来,谁参与试点都是公开的,也就难以避免试点地区之间的相互影响,以及非试点地区的学习和模仿,进而使试点效应评估面临难题。同时,选择试点的地区数量多少、是竞争性还是非竞争性的遴选、是限额还是不限额等,都存在较大差异,也使政策试点有较强的多样性。比如,2019年中央政法委启动的“全国市域社会治理现代化试点合格城市”就属于不限额且非竞争的,只要达标即可入围。

    政策示范是由点到面或以点带面地树立典型,选取的都是成功的榜样,因此没有代表性,也无随机性可言。从中央政府部门设立各种示范城市项目的初衷来看,也有较大差异,使政策示范同实验的差异各不相同。值得注意的是,在政策实验中面临的问题是,试点地区为了得到好评,导致试点很少不成功的,示范没有不成功的。在预定结果确定的情况下,地方政府可能为了实现试点成功而使试点偏离初衷,导致试点结果不可信。

    2.干预

    在实验中,实验人员对同组被试做出相同的干预(实验、操控或处理),确保实验效应是可以精准度量和比较的。但在政策试点中,政策干预的主体和方式都有所不同,因此同实验相去甚远。试点单位不是被动接受政策干预,而是主动开展有创造性的政策创设和检验。试点地区可能同时开展多种不同政策试点,有时候是加挂多块政策试点牌子。试点一个政策选项还是同时试点多个政策选项,显然会影响每项政策试点的效果。

    试点的目的是解决不知道怎么做才对的问题,带有探索性和不确定性,所以需要“摸着石头过河”。试点单位不是作为“被试”被动接受实验,而是要强调自主性和主动性。试点单位作为实验的策划者、组织者和参与者,要积极主动地争取、组织和开展实验,而这是有别于实验逻辑的。

    试点的核心是为了试验尽可能多的不同政策方案,而不是验证一种政策方案。所以,各地都是“八仙过海各显神通”,试点的方式各不相同,并尽可能突出地方的鲜明特色。实验的干预方式是预先设定的,而试点的政策方案是即兴发挥的,各地在试点过程中也会不断调整和改变。从政策试点来看,政策目标是清晰的,但政策手段是模糊的,无论是中央还是地方都不清楚,所以政策方案往往是在试点启动后才逐渐清晰和确定的。这往往是中央指导,地方创设,互学互鉴,地方成功后即全国复制推广。比如,2013年在上海试点自由贸易试验区,很快在全国其他地区设立,进而推动一大批制度改革成果复制推广。

    3.环境

    实验要求在同样的环境条件下进行,这样可以确保实验组和控制组的差异归因为实验本身。但是,各地在资源禀赋、领导重视程度、试点单位的自主性和能力等方面都有很大不同,导致政策试点效果的归因困难,一些地区甚至出现刻意栽培的“政策盆景”。因此,试点成败不仅同试点本身有关,也同试点环境有关。比如,上海浦东试点法院员额制改革,其本身效果较好,但在全国推广时却在不少地区出现“水土不服”情况。

    实验要求“双盲”,即实验人员不知道谁进入实验组,被试也不知道自己是否属于实验组。这有助于排除传染效应,而设立安慰剂组则可以进一步排除人们接受干预的心理作用。政策试点往往是地方竞争、中央遴选和完全公开,所有地区和部门都知道谁是试点单位,试点单位相互之间也心知肚明。试点单位之间相互竞争,都希望成为中央中意的政策方案,所以往往暗中较劲,甚至“只做不说”。

    同时,政策试点可能会在多层级政府中同时或序贯进行,导致政策试点的嵌套性较强。比如,政策试点不仅有国家级试点,还有省级试点和市级试点,而这种地方试点往往是在未入选国家级试点的地区进行。地方政府也会率先培育政策试点,并为申报国家级试点打好基础,由此导致试点先行带来的评估难题。

    (三)政策试点还是政策实验?

    政策创制可以使用试点、示范或实验等模式,因此,需要关注不同政策创制方式有何异同,以及为什么不同国家采取不同路径。政策创制究竟采取政策试点或示范,还是推行政策实验,取决于政治体制与行政文化,也同决策者的观念和意识有关。当然,也不应刻意将试点与实验视为二元对立的模式,它们可能从属于一个更大的连续统一体或共同谱系,只不过采取了不同的逻辑与评判标准。基于方法论的比较显示,实验可谓演绎法指导的定量研究,而试点是归纳法指导的定性研究。实验是对一个理论假设进行检验,而试点是对多个理论假设进行比较。实验生产普遍知识,而试点生产实用知识。

    西方政治体制中,不同政策方案的选择与论证需要强有力的支持,才能避免被政党利益和意识形态干扰。比如,吸烟有害健康在今天来看是确凿无疑的,人类活动毫无疑问是造成气候变化的重要原因,但要想让人接受这些观点,却需要大量坚实的科学证据基础。因此,通过政策实验获得选民支持,对于西方民主体制尤为重要。当然,严格意义上的真实验往往可遇不可求,在很多情况下也无法在实际情形中推行。相对来说,大量政策实验都是准实验(quasi-experiment),在某些方面不满足基本条件,但是可以通过一些补救措施来对实验效应进行评估。

    近年来,我国特别强调“顶层设计”,但政策试点依然有很强的生命力。如果是需要突破既有法律法规框架的创新,就需要中央授权;如果是在既定制度框架内的创新,则可以由地方自主开展。政策试点的多样性较强,不能一概而论或“一刀切”。因此,有必要建立政策试点的分类体系,并解释中央政府部门为何设立各种试点和示范项目。比如,在为什么试点方面,试点的目标不同;在谁来试点方面,发起试点的部门不同;在谁被试点方面,选取的点不同;在试点什么方面,试的方案不同;在哪里试点方面,试点的环境不同;在何时试点方面,试点的轮次和批次不同;在如何试点方面,试点的含金量不同。

    政策过程复杂而微妙,涉及许多行动者、资源和解决方案,很难通过简单的实验加以评估。政策试点的重点不在于政策试点项目本身成功与否,或者说为了产生规模效应,而是要通过政策试点结果来了解不同行动者对政策的反应。对于中国这样的大国而言,政策创制与推广之间存在很强的张力。为了避免某项政策出台后带来的大规模失败风险,往往需要采取分级制的政策试点。政策试点推广的不同阶段也遵循不同逻辑,导致试点的推广带来不同的影响。比如,中国公立医院改革试点从2010年开始分四批进行,遴选了103个试点城市。通过和106个非试点城市比较,可以识别五种不同试点组态:再接再厉型的挑战“破冰者”、取长补短型的短板“主攻手”、一马当先型的创新“排头兵”、背水一战型的逆势“突围者”、以身作则型的区域“领头羊”。

    上述讨论表明,政策试点和标准的实验设计有很多区别,而之所以不少国家和地区的政府选择政策试点而不是开展政策实验,同许多现实考虑有关。但是,鉴于实验设计在科学研究与政策创制方面的双重价值,有必要探讨如何使其融合式发展,从而既产生科学知识也提供政策知识。在某种意义上来说,政策实验室的诞生与发展部分回应了这一诉求。

    二、政策实验室的兴起与发展

    (一)政策实验室因何而起

    近年来,越来越多的国家和城市建设政策实验室(policy lab)、城市实验室、政策创新实验室或公共部门创新实验室,对政策创新进行设计、实验、评估和推广。政策实验室日益兴起,既同行为科学(behavioral science)的崛起有关,也同循证决策(evidencebased policymaking)的流行离不开。同时,政策实验室也推动智库转型,使其从思想库走向证据库,除了促进思想创新和传播,也在开发和积累科学证据。当然,以ChatGPT为代表的新一代生成式人工智能(AIGC)技术也有利于推动循证治理,通过科学证据的加速扩散和利用来提升政府决策科学化。

    政策实验室是一种对政策进行实验的组织,主要开展政策实验与政策评估,帮助决策者回答“什么奏效”(what works)和为什么。一方面,政策实验室突出创新思维,聚焦存在突出问题与严峻挑战的政策领域,通过头脑风暴、集思广益来征集政策创意和创造性解决方案。另一方面,政策实验室遵循行为科学,采用严谨的实验设计与政策评估方法,对已有政策和政策方案进行检验、评估和完善,为政府部门修改、使用和推广政策提供专家咨询与决策辅助。

    政策实验室的先锋当属英国的行为洞察力团队(Behavioral Insights Team, BIT)。BIT在2010年由英国政府成立,后发展成为社会企业,2021年完全并入创新慈善机构Nesta。BIT最早为英国首相办公室和内阁部门服务,总结了让政策奏效的四大法宝,即EAST法则——通过让公共政策简单易懂(easy)、富有吸引力(attractive)、与己相关(social)和及时速效(timely),为很多政府部门提供了政策决策支持。此外,BIT还总结了MINDSPACE法则,提出一套通过政策干预来改变人们行为的组合拳,包括信使(Messenger)、激励(Incentives)、规范(Norms)、默认(Defaults)、显著(Salience)、启动(Priming)、情感(Affect)、承诺(Commitments)、自我(Ego),每个方面都可以形成富有价值的政策工具。BIT推动的政策创新典型案例是电子烟政策。不少国家将电子烟视为百害而无一利的“毒品”或需要严加监管的新型烟草产品,英国政府却率先将电子烟作为禁烟办法,消费者可以在药店通过医疗保险来购买电子烟。虽然电子烟可能诱导未成年人吸烟,但在前期政策实验的基础上,英国政府审慎地推动了这项政策,使人口吸烟率明显下降。

    从政策实验室的崛起与扩散来看,全球迄今已有上百个政策实验室,其中大约有65个在欧盟成员国。就澳大利亚和新西兰的调查来看,也有52家之多。政策实验室是政策创新的未来吗?从政策扩散的驱动机制来看,府际竞争与学习造就的“民主实验室”孵化政策创意并检验政策效果。政策实验室将这些政策扩散机制制度化,使其能够更加有效地推动政策创新的实验和推广,毫无疑问有利于增强政府部门的创新能力。

    (二)政策实验室的不同模式

    虽然各国政策实验室有不同名称,如创新实验室、政策创新实验室、政策实验室等,但都可以归类概括为几种主要模式。有学者从研究方法和路径的角度划分为设计实验室、开放政府数据实验室、循证实验室、混合实验室。本文从实验方法(行为科学、设计思维)以及与政府的关系(部门民营化、政府主导、政府完全控制)两个方面,对政策实验室进行分类。政策实验室的核心使命是影响政府决策,其与政府的关系会影响政策方案是否得到政府认可、重视和推广。但是,如果政策实验室缺乏独立性,也会影响其对政府的影响力。政策实验室既可以注重设计思维,聚焦政策如何设计;也可以关注行为科学,注重如何对政策进行实验与推广。

    对丹麦心智实验室(Danish MindLab)、英国行为洞察力团队(BIT)、新加坡人类体验实验室(THE Lab)这三个政策实验室的比较案例研究显示,政策实验室在实验方法(行为科学、设计思维)及其与政府的关系(部门民营化、政府主导、政府完全控制)方面有明显不同,可以采取不同模式与策略(如表2所示)。

    丹麦心智实验室是政府主导,采取设计思维对政策进行创新。设计思维在建筑设计、艺术设计等领域应用广泛,意味着要从设计的角度来重新审视政策过程,并通过设计来不断优化政策。BIT一开始是政府完全控制,但是随着迅速的发展,独立成为咨询公司,主要采取行为科学指导下的政策实验模式。新加坡人类体验实验室是完全由政府控制的机构,基于设计思维来对政策进行实验。

    政策实验室的核心职能是知识转移,而知识转移能否成功则取决于许多因素,比如政策实验室同政府部门的相互信任与合作动机,政策实验室的开放性、专家资源与制度建设等。从这些政策实验室的运作来看,它们采取了不同模式与策略。这使政策实验室在如何发现和发明政策以及如何验证和推广政策方面有显著差异,进而可能影响政策实验室的运营和绩效。

    值得注意的是,政策实验多种多样,政策实验室的运作模式也各不相同。在设计与管理政策实验室方面,需要结合各地实际情况与资源禀赋,采取符合本地实际的运作模式。比如,中国不少地方政府依托政策研究室、大数据管理局、科技产业园等载体,建立类似于政策实验室的政策研发、创新与推广平台,也是顺应和合乎全球政策实验室发展趋势的。

    三、如何建设新文科实验室

    (一)新文科实验室的定位与方向

    新文科建设需要新文科实验室的支撑,而如何建设新文科实验室就成为值得关注的问题。从新文科实验室的建设定位来看,需要明确其同传统实验室和大学智库的不同,以及其在支持政策实验方面的潜力。传统实验室往往是服务单一学科的,且主要是以科学研究为服务对象,在多学科交叉融合和服务政策创新实践方面的作用较弱。智库也称思想库,更多发挥的是决策支撑,而在科学研究和知识创新方面的能力较弱。新文科实验室建设毫无疑问要有别于传统实验室和智库,并需要探索新的发展道路。新文科实验室与政策实验室服务目的不同,却有很强的融合式发展潜力。新文科实验室建设要加强同政策实验室的对话,借鉴政策实验室的创新模式,推动二者融合式发展。

    实验主义的内涵丰富,科学严谨的政策实验不是唯一路径。政策实验室有不同模式,科学实验也同样需要因地制宜。中国政策试点的多样性与特征意味着要坚持实用主义的政策实验理念,坚持“验以致用”,通过打通官产学研用来使政策实验发挥效果。

    从政策实验室与政府创新的关系来看,政策实验室可以扮演不同角色。它们既可以服务于科学研究与学术发表,也可以服务于政府决策与政策评估。一个重要问题就是,如何实现二者兼容。从具体分工来看,实验室开展原创性实验研究、证据库汇总和编译科学证据,而试验田则复制推广政策创新。这意味着要打造政策实验平台,通过科普和服务输出政策经验,推动研究证据向政策实践转化。因此,新文科实验室可以借鉴政策实验室的经验做法,更多关注如何对政策实践进行评估,如何开展政策实验,如何创造性设计新政策等。

    近年来,行为公共管理学发展迅猛,通过共同设计与合作开展实验研究,揭示公共管理行为的内在逻辑。从新文科实验室建设的需求来看,这意味着要打通试点与实验,加强政策试点与政策实验对话,使决策者和研究人员可以谋求共同语言。因此,加快新文科实验室建设,推动实验知识的翻译与转化,加快实验得到认同与推广,将有助于实现这些目标。

    同时,新文科实验室建设也需要朝着数字化与智能化方向发展。在数字时代,借助大数据分析和人工智能技术,政策实验将会加快发展。比如,互联网企业流行AB测试,可以在大规模用户中快速进行不同方案的实验。类似地,敏捷监管强调要建立“监管沙箱”,在金融科技等前沿领域进行政策实验。诸如此类的发展都意味着,要推动新文科实验室与政策实验室的融合式发展,既为发展和检验社会科学理论提供平台,也为推动政策实验和创新实践提供支撑。

    (二)研究启示与未来展望

    政策实验室和新文科实验室有深化合作乃至合二为一的潜力,因此需要关注二者如何进一步加强合作并推动实验室服务科学、研究和实践。展望未来,可以重点研究如下问题。

    首先,当前各地建设的新文科实验室都有很强的本地特色,往往同本地政府、大学和学科等有较强的联系,也在很大程度上塑造了不同实验室的独有特征。有必要对这些新文科实验室进行类型学研究,将其进行分类和比较,推动新文科实验室发展方向规划。从试点、完善到推广,新文科实验室建设也遵循我国一般政策试点的逻辑,有必要对试点实验室进行跟踪监测、考核评估和优化完善,为其他高校和机构建设新文科实验室提供启示。

    其次,新文科实验室的发展进行到了一定阶段,但是如何对其评估还缺乏较好的指标和方法。各个新文科实验室都有不同的历史条件、前期基础、学科渊源等,并非在同一起跑线上,也难以用一把尺子“一刀切”。这类似于对政策试点的评估,需要避免试点必成功的陷阱,也要避免偏颇的考核“指挥棒”导致的行为扭曲和弄虚作假。因此,既要明确少数共性和通用指标,又要允许各个新文科实验室特色化发展。

    最后,需要对其他国家哲学社会科学实验室的建设与运行进行分析和比较,探讨其对中国新文科实验室建设的启示。与此同时,也要进一步提炼我国新文科实验室的特征,凸显其在自主知识体系建构方面的作用,加强同其他国家哲学社会科学实验室的交流合作,并为其他国家提供可资借鉴的经验,比如推动跨国实验室数据开放、算法共享等。

    本文转自《中国社会科学评价》2024年第2期

  • 徐义华:甲骨学研究的基本领域及主要成就

    甲骨文发现已有120多年,经过数代学者的努力,甲骨学已成为一门成熟的国际性学科。甲骨学研究可以分为四大领域:第一个是基础性领域,主要是资料整理;第二个是研究性领域,主要包括文字考释、甲骨自身规律研究、商周史研究、商周汉语研究四个部分,另外,甲骨的考古学考察、西周甲骨、甲骨学术史、甲骨文书法艺术等领域也值得关注;第三个领域是工具书的编纂;第四个领域是近年来兴起的新领域——甲骨文研究的电子化和智能化。本文主要介绍甲骨学研究性领域和新领域。

    甲骨学研究性领域

    甲骨文的研究性领域,情况大致如下。

    一、文字考释

    文字考释是甲骨研究的第一步。1904年孙诒让写成《契文举例》,这是第一部甲骨文字考释书。此后最有影响力的学者是罗振玉和王国维,尤其是罗振玉,释读常用甲骨文字数百多个,使甲骨刻辞基本得以通读。继罗、王之后,贡献最大的是唐兰和于省吾,不仅考释甲骨文字众多,还发展了文字考释理论。

    70年代以后,以裘锡圭在文字考释方面的成就最高,其著作《古文字论集》影响很大。此后,大批文字学著作也涌现出来,如刘钊《古文字考释丛稿》、陈剑《甲骨金文考释论集》、蒋玉斌《殷墟子卜辞的整理与研究》等。经过学者努力,已有一千多个甲骨文字得到释读,甲骨刻辞已可通读。

    二、甲骨自身规律

    甲骨自身规律主要是指甲骨的整治、卜辞的刻写、刻辞的种类等知识。

    1.甲骨文例

    甲骨文例主要是指刻辞在甲骨上的格式、行款、布局等。1928年胡光炜《甲骨文例》首先对卜辞行款进行研究,此后董作宾《骨文例》、胡厚宣《卜辞同文例》、张秉权《论成套卜辞》等文发表,对卜辞的分布、行款等作了更深入的探讨。80年代以后,学者的研究更为细致,如李旼姈《甲骨文例研究》、孙亚冰《殷墟花园庄东地甲骨文例研究》等对文例进行了综合性研究。

    2.分期断代研究

    分期断代是确定卜辞时代,将其转化为史料的前提。1930年,董作宾发表《甲骨文断代研究例》,提出了世系、称谓、贞人等十项标准,并把甲骨卜辞划分为五期。从此,甲骨文的大致时代得以确定。

    董作宾的五期分法还存在一些不足,为此,有学者提出了分组分类的方法。陈梦家在《殷虚卜辞综述》中提出“组”的概念,影响巨大。后来,李学勤、彭裕商、林沄等学者撰文讨论,黄天树《殷墟王卜辞的分类与断代》和彭裕商《殷墟甲骨断代》等书出版,新的甲骨分期理论逐渐完善。

    3.甲骨整治与卜法制度

    在利用甲骨占卜之前,商人会对甲骨进行剖解、钻凿等一系列整治,这是重要的研究内容。

    罗振玉就将钻凿作为重要内容,到20世纪60年代,贝冢茂树最先用钻凿形态论证卜辞的时代,后来许进雄撰成《甲骨上钻凿形态的研究》等论著,全面地分析钻凿形态及其与分期断代的关系。80年代后,钻凿的价值被进一步认识,周忠兵《甲骨钻凿形态研究》、赵鹏《殷墟有字甲骨钻凿布局与占卜形式探研》等书出版。

    卜法很早就被学者关注,1929年,董作宾写成《商代龟卜之推测》,该文介绍了商人从取龟到契刻再到收藏的整个过程,后来宋镇豪《殷墟甲骨占卜程序的追索》、朴载福《先秦卜法研究》等也做了研究。近来,李雪山《嬗变与重构:商代甲骨占卜流程与卜法制度研究》出版,对卜法研究起到推动作用。

    4.甲骨缀合

    甲骨容易破碎,把破碎的甲骨拼合起来即缀合。1917年,王国维缀合了一版甲骨,首开先河。第一部专门的甲骨缀合著作是曾毅公于1939年出版的《甲骨叕存》,此后郭若愚、张秉权、严一萍等都出版了缀合著作。2000年以后,甲骨缀合获得更大进展,蔡哲茂《甲骨缀合集》《甲骨缀合续集》、林宏明《醉古集》、黄天树《甲骨拼合集》第一至五集等先后出版。

    缀合方法不断改进,蔡哲茂提出关注同文例,黄天树提出甲骨形态学,都极大促进了甲骨缀合工作的进展。

    三、甲骨文与殷商史

    甲骨文研究的一个重要方面是认识商周史。最早利用甲骨文解读历史的是王国维,他于1917年发表的《殷卜辞中所见商先公先王考》及《续考》,标志着甲骨文研究进入“史料时期”。

    此后,甲骨文的史料价值被发掘,胡厚宣《甲骨文商史论丛》、丁山《商周史料考证》、李学勤《殷代地理简论》等著作都产生了很大影响。20世纪80年代以后,利用甲骨文研究商史的著作大量涌现,彭邦炯《商史探微》、朱凤瀚《商周家庭形态研究》、杨升南《商代经济史》、宋镇豪《夏商社会生活史》、刘源《商周祭祖礼研究》等著作问世。最值得关注的是宋镇豪主编的《商代史》,该书从政治、军事、经济等多个方面对商代历史做了综合研究。

    四、甲骨文与汉语史

    甲骨文用于商代汉语研究,主要包括两个方面,一方面是语法研究,另一方面是语音研究。

    1953年管夑初出版的《殷虚甲骨刻辞的语法研究》,是第一部甲骨文语法著作。西方学者探讨甲骨文语法较多,如吉德伟、倪德卫、高岛谦一等都发表了论文。20世纪80年代以后,甲骨文语法研究蓬勃发展,沈培、张玉金、喻遂生、杨逢彬等都出版了相关著作。用甲骨文研究语音出现较晚,1984年赵诚发表《商代音系探索》,初步拟出商代音系。此后,陈振寰、管夑初、郭锡良、邹晓丽等也都做了这方面的研究,取得了一定成果。

    除了上述几个方面,甲骨学的考古学考察、西周甲骨、通论性和学术史研究、甲骨文艺术化转化等方面也是值得关注的。

    甲骨文研究的信息化

    甲骨文研究的电子化表现在三个方面:一是成果发布的网络化,二是甲骨资料库的建立,三是甲骨文研究的智能化。

    许多甲骨文研究成果选择快捷的网络发布,例如,中国社会科学院先秦史网站成为甲骨缀合成果集中发布的网站。

    甲骨文资料库不断建立,多地学者都参与其中。在港台地区,香港中文大学建立了“汉达文库”,成功大学建立了“甲骨文全文影像数据库”,“中研院”史语所建立了“甲骨文数位典藏数据库”等,都提供甲骨图片和释文,并有检索服务。在大陆地区,有中国国家图书馆的“甲骨世界数据库”、中华书局的“殷墟甲骨文数据库”、安阳师范学院的“殷契文渊”,其中“殷契文渊”不仅将甲骨原始资料上网,而且建立了字形库和文献库。

    早在20世纪60年代,美国学者周鸿祥即试验计算机缀合甲骨,并取得一定成果。随着人工智能的发展,计算机在甲骨缀合、文字识别和校对重片方面取得很大进展,有学者开始尝试用计算机考释文字。莫伯峰《人工智能在古文字研究中的应用及展望》、李春桃《基于人工智能技术的古文字研究》等成果,揭示了新技术在甲骨文研究中的前景。

    本文转自中国社会科学报

  • 何立波:迦太基的兴衰及其多元文明特征

    迦太基是腓尼基城邦的推罗人在非洲建立的一个殖民地,后成为古代地中海世界一个著名的商业民族。迦太基据说是推罗人在柏萨(Byrsa)向非洲土著借来的一块“牛皮之地”,经过不断扩张拓展,成为和希腊并列的西地中海两殖民帝国,也是与罗马并列的西地中海世界的强国。迦太基在商业领域和希腊人有激烈竞争,也和新兴的罗马在西西里出现了冲突,与希腊人和罗马人均兵戎相见。迦太基在公元前146年被罗马所灭,作为一个国家不复存在。但是迦太基文明的混合型特征,以及带来的文明交流和传播的意义却不容忽视。对于迦太基的兴衰及其文明特征,学界已经取得了一些成果,但还有很大的提升空间。①

    一、迦太基海上商业帝国的建立与早期西地中海世界的文明交流

    从民族上说,迦太基人属于古代地中海东岸腓尼基人的一支。迦太基人代表了古代东方民族殖民、商业和航海的高峰,在某种意义上进行了古代国家“重商主义”文明的最早探索。

    (一)迦太基的建立和对外贸易扩张

    一般认为,腓尼基人中的推罗人于公元前814年在非洲建立了迦太基。“迦太基”在腓尼基语中为“”,在希腊语中为“Karchēdon”,在拉丁语中为“Carthago”,意思是“新城”。②

    关于迦太基的建城史,一直有“一张牛皮”的传说。古希腊史学家提迈欧(Timaeus,约前352-前256)最早提出迦太基由推罗妇女狄多(Dido,亦称Elissa)在第一次奥林匹克运动会前的第38年(前814年)所建之说。③罗马帝国史学家阿庇安(Appian,约95-165)提供了更详细的记载,称狄多带领族人来到迦太基所在地,求土著酋长赐一块牛皮之地。在征得土著酋长的同意后,狄多将一张牛皮裁剪成条,圈出一块城镇大小的地皮,即后来迦太基的卫城柏萨。④“柏萨”的腓尼基语为“Bozra”,意思是围城、堡垒;而在希腊语中为“Byrsa”,意思是“藏牛之处”。⑤柏萨位于迦太基城的中央,是一个险峻之地。阿庇安在叙述公元前146年迦太基毁灭时,提到迦太基已繁荣了700年⑥,显然指的是她建立于公元前9世纪晚期的说法。到公元前l世纪末希腊地理学家斯特拉波(Strabo,前64-公元23)到访非洲时,迦太基城仍有方圆360斯塔德(stades)的规模。⑦

    古典学家迈尔斯(Richard Miles)认为,目前迦太基发现的最早的考古文化层的回溯,仅仅能追溯到公元前760年左右。⑧考古学家辛塔斯(Cintas)试图通过对迦太基遗址出土的陶器进行分析,建立迦太基早期历史的时间表,但是罗马人在毁灭迦太基的时候造成较大损毁。1922年,辛塔斯终于在13个陶罐中确认有9个属于希腊陶罐,认定迦太基建城不太可能早于公元前725年。⑨推罗人建立迦太基可能是希腊罗马人的一种想象,斯特拉波、阿庇安等都持这种观点。而实际上,迦太基的早期居民还包括腓尼基各城邦及众多非洲居民。关于早期迦太基,目前发现的最重要的古典档案是“闪米特铭文集”(Corpus inscriptionum semiticarum)第1卷第5684—5号,一般被认为是出现在公元前7世纪,仍保留着明显的推罗语言的特点。⑩

    目前看来,腓尼基人在西地中海的扩张,不会早于公元前750年。(11)在公元前8世纪之前,迦太基在西地中海世界的主要活动是建立殖民地。韦尔(Benjamin W.Wells)认为,殖民地是迦太基国家的支柱,迦太基殖民体系是古典时代的殖民典范。(12)青铜时代晚期的殖民地有两种模式:一种是腓尼基的贸易殖民,专注于奢侈品贸易和经济利益,在地中海西部沿海建立了稳定的定居点;另一种模式是希腊人的殖民运动,在海外建立城邦,带有政治目的。(13)早期迦太基人的贸易和希腊人有所不同,他们更多是以贸易者的身份在海外殖民和经商。他们在迦太基以外的定居地与其说是城市,不如说是“贸易港口”更为合适。

    迦太基从希腊世界和东方输入油、酒等食物及纺织品、陶器、青铜器等手工业品,满足本国居民和殖民地居民需要。目前考古材料证实的迦太基从国外进口商品,最早可以追溯到公元前650年。(14)西西里岛的阿克拉加斯城(Acragas)、意大利的坎佩尼亚,以及爱琴海的罗德斯岛,都是迦太基酒类来源地。油是从阿克拉加斯城运来的。迦太基从希腊大陆、意大利的坎佩尼亚、西西里得到了青铜器物、宝石制品和花瓶。考古学家在一座献给迦太基女神塔妮特(Tanit)的神庙中发现了东方化时代(前720-前580)的6个科林斯式希腊陶罐,时间在公元前740年到前710年间。(15)在迦太基人墓地还发现了从塞浦路斯运来的陶俑和青铜水瓶。埃及也提供了受迦太基人欢迎的装饰品。法国史学家杜丹(Jules Toutain)认为,迦太基的非洲领土及其殖民帝国能出口的商品有限,只有奴隶、矿砂和金属,尤其是西班牙南部的铅和银。(16)迦太基沉船的船骸提供了重要的物质信息,如铜锭、锡锭、玻璃、金银首饰、彩陶、酒和油等。(17)作为一个以商业贸易闻名的古典民族,迦太基人缺乏具有自己民族特色的手工业品,经常仿制和改造从希腊和埃及进口的商品,如他们仿制的希腊陶器就几乎达到了以假乱真的地步。

    (二)迦太基地中海贸易航线的建立和大西洋航海的探索

    迦太基在地中海参与和拓展的贸易航线,大体可分为东西航线和南北航线。东西航线亦称“黎凡特—西班牙线路”,是从希腊、小亚、西亚到西班牙、直布罗陀海峡的航线。伊比利亚半岛和萨丁尼亚岛,是地中海世界主要的铜、锡、银的产地。推罗人首先开辟了从西班牙南部的加底斯(Gades)到推罗的“金属航线”。西西里的希腊史学家狄奥多鲁斯(Diodorus of Sicily,前80-前21)称,迦太基海疆西达直布罗陀的“赫拉克勒斯(Heracles)之柱”、加底斯和大西洋,而加底斯位于有人居住的世界的最远边界。(18)有利的地理位置、肥沃的内陆土地和更优良的港口,让迦太基人很快控制了这条航线,成为地中海最大的铜、锡、银的贸易商。(19)

    南北航线对迦太基来说尤为重要,它将迦太基与西西里、科西嘉、撒丁岛、意大利联系起来,成为第勒尼安海贸易圈的重要坐标。迦太基还位于非洲通往希腊、爱琴海地区的海上航线的通道,有利于它从海外获得粮食、原材料和手工艺品。迦太基早期的粮食依赖进口,主要来自西班牙、意大利、西西里、希腊、黎凡特等地。在迦太基早期居民的陪葬品中,发现了大量希腊陶器(包括爱奥尼亚式和科林斯式陶器)。公元前6世纪初,银价暴跌,金属航线随之衰落,南北航线变得愈加重要起来。迦太基位于东西航线和南北航线的交会点,很快发展成为地中海世界贸易中心。在迦太基国家收入的来源中,关税占主要地位。(20)迦太基在希腊化时期成为地中海世界的一座国际都市,吸引了大量移民。

    迦太基人的贸易具有居间商的性质,航线广泛分布在欧洲、非洲的大西洋沿岸和西地中海地区,形成一个地中海世界贸易网络,迦太基货币也成为地中海西部地区的硬通货币。为保护贸易航线和商业利益,迦太基建立起一支强大的海军舰队,使用先进的三桨座战舰。在内陆殖民地农业的支持、海上贸易的商业支撑和海军舰队的保护下,迦太基建立了一个商业殖民帝国。杜丹强调,迦太基商业的真正范围是海洋,尤其是西部。他们从撒哈拉、苏丹获得了黑奴、象牙、兽皮、黄金、鸵鸟,从高卢获得锡、铅,其买卖都是在沿海或交通要道进行的。杜丹指出,迦太基人在地中海世界贸易中所起的作用,很像17世纪的荷兰人。(21)

    迦太基能够发展成为一个海上商业殖民帝国,是和他们高超的造船技术和航海水平分不开的。古罗马学者老普林尼(Pliny the Elder,23—79)告诉我们,腓尼基人是一个擅长航海、通晓天文并发明字母文字的民族。(22)早在公元前3000年左右,腓尼基人城市比布鲁斯(Byblos)就已制造出拥有弧形船体、能够经受大海考验的船只。古希腊史学家波里比乌斯(Polybius,前204-前122)在谈到迦太基和罗马军队的对比时指出,航海技术长期以来是迦太基人的一种特殊技能,他们比其他任何民族都熟悉大海。(23)迦太基商队以拥有大船而著名,用帆航行。迦太基水手不仅能够沿岸航行,而且能够进行深海航行,在航行中依靠太阳和星辰的位置、熟悉的海岸地形和地貌来辨别航行的方向。

    古希腊罗马作家提到了迦太基人在公元前5世纪的两次大西洋航行,比希腊航海家皮西亚斯(Pytheas of Massalia,前320-前285)在公元前4世纪晚期的首次大西洋航行早了1个世纪。公元4世纪的罗马学者阿维努斯(Avienus)提到,迦太基将军希米尔科(Himilco,前460-前410)船队向北沿着伊比利亚半岛和高卢的西海岸航行数月,在西欧寻找矿石,可能到达了不列颠。(24)老普林尼记录了迦太基将军汉诺(Hanno)的《航行记》,称汉诺船队沿非洲西海岸进行了探险。(25)汉诺船队大致在公元前520年进行远航(26),远达摩洛哥、毛里塔尼亚、冈比亚等地,并在摩洛哥建立了殖民地。据说最远达几内亚湾,接近了赤道。大西洋航行探索将迦太基人的贸易圈扩展到了大西洋海域。公元前460年,迦太基人开始将摩洛哥的咸鱼运往希腊的科林斯(Corinth)。位于摩洛哥大西洋沿岸的殖民地的建立,与迦太基殖民活动的扩展路径是一致的。迦太基人拥有当时最好的船长和水手,他们能够安全地航行在地中海西部海域和大西洋沿岸。(27)

    二、迦太基人与希腊人、罗马人在海上商业贸易中的冲突和结果

    在迦太基人所开展的对外殖民和贸易活动中,他们和古希腊人、罗马人发生了碰撞和交流。迦太基人在对希腊人、罗马人的战争中失败,奠定了古代西方叙述中迦太基人的失败者形象。

    (一)迦太基和希腊人在西西里的争夺和较量

    在迦太基的对外殖民活动中,西西里是联系地中海东西航道和南北航道的重要交通要冲,地位举足轻重。希腊史学家修昔底德(Thucydides,前460-前400/396)说腓尼基人在希腊之前就来到西西里殖民,居住在沿海,占据岬角和沿海的岛屿。(28)但实际情况可能并非如此。希腊人在东方化时代开始大殖民,在西西里和意大利南部(希腊人称为“大希腊”地区)建立殖民地。希腊人在南意大利的库玛(Cumae)建立殖民地是公元前750年,到达西西里是在公元前8世纪。(29)腓尼基人在西西里建立殖民地不会早于公元前8世纪。(30)在地中海西部,迦太基充当了腓尼基人保护者的角色。迦太基人在西西里西部、科西嘉和撒丁岛建立了很多殖民地,在西西里建立的殖民地有摩提亚(Motya)、帕诺慕斯(Panormus)、索罗伊斯(Soloeis)等。在科西嘉,为对抗共同的敌人希腊人,迦太基人和意大利北部的伊特鲁里亚人(Etruscans)建立了同盟关系。

    狄奥多鲁斯认为,财富是引起人类竞争的主要因素。(31)迦太基人征服一个地方后,通常会征收一大笔贡金。(32)迦太基人坚持要控制通向西班牙的航线,与公元前5世纪的雅典坚持垄断通往黑海的航线颇为相似,二者争夺的对象分别是金属和谷物。迦太基人和希腊人的西西里战争虽有商业和贸易的动机,但无法与17世纪欧洲的商业战争相提并论。

    西西里是迦太基通向西班牙的交通要冲和中转站。从公元前580年开始,腓尼基人和希腊人在西西里西部发生了冲突。希腊人试图进入西西里最西端的利利俾(Lilybaeum),遭到迦太基人的驱赶。在公元前535年的阿拉里亚(Alalia)战役中,迦太基人和伊特鲁里亚人联合起来击败了希腊人,将第勒尼安海毗邻的海域变为了迦太基的水域。从公元前5世纪开始,迦太基在西西里的主要竞争者是希腊城邦阿克拉伽斯和叙拉古。公元前580年开始的一个多世纪里,为与希腊人争夺西西里岛,迦太基、希腊之间发生了三次战争(布匿—希腊战争)。迦太基在公元前5世纪才有20多万人口,无法建立像其他国家那种公民兵体制,只能是从被征服者和商人中征募雇佣兵,由迦太基人充任将军。(33)也有人认为,精明的迦太基人不肯亲自参加战争,宁愿花钱来雇所谓的“蛮族人”来当兵。(34)

    随着公元前525年波斯帝国征服推罗,日益强大的迦太基与母邦推罗间的联系仅剩下向腓尼基神灵献祭和坚守推罗宗教传统了。公元前4世纪初,伯罗奔尼撒战争后的希腊人停止在西西里殖民,迦太基与希腊双方殖民争霸告一段落。迦太基控制西西里岛的西部,希腊科林斯人在西西里所建立的城邦叙拉古占据了西西里岛东部。西西里的殖民者带来了外来的文化,从多利亚人的农业文化到希腊的陶器和迦太基人的布匿文字都在西西里长期存在。在西西里的外来文化中,希腊文化的影响最大。在人类历史上,最早的洲际划分的理念是由希腊人提出的。波里比乌斯指出,希腊人把“有人居住的世界”()分为欧罗巴、亚细亚和利比亚三大洲。(35)老普林尼说,非洲被希腊人称为“利比亚”,它不包括埃及。(36)斯特拉波强调利比亚疆域很小,那里只有沙漠和野兽;(37)有人居住的世界只有希腊人和野蛮人两种人。(38)狄奥多鲁斯在叙述西西里的历史时,将西西里岛上叙拉古之外的居民称为“蛮族人”,称“除了叙拉古以外的蛮族人拥有了整个西西里”。在书写希腊人和迦太基人在西西里的冲突的时候,狄奥多鲁斯时而使用“迦太基人”的称呼,时而使用“蛮族人”的说法(39),表达了希腊世界对于迦太基人的看法。

    西西里的外来文化包括布匿文化、希腊文化等,二者分别分布在西部和东部。在迦太基占主导的西西里西部,土著的伊利米人(Elymians)和外来的迦太基人都没有建立起政治组织,更多地表现为一种文化上的传播和交流。在公元前6世纪的西西里西部,迦太基人更注重加强对其他腓尼基人城市的控制,他们在两个世纪后彻底控制腓尼基城邦的货币发行权。到公元前6世纪末,迦太基加强了对西西里西部的统治,在岛屿南部的势力也有了明显提升。迦太基宗教在西西里和撒丁岛的传播有了明显的加强,如献祭和葬仪及托非特(tophet)祭坛。(40)

    与迦太基人相比,希腊人在西西里政治上的存在感更强,建立了叙拉古等城邦国家。叙拉古实行僭主政治,以文化发达而著称。狄奥多鲁斯认为,在公元前5世纪早期的希波战争中,迦太基曾经和波斯帝国结盟,从西部和东部同时对希腊人开战,其中迦太基人负责对西西里和意大利的希腊人作战,波斯人对希腊本土作战。(41)发生在西西里的希梅拉(Himera)战役和希腊大陆的萨拉美斯(Salamis)海战,是在公元前480年的同一天发生的。希罗多德(Herodotus,约前480-前425)、亚里士多德均认为,这有可能只是时间上的巧合而已,波斯人和迦太基人并未在战前协商一致。(42)迦太基对和波斯人结盟未必有兴趣,因为母邦推罗人被迫成为波斯人的仆从国。雅典曾试图联合迦太基人,对抗与他们有矛盾的叙拉古,遭到迦太基的拒绝。在与阿克拉伽斯—叙拉古联军的希梅拉战役中,迦太基的30万军队损失了15万人。(43)迦太基在战后70年间逐渐将注意力转向非洲,还不断向内陆地区推进,试图将利比亚腓尼基化,并将自身非洲化,以巩固其非洲帝国。(44)

    在希波战争后,希腊人加强了对西西里的控制,希腊文化在西西里的影响超过了迦太基。从公元前4世纪起,迦太基人、坎佩尼亚人(Campanians)、奥斯其人(Oscans)和罗马人都活跃在西西里,使得西西里成为一个地中海世界不同文明交汇的大熔炉。公元前2世纪以来,随着罗马人在西西里统治的开始,西西里的政治日益罗马化。但新的政治秩序并未带来立竿见影的效果,希腊文化仍是西西里的强势文化。在西西里,迦太基人的布匿文化并未在第一次布匿战争结束后消失。在公元前1世纪的西西里,布匿文化的影响仍无处不在,陶罐、花瓶等物品上大量出现了布匿语和希腊语的双语铭文。(45)布匿铭文在西西里至少存在到了公元1世纪,布匿口语存在的时间更长。(46)在西部西西里,希腊人和“蛮族人”之间有密切的来往,存在着语言的融合。希腊化时期利利俾出土的铭文记载的两个人的名字,就是集希腊语、拉丁语、布匿语于一身的混合式名字。

    (二)迦太基人和罗马人商业霸权的争夺以及最终的失败

    迦太基在其统治区域内实行贸易垄断制度,达到控制商贸航线和征收关税的目的。据波里比乌斯记载,迦太基与罗马在公元前509年签订了一个条约,对罗马人及其盟邦船只航行到“菲尔角”(Fair Promontory,亦译“美丽岬角”)以西海域进行了严格限制(47),保持了他们对地中海西部的海上商业霸权。但学者卡列(M.Cary)认为,波里比乌斯所说的签订条约的时间是错误的,正确的时间应是公元前308年。(48)这个条约表明,迦太基在公元前6世纪或4世纪末已有能力对所有外国商船封锁直布罗陀海峡。没有迦太基人的同意,希腊航海家皮西亚斯从加底斯出发经过直布罗陀海峡的大西洋航行就不可能发生。(49)从马耳他(Malta)到西西里、撒丁岛、巴利阿里(Balearics)岛、西班牙的航线,是由迦太基人严密控制的。从公元前6世纪起,希腊人不能直接从西班牙南部获得锡、铜和银。公元前540年,迦太基人联合伊特鲁里亚人将希腊人赶出了科西嘉。(50)

    到公元前6世纪晚期,迦太基的领土从昔兰尼(Cyrene)延伸到了大西洋。(51)在迦太基人看来,撒丁岛和利比亚是自己的领土,而西西里只涉及他们所统治的一部分区域。波里比乌斯告诉我们,迦太基人和罗马人在公元前306年又签订了一个条约,迦太基人明确将利比亚和撒丁岛视为自己的私有财产,将推罗和乌提卡(Utica)也纳入了自己的领土范围,不许罗马人涉足。(52)迦太基人严厉打击海盗,成为古代文明早期和平贸易协约的倡导者。

    迦太基并不缺乏金银,他们从中非得到黄金,从西班牙获取白银。公元前6世纪,钱币在希腊世界已经普及。到公元前6世纪最后25年,希腊人主要居住区都已使用了金属货币。(53)迦太基人重视商业,他们教育青年的指导思想就是要证明商业在迦太基公私生活中的优越地位。迦太基人早期采取的是物物交换或使用希腊等外币。希罗多德记述了迦太基商人同利比亚土著居民“无声的贸易”的交易细节,迦太基人以商品换取土著(可能是摩洛哥土著)的黄金。(54)公元前5世纪末,为了给西西里的雇佣兵开支薪酬,迦太基人用西班牙的银锭来铸造自己的银币。

    公元前3世纪早期是迦太基商业帝国的黄金时期。希腊人海上的势力日益衰落,亚历山大帝国一分为三,罗马人忙于征服中部和北部的意大利人,迦太基人几乎垄断了整个地中海的商业贸易。在罗马人发起挑战之前,迦太基人保持了他们对海洋的控制权,而且把注意力转向通过侵略建立一个陆上帝国,采取了一种更有侵略性的帝国主义政策。随着罗马在公元前3世纪介入西西里事务,迦太基和罗马人的矛盾逐渐激化。从公元前264年争夺西西里开始,罗马和迦太基先后发生了三次战争(布匿战争)。布匿战争揭开了整个地中海世界国家间关系转型的序幕,标志着一个新时代的开始。罗马人开始从陆地向海洋拓展,对迦太基人在西地中海世界的利益构成了严重的挑战。

    罗马是一个传统的陆军强国,早期并未设置海军。第一次布匿战争爆发后,罗马迅速组建一支由公民兵组成的海军,军队士气高。迦太基海军强大但是公民兵不足,主要依靠西班牙人、柏柏尔人(Berbers)及后来的努米底亚人(Numidians)等外族人所组成的雇佣军,军队缺乏凝聚力和爱国精神。波里比乌斯清楚地看到了这一点,认为迦太基军队使用的是外国人和雇佣军,而罗马军队士兵都是本国的公民和土著人,罗马人是在为自己的国家和儿女而战。(55)公元前242年,在第一次布匿战争中战败的迦太基人退出西西里,罗马人获得了除了叙拉古之外的整个岛屿(56),西西里由此在公元前227年成为罗马的第一个行省。迦太基在第二次布匿战争中在本土战败,被迫和罗马人签订了几乎交出一切的苛刻条约。在反贵族寡头的选民的支持下,汉尼拔于公元前196年当选为迦太基的首席行政长官苏菲特(sufetes),推行民主改革,重整军队,迦太基逐渐从战败的阴影中走出,元气得以恢复。

    迦太基的复兴让罗马人产生了一种恐惧之情,贵族加图(Marcus Cato,前234-前149)提出:“迦太基必须被毁灭。”(57)罗马在公元前149年发起第三次布匿战争。阿庇安、狄奥多鲁斯等希腊罗马史学家提供了公元前146年迦太基战败被毁的画面,称罗马军队纵火烧毁迦太基城。(58)这次战争让迦太基的人口丧失了三分之一,一个世纪以后也未能得到完全的恢复。(59)罗马人为了抹去迦太基的历史记忆,几乎将迦太基图书馆的所有藏书都转赠给了他们在非洲的盟友努米底亚,企图只留下罗马人的历史书写。阿庇安告诉我们,罗马元老院派往迦太基的10名元老组成的代表团下令将迦太基夷为平地,任何人不得在迦太基居住,否则会受到诅咒。(60)与迦太基人斗争的这段波澜壮阔的经历,成为罗马神话不可或缺的一页。第二次布匿战争激发了罗马人的民族意识和爱国热情,罗马人开始首次书写罗马和迦太基的历史,建构了失败的迦太基和胜利的罗马的民族形象。

    斯特拉波告诉我们,迦太基于公元前146年的最后时刻在利比亚本土仍然拥有300座城市和70万城市人口的实力。(61)古代知识精英有夸大数字的习惯,斯特拉波这个数字可能有些夸张了。《剑桥古代史》提出,此时迦太基城约有20万—30万人。(62)韦尔认为,迦太基城在公元前146年毁灭之际的人口不会超过13万人,因为该城没有那么多人口的居住空间。(63)毁灭迦太基,让罗马商人和意大利商人填补了远距离贸易的空白。他们取代了迦太基商人,来到努米底亚从事贸易活动,获利颇丰。

    (三)古希腊罗马作家对迦太基人的负面形象建构

    在古希腊罗马作家的眼中,迦太基人是一群好斗的恶毒的东方入侵者,是暴虐的、虚伪的和贪婪的。波里比乌斯指出:在迦太基,任何能产生利益的东西都不会被视为可耻的,竞选官职可进行公开的贿赂;而罗马人会谴责不择手段赚钱的方式,竞选行贿是死罪。(64)狄奥多鲁斯对迦太基人和罗马人的文明冲突有详细的报道,他说迦太基人只要失败就会导致精神和意志的崩溃,这在别的民族是很难想象的。(65)他意味深长地说了一句话:“我认为有的民族对人类社会危害甚烈是理所当然的事情,如严霜和冰雪会摧毁刚刚生长的作物。”(66)狄奥多鲁斯的言外之意,迦太基人是一个有害的民族。他批评迦太基当局在国家危机之际提拔新将领,危机过后却又让他们身败名裂的做法。(67)但狄奥多鲁斯也客观地指出了罗马人的残酷、失信和傲慢,指出了罗马人在毁灭迦太基后的问题,如忧患意识的缺乏、行政官员的贪婪和对法律的无视、民意煽动家所造成的危险、内战延长带来的恐惧等问题(68),这成为罗马共和晚期诸多矛盾之源。

    地中海西部的希腊人和迦太基人之间,存在着一道不可逾越的鸿沟。希腊人表现出一种与生俱来的优越性和激烈的排外情绪,对希腊人与其他民族在政治、文化、宗教等领域日益增强的融合趋势表现出一种对抗的态度。古希腊作家对迦太基人的态度,直接影响了罗马的知识精英。罗马人从未将自己看作希腊人,但他们已经认识到自己在民族文化的分水岭中与希腊文化同属一个阵营。这个分水岭将文明的希腊世界与野蛮人的世界区分开了,而迦太基人显然属于野蛮人世界。罗马史学家李维在关于布匿战争的记载中,一直在将罗马人的美德与迦太基人的恶习进行对比,认为汉尼拔的背信弃义超过了任何一位迦太基人。西里乌斯·伊塔利库斯(Silius Italicus,28—103)是一位罗马元老,写下以第二次布匿战争为主题的史诗《布匿战记》(Punica),使用了“迦太基人是残忍的”这样的表述。(69)希腊罗马人对迦太基人的“他者”想象——胡言乱语的、贪婪的、不守信用的、残忍的、傲慢的、不敬神的迦太基人,成为古典史学叙述的主流。“迦太基式的信用”(fides Punica),亦成为背信弃义的同义语。(70)

    在希腊罗马人看来,迦太基的失败与他们对待盟邦的态度有直接的关系。狄奥多鲁斯指出,迦太基人“过于残酷和苛刻”(71),对待被征服地区经常采取掠夺、索取和高压的政策,因而盟邦痛恨迦太基的高压统治。(72)狄奥多鲁斯还说,迦太基军队在占领西西里阿克拉伽斯城之后,将神庙财产和城市的雕像、物品都运到迦太基,然后焚烧了寺庙,洗劫了全城。在占领另一座城市杰拉(Gela)之后,官兵都发了财。(73)狄奥多鲁斯强调,长期以来,迦太基的利比亚盟邦憎恨迦太基人的压榨和索取,他们一有机会就发起叛乱。(74)在叙述和希腊人争夺西西里挫败原因的时候,狄奥多鲁斯指出这是迦太基人不尊重推罗神灵的结果,认为迦太基人一开始对在推罗受到顶礼膜拜的赫拉克勒斯是尊重的,向其奉献年收入的十分之一。而在后来,他们疏忽了,所以神的恩泽就减少了。狄奥多鲁斯还强调了神对迦太基贵族用别人的孩子冒充自己的孩子献给神灵的愤怒之情。(75)

    古希腊作家普鲁塔克(Plutarchus,约46-120)和狄奥多鲁斯都提到迦太基人用儿童献祭,后者还说迦太基贵族曾用200位贵族儿童作为牺牲品献给神灵。(76)威尔·杜兰(Will Durant)赞同古典作家的这种说法,认为直到2世纪仍存在献给麦勒卡特(Melkart,推罗主神)的活童祭品。(77)《牛津古典辞书》认为从公元前8世纪晚期到公元前146年,童祭在迦太基一直存在,但在实践中一开始就用动物代替了儿童。(78)泽内达·A.拉戈金(Zenaide A.Ragozin)提出,是在与耶稣同时期的罗马皇帝提比略(Tiberius,37-41年在位)时期才彻底废除了迦太基的这个习俗。(79)《剑桥古代史》指出迦太基托非特祭坛的骨灰瓮中发现了儿童烧焦的骨头,还有两篇提到杀婴的铭文。(80)但奥贝特(M.Aubet)不同意这种杀婴献祭的观点,他认为考古学家对托非特祭坛中的人类骨殖和灰烬的分析表明,他们大多属于死婴和新生儿的,这明显带有自然死亡的味道(81),并非来自儿童献祭所产生的非正常死亡。撒里贝姆特(Andrea Salimbeti)提出,目前的证据无法证实迦太基存在火焚活童的祭祀。(82)本文亦认为,古希腊罗马作家的记载经常存在夸大的成分和道听途说的习惯。根据目前的史料和考古材料,我们对普鲁塔克和狄奥多鲁斯关于迦太基存在活童祭观点的可靠性还无法确认,还有待于进一步的材料和研究。

    三、迦太基的文明特征和文明贡献

    作为东方闪米特人的一支,迦太基人创建了一个庞大的海上殖民和商业帝国,将文明扩展到了地中海西部广大区域甚至是大西洋沿岸,同时带动了北部非洲本土的开发,推动了东西方文明、非洲文明和外来文明的交流和融合。

    (一)迦太基对非洲的开发和文明传播

    迦太基海上商业帝国的辉煌,很容易掩盖其农业的成就。当推罗人和西顿人在非洲沿岸的巴巴里(Barbara)建立最初的停靠港时,当地居民还处于新石器时代,金属器具是从外国输入的。古代非洲(不包括希腊罗马人所说的不属于利比亚的埃及)居民也栽种过某些作物和饲养过某些牲畜,但只有到了迦太基成立和崛起以后,非洲才在迦太基的引领下有了真正的农业。从某种意义上说,迦太基也是非洲的首都。迦太基人的农业开发是成功的,他们大量使用奴隶劳动,对土著的示范和对非洲的物质繁荣都做出了贡献。它把文明传播到它所兼并的国家,超出自己的领土,传播到其附属国和同盟国中去。(83)迦太基人从西亚带来了葡萄、橄榄、无花果、石榴等,在土质和气候适宜的非洲广泛种植,迦太基的石榴被罗马人称为“迦太基苹果”。

    公元前5世纪在西西里的殖民活动受挫后,迦太基加快了开发非洲本土农业的步伐,将撒哈拉最肥沃的土地和大流沙区纳入其领土,变为农业区。从公元前3世纪末开始,谷物和葡萄园的收入就已经成为迦太基统治阶层的主要收入来源了。(84)古希腊罗马作家共66次提及迦太基著名农学家马戈(Mago)的耕作法。(85)柏柏尔人同迦太基人的接触比其他非洲民族都早,受迦太基人的影响也最大。柏柏尔人向迦太基人学会了农业,迈入了农耕文明。菲利普C.内勒(Phillip C.Naylor)认为,迦太基人懂得欣赏不同民族的文化,其复杂的文化是通过万神殿和语言来表达的,给善于接受文化的柏柏尔人留下深刻印象。(86)迦太基文化对柏柏尔人的思想和习俗的影响,成为跨文化交往的典范。努米底亚人在公元前3世纪向迦太基人学会了建设城市,引入了马戈农书所介绍的耕作方法,还吸取了迦太基的文化及宗教。(87)柏柏尔人在公元前202年建立了努米底亚王国,他们所建的城市科塔(Cirta)也出现了迦太基式的托非特祭坛。(88)努米底亚贵族热衷于同迦太基上层通婚,给子女取布匿人的名字。(89)努米底亚和迦太基的铜币设计都是一样的,二者出现了文化同化的现象。

    20世纪以来的考古挖掘表明,迦太基人对北部非洲内陆进行了深度渗透,建立了很多殖民地和临时性商业居留地,创建了一系列城市,使得迦太基文明得以在非洲广泛传播。1970年代,考古学家在非洲卡本半岛(Cap Bon Peninsula,今突尼斯东北)发现了迦太基人所建的殖民地盖赫库阿勒(Kerkouane)的遗迹。该城可能建于公元前6世纪,提供了迦太基人和利比亚土著人交往的线索。盖赫库阿勒居民敬拜的神是腓尼基人和迦太基人的神灵麦勒卡特和其子锡德(Sid)、塔尼特等。该城大部分房屋都有中央庭院,院中建有迦太基人用作洁净仪式的浴室。盖赫库阿勒居民使用迦太基语,但利比亚的元素随处可见,如利比亚土著居民的殡葬仪式。盖赫库阿勒城还发现了描写希腊英雄奥德修斯形象的雅典式黑彩陶制酒壶和爱奥尼亚式杯子,以及具有希腊风格的建筑,反映了该城多元文化的特点。(90)

    斯特拉波告诉我们,迦太基被毁后在很长一段时期内和希腊的科林斯一样,一直处于荒无人烟的状态。(91)罗马有识之士试图推出重建迦太基的计划。据阿庇安记载,在迦太基荒芜了30年后,罗马保民官盖约·格拉古(Gaius Gracchus,前154-前121)在公元前123年提出移民6000人到迦太基的计划,而不顾西庇阿在毁灭迦太基的时候诅咒迦太基将永作牧场的事实。(92)恺撒(Gaius Julius Caesar,前100-前44)曾在公元前44年重提移民迦太基的计划,以解决贫民的土地问题,但未来得及实施便遇刺身亡。奥古斯都(Augustus,前63-公元14)在公元前29年派遣移民重建迦太基,并将非洲行省(亦称阿非利加行省)的首府从乌提卡迁到迦太基。(93)罗马退役军人、殖民者、商人和工程人员接踵而来,将新迦太基建设成为罗马帝国西部最大的城市,也是罗马帝国著名的文化、教育和学术中心。斯特拉波指出,和利比亚境内的其他人和城市一样,迦太基是一座繁荣兴旺的城市。(94)在迦太基以东的港口大莱普提斯(Leptis Magna),诞生了罗马帝国首位出身非洲的元首塞普提米乌斯·塞维鲁(Septimius Severus,146-211),塞维鲁王朝也成为罗马帝国历史上的第一个非洲王朝。

    (二)迦太基文明与希腊文明、埃及文明和罗马文明的交流和交融

    迦太基文化受到希腊文明、埃及文明的明显影响,表现出混合性或者折中主义的特点,体现在艺术、宗教和物质文化等方面。至少从公元前6世纪起,迦太基人就将希腊和埃及的建筑风格与迦太基的建筑风格结合起来,形成了混合式的迦太基建筑艺术。

    迦太基与西西里希腊人的战争带来了文化宗教的交流,导致了二者文化融合的范围远远超出了西西里海岸地区而深入内陆,这是西西里殖民化以来的显著特征之一。从公元前4世纪起,希腊文化对迦太基的影响越来越明显,但是迦太基的文化并未失去自身传统,尤其是在语言和宗教方面。公元前4世纪早期,迦太基当局禁止迦太基人学希腊语,但却未见成效,希腊语在北部非洲成了仅次于布匿语的第二语言。(95)大量证据显示,迦太基人会说希腊语,阅读希腊著作,着希腊服饰,崇拜希腊神祇。在西西里,出现了很多具有希腊文化和腓尼基文化特色的器具和艺术品,如西西里出现的铸有六头金牛肖像的酒杯。迦太基人的墓葬中出现了大量希腊钱币,主要是希腊铜币,这可能是对外贸易的需要。斯特拉波说,在迦太基卫城柏萨的高处,有一座阿斯克勒庇俄斯(Asclepius)神庙。(96)阿斯克勒庇俄斯是希腊的医神,它在迦太基出现说明迦太基人对于希腊医神的崇拜和对祛除疾病、战胜瘟疫的渴求,亦反映希腊宗教对迦太基人的影响。

    另外,迦太基人对希腊神赫拉克勒斯的崇拜,亦反映了希腊文化对迦太基文化的影响。从某种意义上说,赫拉克勒斯崇拜具有文化的多样性与跨文化的相互关联性,比其他任何一位神都更适合古代地中海居民。从公元前6世纪起,在地中海中西部,赫拉克勒斯开始越来越多地被与迦太基的麦勒卡特联系在一起。当汉尼拔试图寻找一位天国的精神领袖以将西方的人们联合起来抗衡罗马之际,他选中的就是赫拉克勒斯-麦勒卡特。(97)泽内达·A.拉戈金强调腓尼基宗教亦对希腊(尤其是希腊大陆的东海岸)的宗教有影响(98),但未提供有力的证据。

    在希腊人的影响下,迦太基人的丧葬方式也发生了改变,从土葬转为了火葬。在奥古斯都重建迦太基城之后,这里到处都有希腊风格的雕像。20世纪以来蓬勃发展的考古学,让我们重构古代迦太基的文化成为可能。考古学家在突尼斯圣路易山丘的斜坡上,挖掘出了为罗马人纵火毁城的灰烬层所覆盖的房屋以及迦太基的一个街区,都反映出这里具有希腊化时代的特点,如迦太基街区的房屋规格较小、由各个房间所环绕着的中央庭院作为建筑物的光源,等等。

    埃及人的宗教和来世观也对迦太基人产生了影响。迦太基人崇拜埃及的奥里西斯(Osiris)神,将迦太基主神巴尔-哈蒙(Baal Hammon)和埃及的阿蒙神相提并论。受埃及人的影响,迦太基人认为人死后会有两个灵魂。迦太基人还向埃及人学会了制作木乃伊的技术。公元前4世纪的迦太基墓葬出土的剃刀上的宗教主题图案,集中反映了埃及和迦太基的神灵和神圣符号。在迦太基人的护身符上,出现了埃及的神灵、动物形象和神圣符号。从埃及进口的圣甲虫和首饰(99),在迦太基大受欢迎。在迦太基人的陪葬品中,出现了描绘埃及神祇和法老形象的祭品,这被认为有助于驱邪。

    迦太基公元前146年亡国后的这段历史,被称为晚期布匿或新布匿时期。虽然罗马文化成为官方文化,但迦太基人的文化并没有中断,在语言、建筑、绘画、雕刻、美术、教育等领域都有体现。迦太基艺术有其自身的特点,表现为对细节和对称的过度关注。非洲很多地方官员都使用了迦太基式的“苏菲特”的名称。迦太基神庙保留了下来,但祭司取罗马的名字,着罗马的托袈。(100)迦太基的神名改成了罗马的神名,巴尔-哈蒙变成了罗马农神萨图尔(Saturn),塔尼特变成为了罗马天后朱诺(Juno)。巴尔-萨图尔被视为罗马的丰收之神,成为罗马非洲行省农业发展在宗教文化领域的反映。(101)罗马人将腓尼基人语言称为布匿语,迦太基战争被罗马人称为布匿战争。迦太基被罗马征服之后,拉丁语的重要性超过希腊语,成为迦太基人从小必须学习的语言,城市中受教育者两种语言都会。但在整个罗马帝国的非洲行省,布匿语仍是官方通行语言。公元前8年,非洲大莱普提斯城的迦太基籍城市长官捐助了一座纪念碑,使用了拉丁语和迦太基语的双语铭文。捐助者的名字“汉尼拔·塔帕皮乌斯·鲁福斯”,就是罗马姓氏和迦太基本名的结合。(102)在罗马帝国基督教神学家圣奥古斯丁(St.Aurelius Augustinus,354-430)写作的时代,布匿语仍出现在非洲的拉丁书信和铭文中,并一直使用到了5世纪早期,显示了语言传统强大的惯性和生命力。439年,汪达尔首领盖萨里克(Geiseric,389-477)占领了迦太基,建立了汪达尔王国,罗马-迦太基的时代结束了。(103)

    结语

    迦太基研究专家沃明顿指出:“作为城邦国家的迦太基试图统治一个帝国,并能够维持3个世纪之久的统治,比雅典人的统治还长了三分之一。”(104)在古代世界,迦太基的成功之处不仅体现在拥有的巨额财富,而且包括它统治的长久的稳定和持久,这甚至赢得了其对手的尊敬。西塞罗强调,迦太基如果没有使用智慧和政治谋略的话,就不可能维持一个帝国达600年之久。(105)亚里士多德将迦太基的政体和斯巴达、克里特的政体归为一类,认为它们是最接近理想的混合政体,高度评价迦太基设施优良、政治稳定和制度修明。(106)迦太基也是亚里士多德所研究的唯一非希腊国家。古希腊作家伊索克拉底(Isocrates,前436-前338)也称赞迦太基和斯巴达是世界上治理最好的两个民族。(107)

    财富和政体具有密切的关系。迦太基是世界历史上第一个有组织的商业国家(108),进行了地中海商业帝国建设的第一次尝试。迦太基的财富来自它在非洲和西地中海的区域性帝国,它在海上力量保护下的海外贸易,为其维持一支强大的海军提供了财政支持。迦太基商业的成功使之把政治权力给了财富寡头,在公民人口不足的情况下供养了一支雇佣军,维护了迦太基政权的稳定。迦太基在公元前6世纪成为一个强大的国家。到公元前5世纪,随着迦太基人在西西里的布匿—希腊战争中遇挫,迦太基人在西地中海的贸易活动暂时走向了衰落。到公元前4世纪中叶,随着非洲的开发和对非洲属国资源的掠夺,迦太基的商业再次繁荣起来。在丧失西班牙的海外领地之后,迦太基仍被波里比乌斯称为世界上最富有的城市。(109)迦太基对外来文化采取了开放的态度,越来越多地受到了希腊文化的影响。

    迦太基建构了一个地中海世界商贸殖民网,也为各殖民地之间的政治、经济、社会文化的交流和非洲的开发做出了积极的贡献。在杜丹看来,迦太基的历史表现出了一种文明的肤浅和虚弱,其主要推动力是获取财富和扩展商业,其诸多胜利转瞬即逝。迦太基没有依靠经济优势去争取政治和文化的进步,未能在文化领域取得辉煌的成果。这也是迦太基在和罗马的文明较量中失败的重要原因。但我们必须看到,作为西方文明源头的古典世界从来不是希腊—罗马文明的特有成果,而是包括迦太基文明在内的不同文化与民族之间互动、交流和融合的结果。

    注释:

    ①迦太基研究先驱彻什的《非洲帝国之迦太基》(Alfre John Church,Carthage of the Empire of Africa,New York:G.P.Putnam’s Sons,1899),论述了迦太基发展成为非洲帝国的过程。斯密斯的《迦太基和迦太基人》(R.Bosworth Smith,Carthage and the Carthaginians,London:Longmans,Green and Co.,1913)考察了迦太基、迦太基人的发展历程及其对古代世界的影响。沃明顿的《迦太基》(B.H.Warmington,Carthage,London:Robert Hale Limited,1960)从汉诺的非洲航行来追溯迦太基历史。莫斯卡蒂的《迦太基艺术与文明》(Sabatino Moscati,Carthage:Art et Civilization,Milan:Jaca Book,1983)重点介绍了迦太基的文化。皮卡德的《迦太基:从诞生到终结悲剧的布匿历史与文化研究》(Gilbert Charles Picard,Carthage:A Survey of Punic History and Culture from Its Birth to Final Tragedy,London:Sidgwick & Jackson,1987)是一部涉及政治、经济、文化和外交等内容的综合性著作。霍约斯的《迦太基人》(Dexter Hoyos,The Carthaginians,London and New York:Taylor and Francis Group Press,2010)考察了迦太基崛起的进程及商业帝国的建立。迈尔斯的《迦太基必须毁灭:古文明的兴衰》(Richard Miles,Carthage Must Be Destroyed:The Rise and Fall of An Ancient Mediterranean Civilization,London:Allen Lane,2010)借鉴了近现代研究成果,学术价值较高。特里布拉图编的《古代西西里的语言联系》(Olga Tribulato,ed.,Language and Linguistic Contact in Ancient Sicily,Cambridge,U.K.:Cambridge University Press,2012)考察了布匿语等古代西西里语言和文化。国内研究论文有数篇。陈恒的《迦太基建城日期小考》,《常熟高专学报》2001年第1期,对迦太基建城进行了考察;杜建军、刘自强的《论布匿战争爆发的原因》(2002)从政治经济文化进行了探讨。总的来说,关于迦太基的崛起等问题还有进一步研究的较大空间。

    ②Alfred J.Church,Carthage of the Empire of Africa,p.11.

    ③Dionysius of Halicarnassus,The Roman Antiquities,Loeb Classical Library,trans.Earnest Cary,Vol.1,Cambridge,Mass.:Harvard University Press,1937,p.245.本文所引古希腊罗马文献,出自“罗布古典丛书”(Loeb Classical Library)。

    ④Appian,The Punic Wars,I.1,Loeb Classical Library,Vol.1,trans.Horace White,Cambridge,Mass.:Harvard University Press,1912,p.403.

    ⑤Andrea Salimbeti,Raffaele D’ Amato,The Carthaginians,6th-2nd Century BC,Oxford:Osprey Publishing,1991,p.4.

    ⑥Appian,The Punic Wars,XIX 132,Vol.1,p.637.

    ⑦Strabo,Geography,XVII.3.14,Loeb Classical Library,Vol.8,trans.Horace Leonard Jones,Cambridge,Mass.:Harvard University Press,1928,p.183.注:斯塔德(Stadium,复数为Stadia),古希腊长度单位。1斯塔德合625英尺约等于184.97米。

    ⑧理查德·迈尔斯:《迦太基必须灭亡:古文明的兴衰》,孟驰译,北京:社科文献出版社,2016年,第85页。

    ⑨B.H.Warmington,Carthage,London:Robert Hale Limited,1960,p.23.陈恒采用了辛塔斯所提出的公元前725年的说法,参见陈恒:《迦太基建城日期考》,《常熟高专学报》2001年第1期。

    ⑩Maria Giulia Amadasi Guzzo,”Phoenician and Punic in Sicily,” in Olga Tribulato,ed.,Language and Linguistic Contact in Ancient Sicily,Cambridge,U.K.:Cambridge University Press,2012,p.130.

    (11)B.H.Warmington,Carthage,p.22.

    (12)Benjamin W.Wells,”Business and Politics at Carthage,” The Sewanee Review,Vol.28,No.4(Oct 1920),p.507.

    (13)Olga Tribulato,”So Many Siciliies,” in Olga Tribulato,ed.,Language and Linguistic Contact in Ancient Sicily,pp.15-16.

    (14)B.H.Warmington,Carthage,p.23.

    (15)J.N.Coldstream,Geometric Greece,London:Routledge,1977,p.240.

    (16)杜丹:《古代世界经济生活》,志扬译,北京:商务印书馆,1963年,第187页。

    (17)理查德·迈尔斯:《迦太基必须灭亡:古文明的兴衰》,第37页。

    (18)Diodorus of Sicily,Library of History,XXV.10,Loeb Classical Library,Vol.11,trans.Francis R.Walton,Cambridge,Mass.:Harvard University Press,1957,p.155.

    (19)Simon Hornblower,Antony Spawforth & Esther Eidinow,eds.,The Ox ford Classical Dictionary,Oxford:Oxford University Press,2012,p.284.

    (20)Benjamin W.Wells,”Business and Politics at Carthage,” p.505.

    (21)杜丹:《古代世界经济生活》,第188页。

    (22)Pliny the Elder,Natural History,V.67,Vol.2,p.271.

    (23)Polybius,The Histories,VI.52,Loeb Classical Library,Vol.3,trans.W.R.Paton,Cambridge,Mass:Harvard University Press,1923,p.385.

    (24)Alfred J.Church,Carthage of the Empire of Africa,p.100.

    (25)Pliny the Elder,Natural History,V.8,Vol.2,p.223.

    (26)Alfred J.Church,Carthage of the Empire of Africa,p.95.

    (27)Benjamin W.Wells,”Business and Politics at Carthage,” p.503.

    (28)Thucydides,History of the Peloponnesian War,VI.2,Loeb Classical Library,Vol.3,trans.C.Forster Smith,London:W.Heinemann; New York:G.Putnam’s Sons,1919,p.183.

    (29)B.H.Warmington,Carthage,p.34.

    (30)Olga Tribulato,”So Many Siciliies,” p.14.

    (31)Diodorus of Sicily,Library of History,XXXVI.3,Loeb Classical Library,Vol.12,trans.Francis R.Walton,Cambridge,Mass.:Harvard University Press,1967,p.237.

    (32)Diodorus of Sicily,Library of History,XIV.65,Loeb Classical Library,Vol.6,trans.C.H.Oldfather,Cambridge,Mass.:Harvard University Press,1954,p.190.

    (33)B.H.Warmington,Carthage,pp.40-55.

    (34)R.Bosworth Smith,Carthage and the Carthaginians,London:Longmans,Green and Co.,1913,p.57.

    (35)Polybius,The Histories,III.37,Loeb Classical Library,Vol.2,trans.W.R.Paton,Cambridge,Mass.:Harvard University Press,1923,p.87.

    (36)Pliny the Elder,Natural History,V.1,Vol.2,p.219.

    (37)Strabo,Geography,XVII.3.1,Vol.8,p.155.

    (38)Strabo,Geography,XVI.2.38,Loeb Classical Library,Vol.7,trans.Horace Leonard Jones,Cambridge,Mass.:Harvard University Press,1930,p.287.

    (39)Diodorus of Sicily,Library of History,XX 3-6,Loeb Classical Library,Vol.10,trans.Russel M.Geer,Cambridge,Mass.:Harvard University Press,1954,pp.150-157.

    (40)Maria Eugenia Aubet,The Phoenicians and the West:Politics,Colonies and Trade,Cambridge U.K.:Cambridge University Press,2001,p.284.

    (41)Diodorus of Sicily,Library of History,XI.1,Loeb Classical Library,Vol.4,trans.C.H.Oldfather,Cambridge,Mass.:Harvard University Press,1946,p.123.

    (42)B.H.Warmington,Carthage,p.46.

    (43)R.Bosworth Smith,Carthage and the Carthaginians,p.21.

    (44)J.博德曼、N.G.L.哈蒙德等编:《剑桥古代史》,第四卷,张强等译,北京:中国社会科学出版社,2020年,第848页。

    (45)Maria Giulia Amadasi Guzzo,”Phoenician and Punic in Sicily,” p.126.

    (46)Olga Tribulato,”So Many Siciliies,” pp.17-29.

    (47)Polybius,The Histories,III.22-23,Vol.2,pp.53-55.按照波里比乌斯的说法,菲尔角在迦太基的前端,方向朝北,可能是今天的伯恩角,但是也可能是法里纳角(C.Farina)。

    (48)M.Cary,”A Forgotten Treaty between Rome and Carthage,” The Journal of Roman Studies,Vol.9,1919,p.76.

    (49)A.E.阿斯汀等编:《剑桥古代史》,第八卷,陈恒等译,北京:中国社会科学出版社,2020年,第21页。

    (50)J.博德曼、N.G.L.哈蒙德等编:《剑桥古代史》,第四卷,第492页。

    (51)A.E.阿斯汀等编:《剑桥古代史》,第八卷,第20—22页。

    (52)Polybius,The Histories,III.24,Vol.2,p.57.

    (53)J.博德曼、N.G.L.哈蒙德等编:《剑桥古代史》,第四卷,第486页。

    (54)Herodotus,Histories,IV.196,Loeb Classical Library,Vol.2,trans.A.D.Godley,Cambridge,Mass.:Harvard University Press,1926,p.399.

    (55)Polybius,The Histories,VI.52,Vol.3,p.387.

    (56)Polybius,The Histories,II.1,Vol.2,p.241.

    (57)Plutarch,Lives,”Marcus Cato,” LXX 1,Loeb Classical Library,Vol.2,trans.Bernadotte Perrin,Cambridge,Mass.:Harvard University Press,1914,p.383.

    (58)Appian,The Punic Wars,XIX.127-135,Vol.1,pp.627—637; Diodorus of Sicily,Library of History,XXXIII.24,Vol.11,p.435.

    (59)Benjamin W.Wells,”Business and Politics at Carthage,” p.518.

    (60)Appian,The Punic Wars,XX 135,Vol.1,p.643.

    (61)Strabo,Geography,XVII.3.15,Vol.8,p.185.

    (62)A.E.阿斯汀等编:《剑桥古代史》,第八卷,第172页。

    (63)Benjamin W.Wells,”Business and Politics at Carthage,” p.506.

    (64)Polybius,The Histories,VI.56,Vol.3,p.393.

    (65)Diodorus of Sicily,Library of History,XXIII.11,Vol.11,p.95.

    (66)Diodorus of Sicily,Library of History,XXVI.1,Vol.11,p.179.

    (67)Diodorus of Sicily,Library of History,XX 10,Loeb Classical Library,Vol.10,trans.Russel M.Geer,Cambridge,Mass.:Harvard University Press,1954,p.169.

    (68)Diodorus of Sicily,Library of History,XXXIV/XXXV.33,Vol.12,pp.131-133.

    (69)Silius Italicus,Punica,I,171-172,Loeb Classical Library,Vol.1,trans.J.D.Duff,Cambridge,Mass.:Harvard University Press,1927,p.17.

    (70)J.Starks,”Fides Aeneia:The Transference of Punic Stereotypes in the Aeneid,” Classical Journal,Vol.94,No.3(Feb.-Mar.1999),pp.250-260.

    (71)Diodorus of Sicily,Library of History,XIV.46,Vol.6,p.141.

    (72)Diodorus of Sicily,Library of History,XIV.76,Vol.6,p.217.

    (73)Diodorus of Sicily,Library of History,XIII.96; XⅢ,108,Loeb Classical Library,Vol.5,trans.C.H.Oldfather,Cambridge,Mass.:Harvard University Press,1956,pp.395—397; 429.

    (74)Diodorus of Sicily,Library of History,XX.3,Loeb Classical Library,Vol.10,trans.Russel M.Geer,Cambridge,Mass.:Harvard University Press,1954,p.150.

    (75)Diodorus of Sicily,Library of History,XX.14,Vol.10,p.179.

    (76)Plutarch,Moralia,171C-D,Loeb Classical Library,Vol.2,trans.F.C.Babbitt,Cambridge,Mass.:Harvard University Press,1928,p.493; Diodorus of Sicily,Library of History,XX.14,Vol.10,p.179.

    (77)威尔·杜兰:《凯撒与基督》下册,周杰译,幼狮文化公司译,北京:东方出版社,2003年,第601页。麦勒卡特是推罗人的主神,但在迦太基宗教体系中的地位却有了明显的下降,并非主神。

    (78)Simon Hornblower,Antony Spawforth & Esther Eidinow,eds.,The Ox ford Classical Dictionary,p.284.

    (79)泽内达·A.拉戈金:《亚述:从帝国的崛起到尼尼微的陷落》,吴晓真译,北京:商务印书馆,2020年,第137页。

    (80)F.W.沃克班克等编:《剑桥古代史》,第七卷第二分册,胡玉娟等译,北京:中国社会科学出版社,2020年,第567页。

    (81)María Eugenia Aubet,The Phoenicians and the West:Politics,Colonies and Trade,pp.251-252.

    (82)Andrea Salimbeti,Raffaele D’ Amato,The Carthaginians,6th-2nd Century BC,p.10.

    (83)杜丹:《古代世界经济生活》,第181—183页。

    (84)Benjamin W.Wells,”Business and Politics at Carthage,” p.514.

    (85)理查德·迈尔斯:《迦太基必须灭亡:古文明的兴衰》,第132页。

    (86)菲利普C.内勒:《北非史》,韩志斌等译,北京:中国大百科全书出版社,2013年,第25页。

    (87)Simon Hornblower,Antony Spawforth & Esther Eidinow.eds.,The Ox ford Classical Dictionary,p.284.

    (88)B.H.Warmington,Carthage,p.209.

    (89)夏尔·安德烈·朱利安:《北非史》,第一卷上册,上海新闻出版系统“五七干校”翻译组译,上海:上海人民出版社,1974年,第167页。

    (90)理查德·迈尔斯:《迦太基必须灭亡:古文明的兴衰》,第105—107页。

    (91)Strabo,Geography,XVII.3.15,Vol.8,p.185.

    (92)Appian,The Civil Wars,I.24,Loeb Classical Library,Vol.3,trans.Horace White,Cambridge,Mass.:Harvard University Press,1913,p.49.

    (93)Appian,The Punic Wars,XX.136,Vol.1,p.645.

    (94)Strabo,Geography,XVII.3.15,Vol.8,p.185.

    (95)徐晓旭:《“罗马和平”下不同文化的相遇》,《光明日报》2020年9月21日,第14版。

    (96)Strabo,Geography,XVII.3.14,Vol.8,p.185.

    (97)理查德·迈尔斯:《迦太基必须灭亡:古文明的兴衰》,第27页。

    (98)泽内达·A.拉戈金:《亚述:从帝国的崛起到尼尼微的陷落》,第144页。

    (99)F.W.沃克班克等编:《剑桥古代史》第七卷第二分册,第564页。

    (100)夏尔·安德烈·朱利安:《北非史》,第129页。

    (101)B.H.Warmington,Carthage,p.42.

    (102)理查德·迈尔斯:《迦太基必须灭亡:古文明的兴衰》,第370页。

    (103)Mattew Bunson.ed.,A Dictionary of the Roman Empire,Oxford:Oxford University Press,1995,p.98.

    (104)B.H.Warmington,Carthage,p.42.

    (105)Cicero,De Re Publica,I,”Fragments,” 3,Loeb Classical Library,trans.C.W.Keyes,Cambridge,Mass.:Harvard University Press,1928,p.109.

    (106)亚里士多德:《政治学》,吴寿彭译,北京:商务印书馆,1965年,第106页。

    (107)F.W.沃克班克等编:《剑桥古代史》,第七卷第二分册,第537页。

    (108)Benjamin W.Wells,”Business and Politics at Carthage,” p.499.

    (109)Polybius,The Histories,XVIII.35,Loeb Classical Library,Vol.5,trans.W.R.Paton,Cambridge,Mass.:Harvard University Press,1926,p.163.

    本文转自《外国问题研究》2023年第2期

  • Yuval Noah Harari 《Nexus》

    Contents
    PROLOGUE
    PART I: Human Networks
    CHAPTER 1: What Is Information?
    CHAPTER 2: Stories: Unlimited Connections
    CHAPTER 3: Documents: The Bite of the Paper Tigers
    CHAPTER 4: Errors: The Fantasy of Infallibility
    CHAPTER 5: Decisions: A Brief History of Democracy and Totalitarianism
    PART II: The Inorganic Network
    CHAPTER 6: The New Members: How Computers Are Different from Printing Presses
    CHAPTER 7: Relentless: The Network Is Always On
    CHAPTER 8: Fallible: The Network Is Often Wrong
    PART III: Computer Politics
    CHAPTER 9: Democracies: Can We Still Hold a Conversation?
    CHAPTER 10: Totalitarianism: All Power to the Algorithms?
    CHAPTER 11: The Silicon Curtain: Global Empire or Global Split?
    EPILOGUE

    Prologue

    We have named our species Homo sapiens—the wise human. But it is debatable how well we have lived up to the name.
    Over the last 100,000 years, we Sapiens have certainly accumulated enormous power. Just listing all our discoveries, inventions, and conquests would fill volumes. But power isn’t wisdom, and after 100,000 years of discoveries, inventions, and conquests humanity has pushed itself into an existential crisis. We are on the verge of ecological collapse, caused by the misuse of our own power. We are also busy creating new technologies like artificial intelligence (AI) that have the potential to escape our control and enslave or annihilate us. Yet instead of our species uniting to deal with these existential challenges, international tensions are rising, global cooperation is becoming more difficult, countries are stockpiling doomsday weapons, and a new world war does not seem impossible.
    If we Sapiens are so wise, why are we so self-destructive?
    At a deeper level, although we have accumulated so much information about everything from DNA molecules to distant galaxies, it doesn’t seem that all this information has given us an answer to the big questions of life: Who are we? What should we aspire to? What is a good life, and how should we live it? Despite the stupendous amounts of information at our disposal, we are as susceptible as our ancient ancestors to fantasy and delusion. Nazism and Stalinism are but two recent examples of the mass insanity that occasionally engulfs even modern societies. Nobody disputes that humans today have a lot more information and power than in the Stone Age, but it is far from certain that we understand ourselves and our role in the universe much better.

    Why are we so good at accumulating more information and power, but far less successful at acquiring wisdom? Throughout history many traditions have believed that some fatal flaw in our nature tempts us to pursue powers we don’t know how to handle. The Greek myth of Phaethon told of a boy who discovers that he is the son of Helios, the sun god. Wishing to prove his divine origin, Phaethon demands the privilege of driving the chariot of the sun. Helios warns Phaethon that no human can control the celestial horses that pull the solar chariot. But Phaethon insists, until the sun god relents. After rising proudly in the sky, Phaethon indeed loses control of the chariot. The sun veers off course, scorching all vegetation, killing numerous beings, and threatening to burn the earth itself. Zeus intervenes and strikes Phaethon with a thunderbolt. The conceited human drops from the sky like a falling star, himself on fire. The gods reassert control of the sky and save the world.

    Two thousand years later, when the Industrial Revolution was making its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled “The Sorcerer’s Apprentice.” Goethe’s poem (later popularized as a Walt Disney animation starring Mickey Mouse) tells how an old sorcerer leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, like fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch the water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In panic, the apprentice cuts the enchanted broom in two with an ax, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: “The spirits that I summoned, I now cannot rid myself of again.” The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice—and to humanity—is clear: never summon powers you cannot control.

    What do the cautionary fables of the apprentice and of Phaethon tell us in the twenty-first century? We humans have obviously refused to heed their warnings. We have already driven the earth’s climate out of balance and have summoned billions of enchanted brooms, drones, chatbots, and other algorithmic spirits that may escape our control and unleash a flood of unintended consequences.

    What should we do, then? The fables offer no answers, other than to wait for some god or sorcerer to save us. This, of course, is an extremely dangerous message. It encourages people to abdicate responsibility and put their faith in gods and sorcerers instead. Even worse, it fails to appreciate that gods and sorcerers are themselves a human invention—just like chariots, brooms, and algorithms. The tendency to create powerful things with unintended consequences started not with the invention of the steam engine or AI but with the invention of religion. Prophets and theologians have repeatedly summoned powerful spirits that were supposed to bring love and joy but ended up flooding the world with blood.

    The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power. What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans.

    Accordingly, it isn’t our individual psychology that causes us to abuse power. After all, alongside greed, hubris, and cruelty, humans are also capable of love, compassion, humility, and joy. True, among the worst members of our species, greed and cruelty reign supreme and lead bad actors to abuse power. But why would human societies choose to entrust power to their worst members? Most Germans in 1933, for example, were not psychopaths. So why did they vote for Hitler?

    Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. The main argument of this book is that humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes them to use power unwisely. Our problem, then, is a network problem.

    Even more specifically, it is an information problem. Information is the glue that holds networks together. But for tens of thousands of years, Sapiens built and maintained large networks by inventing and spreading fictions, fantasies, and mass delusions—about gods, about enchanted broomsticks, about AI, and about a great many other things. While each individual human is typically interested in knowing the truth about themselves and the world, large networks bind members and create order by relying on fictions and fantasies. That’s how we got, for example, to Nazism and Stalinism. These were exceptionally powerful networks, held together by exceptionally deluded ideas. As George Orwell famously put it, ignorance is strength.

    The fact that the Nazi and Stalinist regimes were founded on cruel fantasies and shameless lies did not make them historically exceptional, nor did it preordain them to collapse. Nazism and Stalinism were two of the strongest networks humans ever created. In late 1941 and early 1942, the Axis powers came within reach of winning World War II. Stalin eventually emerged as the victor of that war,1 and in the 1950s and 1960s he and his heirs also had a reasonable chance of winning the Cold War. By the 1990s liberal democracies had gained the upper hand, but this now seems like a temporary victory. In the twenty-first century, some new totalitarian regime may well succeed where Hitler and Stalin failed, creating an all-powerful network that could prevent future generations from even attempting to expose its lies and fictions. We should not assume that delusional networks are doomed to failure. If we want to prevent their triumph, we will have to do the hard work ourselves.

    THE NAIVE VIEW OF INFORMATION

    It is difficult to appreciate the strength of delusional networks because of a broader misunderstanding about how big information networks—whether delusional or not—operate. This misunderstanding is encapsulated in something I call “the naive view of information.” While fables like the myth of Phaethon and “The Sorcerer’s Apprentice” present an overly pessimistic view of individual human psychology, the naive view of information disseminates an overly optimistic view of large-scale human networks.

    The naive view argues that by gathering and processing much more information than individuals can, big networks achieve a better understanding of medicine, physics, economics, and numerous other fields, which makes the network not only powerful but also wise. For example, by gathering information on pathogens, pharmaceutical companies and health-care services can determine the true causes of many diseases, which enables them to develop more effective medicines and to make wiser decisions about their usage. This view posits that in sufficient quantities information leads to truth, and truth in turn leads to both power and wisdom. Ignorance, in contrast, seems to lead nowhere. While delusional or deceitful networks might occasionally arise in moments of historical crisis, in the long term they are bound to lose to more clear-sighted and honest rivals. A health-care service that ignores information about pathogens, or a pharmaceutical giant that deliberately spreads disinformation, will ultimately lose out to competitors that make wiser use of information. The naive view thus implies that delusional networks must be aberrations and that big networks can usually be trusted to handle power wisely.

    The naive view of information

    Of course, the naive view acknowledges that many things can go wrong on the path from information to truth. We might make honest mistakes in gathering and processing the information. Malicious actors motivated by greed or hate might hide important facts or try to deceive us. As a result, information sometimes leads to error rather than truth. For example, partial information, faulty analysis, or a disinformation campaign might lead even experts to misidentify the true cause of a particular disease.

    However, the naive view assumes that the antidote to most problems we encounter in gathering and processing information is gathering and processing even more information. While we are never completely safe from error, in most cases more information means greater accuracy. A single doctor wishing to identify the cause of an epidemic by examining a single patient is less likely to succeed than thousands of doctors gathering data on millions of patients. And if the doctors themselves conspire to hide the truth, making medical information more freely available to the public and to investigative journalists will eventually reveal the scam. According to this view, the bigger the information network, the closer it must be to the truth.

    Naturally, even if we analyze information accurately and discover important truths, this does not guarantee we will use the resulting capabilities wisely. Wisdom is commonly understood to mean “making right decisions,” but what “right” means depends on value judgments that differ between diverse people, cultures, or ideologies. Scientists who discover a new pathogen may develop a vaccine to protect people. But if the scientists—or their political overlords—believe in a racist ideology that advocates that some races are inferior and should be exterminated, the new medical knowledge might be used to develop a biological weapon that kills millions.

    In this case too, the naive view of information holds that additional information offers at least a partial remedy. The naive view thinks that disagreements about values turn out on closer inspection to be the fault of either the lack of information or deliberate disinformation. According to this view, racists are ill-informed people who just don’t know the facts of biology and history. They think that “race” is a valid biological category, and they have been brainwashed by bogus conspiracy theories. The remedy to racism is therefore to provide people with more biological and historical facts. It may take time, but in a free market of information sooner or later truth will prevail.

    The naive view is of course more nuanced and thoughtful than can be explained in a few paragraphs, but its core tenet is that information is an essentially good thing, and the more we have of it, the better. Given enough information and enough time, we are bound to discover the truth about things ranging from viral infections to racist biases, thereby developing not only our power but also the wisdom necessary to use that power well.

    This naive view justifies the pursuit of ever more powerful information technologies and has been the semiofficial ideology of the computer age and the internet. In June 1989, a few months before the fall of the Berlin Wall and of the Iron Curtain, Ronald Reagan declared that “the Goliath of totalitarian control will rapidly be brought down by the David of the microchip” and that “the biggest of Big Brothers is increasingly helpless against communications technology.… Information is the oxygen of the modern age.… It seeps through the walls topped with barbed wire. It wafts across the electrified, booby-trapped borders. Breezes of electronic beams blow through the Iron Curtain as if it was lace.”2 In November 2009, Barack Obama spoke in the same spirit on a visit to Shanghai, telling his Chinese hosts, “I am a big believer in technology and I’m a big believer in openness when it comes to the flow of information. I think that the more freely information flows, the stronger the society becomes.”3

    Entrepreneurs and corporations have often expressed similarly rosy views of information technology. Already in 1858 an editorial in The New Englander about the invention of the telegraph stated, “It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for an exchange of thought between all the nations of the earth.”4 Nearly two centuries and two world wars later, Mark Zuckerberg said that Facebook’s goal “is to help people to share more in order to make the world more open and to help promote understanding between people.”5

    In his 2024 book, The Singularity Is Nearer, the eminent futurologist and entrepreneur Ray Kurzweil surveys the history of information technology and concludes that “the reality is that nearly every aspect of life is getting progressively better as a result of exponentially improving technology.” Looking back at the grand sweep of human history, he cites examples like the invention of the printing press to argue that by its very nature information technology tends to spawn “a virtuous circle advancing nearly every aspect of human well-being, including literacy, education, wealth, sanitation, health, democratization and reduction in violence.”6

    The naive view of information is perhaps most succinctly captured in Google’s mission statement “to organize the world’s information and make it universally accessible and useful.” Google’s answer to Goethe’s warnings is that while a single apprentice pilfering his master’s secret spell book is likely to cause disaster, when a lot of apprentices are given free access to all the world’s information, they will not only create useful enchanted brooms but also learn to handle them wisely.

    GOOGLE VERSUS GOETHE

    It must be stressed that there are numerous cases when having more information has indeed enabled humans to understand the world better and to make wiser use of their power. Consider, for example, the dramatic reduction in child mortality. Johann Wolfgang von Goethe was the eldest of seven siblings, but only he and his sister Cornelia got to celebrate their seventh birthday. Disease carried off their brother Hermann Jacob at age six, their sister Catharina Elisabeth at age four, their sister Johanna Maria at age two, their brother Georg Adolf at age eight months, and a fifth, unnamed brother was stillborn. Cornelia then died from disease aged twenty-six, leaving Johann Wolfgang as the sole survivor from their family.7

    Johann Wolfgang von Goethe went on to have five children of his own, of whom all but the eldest son—August—died within two weeks of their birth. In all probability the cause was incompatibility between the blood groups of Goethe and his wife, Christiane, which after the first successful pregnancy led the mother to develop antibodies to the fetal blood. This condition, known as rhesus disease, is nowadays treated so effectively that the mortality rate is less than 2 percent, but in the 1790s it had an average mortality rate of 50 percent, and for Goethe’s four younger children it was a death sentence.8

    Altogether in the Goethe family—a well-to-do German family in the late eighteenth century—the child survival rate was an abysmal 25 percent. Only three out of twelve children reached adulthood. This horrendous statistic was not exceptional. Around the time Goethe wrote “The Sorcerer’s Apprentice” in 1797, it is estimated that only about 50 percent of German children reached age fifteen,9 and the same was probably true in most other parts of the world.10 By 2020, 95.6 percent of children worldwide lived beyond their fifteenth birthday,11 and in Germany that figure was 99.5 percent.12 This momentous achievement would not have been possible without collecting, analyzing, and sharing massive amounts of medical data about things like blood groups. In this case, then, the naive view of information proved to be correct.

    However, the naive view of information sees only part of the picture, and the history of the modern age was not just about reducing child mortality. In recent generations humanity has experienced the greatest increase ever in both the amount and the speed of our information production. Every smartphone contains more information than the ancient Library of Alexandria13 and enables its owner to instantaneously connect to billions of other people throughout the world. Yet with all this information circulating at breathtaking speeds, humanity is closer than ever to annihilating itself.

    Despite—or perhaps because of—our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardize the ecological foundations of our own species. We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.

    Would having even more information make things better—or worse? We will soon find out. Numerous corporations and governments are in a race to develop the most powerful information technology in history—AI. Some leading entrepreneurs, like the American investor Marc Andreessen, believe that AI will finally solve all of humanity’s problems. On June 6, 2023, Andreessen published an essay titled “Why AI Will Save the World,” peppered with bold statements like “I am here to bring the good news: AI will not destroy the world, and in fact may save it” and “AI can make everything we care about better.” He concluded, “The development and proliferation of AI—far from a risk that we should fear—is a moral obligation that we have to ourselves, to our children, and to our future.”14

    Ray Kurzweil concurs, arguing in The Singularity Is Nearer that “AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize this promise of new technologies.” Kurzweil is keenly aware of the technology’s potential perils, and analyzes them at length, but believes they could be mitigated successfully.15

    Others are more skeptical. Not only philosophers and social scientists but also many leading AI experts and entrepreneurs like Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk, and Mustafa Suleyman have warned the public that AI could destroy our civilization.16 A 2024 article co-authored by Bengio, Hinton, and numerous other experts noted that “unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity.”17 In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10 percent chance to advanced AI leading to outcomes as bad as human extinction.18 In 2023 close to thirty governments—including those of China, the United States, and the U.K.—signed the Bletchley Declaration on AI, which acknowledged that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”19 By using such apocalyptic terms, experts and governments have no wish to conjure a Hollywood image of killer robots running in the streets and shooting people. Such a scenario is unlikely, and it merely distracts people from the real dangers. Rather, experts warn about two other scenarios.

    First, the power of AI could supercharge existing human conflicts, dividing humanity against itself. Just as in the twentieth century the Iron Curtain divided the rival powers in the Cold War, so in the twenty-first century the Silicon Curtain—made of silicon chips and computer codes rather than barbed wire—might come to divide rival powers in a new global conflict. Because the AI arms race will produce ever more destructive weapons, even a small spark might ignite a cataclysmic conflagration.

    Second, the Silicon Curtain might come to divide not one group of humans from another but rather all humans from our new AI overlords. No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even reengineer our bodies and minds—while we can no longer comprehend the forces that control us, let alone stop them. If a twenty-first-century totalitarian network succeeds in conquering the world, it may be run by nonhuman intelligence, rather than by a human dictator. People who single out China, Russia, or a post-democratic United States as their main source for totalitarian nightmares misunderstand the danger. In fact, Chinese, Russians, Americans, and all other humans are together threatened by the totalitarian potential of nonhuman intelligence.

    Given the magnitude of the danger, AI should be of interest to all human beings. While not everyone can become an AI expert, we should all keep in mind that AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage always remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI has the required intelligence to process information by itself, and therefore replace humans in decision making.

    Its mastery of information also enables AI to independently generate new ideas, in fields ranging from music to medicine. Gramophones played our music, and microscopes revealed the secrets of our cells, but gramophones couldn’t compose new symphonies, and microscopes couldn’t synthesize new drugs. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will likely gain the ability even to create new life-forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities.

    Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us—whether to give us a mortgage, to hire us for a job, to send us to prison. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water. And it is more than just human lives we are gambling on. AI could alter the course not just of our species’ history but of the evolution of all life-forms.

    WEAPONIZING INFORMATION

    In 2016, I published Homo Deus, a book that highlighted some of the dangers posed to humanity by the new information technologies. That book argued that the real hero of history has always been information, rather than Homo sapiens, and that scientists increasingly understand not just history but also biology, politics, and economics in terms of information flows. Animals, states, and markets are all information networks, absorbing data from the environment, making decisions, and releasing data back. The book warned that while we hope better information technology will give us health, happiness, and power, it may actually take power away from us and destroy both our physical and our mental health. Homo Deus hypothesized that if humans aren’t careful, we might dissolve within the torrent of information like a clump of earth within a gushing river, and that in the grand scheme of things humanity will turn out to have been just a ripple within the cosmic dataflow.

    In the years since Homo Deus was published, the pace of change has only accelerated, and power has indeed been shifting from humans to algorithms. Many of the scenarios that sounded like science fiction in 2016—such as algorithms that can create art, masquerade as human beings, make crucial life decisions about us, and know more about us than we know about ourselves—are everyday realities in 2024.

    Many other things have changed since 2016. The ecological crisis has intensified, international tensions have escalated, and a populist wave has undermined the cohesion of even the most robust democracies. Populism has also mounted a radical challenge to the naive view of information. Populist leaders such as Donald Trump and Jair Bolsonaro, and populist movements and conspiracy theories such as QAnon and the anti-vaxxers, have argued that all traditional institutions that gain authority by claiming to gather information and discover truth are simply lying. Bureaucrats, judges, doctors, mainstream journalists, and academic experts are elite cabals that have no interest in the truth and are deliberately spreading disinformation to gain power and privileges for themselves at the expense of “the people.” The rise of politicians like Trump and movements like QAnon has a specific political context, unique to the conditions of the United States in the late 2010s. But populism as an antiestablishment worldview long predated Trump and is relevant to numerous other historical contexts now and in the future. In a nutshell, populism views information as a weapon.20

    The populist view of information

    In its more extreme versions, populism posits that there is no objective truth at all and that everyone has “their own truth,” which they wield to vanquish rivals. According to this worldview, power is the only reality. All social interactions are power struggles, because humans are interested only in power. The claim to be interested in something else—like truth or justice—is nothing more than a ploy to gain power. Whenever and wherever populism succeeds in disseminating the view of information as a weapon, language itself is undermined. Nouns like “facts” and adjectives like “accurate” and “truthful” become elusive. Such words are not taken as pointing to a common objective reality. Rather, any talk of “facts” or “truth” is bound to prompt at least some people to ask, “Whose facts and whose truth are you referring to?”

    It should be stressed that this power-focused and deeply skeptical view of information isn’t a new phenomenon and it wasn’t invented by anti-vaxxers, flat-earthers, Bolsonaristas, or Trump supporters. Similar views have been propagated long before 2016, including by some of humanity’s brightest minds.21 In the late twentieth century, for example, intellectuals from the radical left like Michel Foucault and Edward Said claimed that scientific institutions like clinics and universities are not pursuing timeless and objective truths but are instead using power to determine what counts as truth, in the service of capitalist and colonialist elites. These radical critiques occasionally went as far as arguing that “scientific facts” are nothing more than a capitalist or colonialist “discourse” and that people in power can never be really interested in truth and can never be trusted to recognize and correct their own mistakes.22

    This particular line of radical leftist thinking goes back to Karl Marx, who argued in the mid-nineteenth century that power is the only reality, that information is a weapon, and that elites who claim to be serving truth and justice are in fact pursuing narrow class privileges. In the words of the 1848 Communist Manifesto, “The history of all hitherto existing societies is the history of class struggles. Freeman and slave, patrician and plebeian, lord and serf, guildmaster and journeyman, in a word, oppressor and oppressed stood in constant opposition to one another, carried on an uninterrupted, now hidden, now open, fight.” This binary interpretation of history implies that every human interaction is a power struggle between oppressors and oppressed. Accordingly, whenever anyone says anything, the question to ask isn’t, “What is being said? Is it true?” but rather, “Who is saying this? Whose privileges does it serve?”

    Of course, right-wing populists such as Trump and Bolsonaro are unlikely to have read Foucault or Marx, and indeed present themselves as fiercely anti-Marxist. They also greatly differ from Marxists in their suggested policies in fields like taxation and welfare. But their basic view of society and of information is surprisingly Marxist, seeing all human interactions as a power struggle between oppressors and oppressed. For example, in his inaugural address in 2017 Trump announced that “a small group in our nation’s capital has reaped the rewards of government while the people have borne the cost.”23 Such rhetoric is a staple of populism, which the political scientist Cas Mudde has described as an “ideology that considers society to be ultimately separated into two homogeneous and antagonistic groups, ‘the pure people’ versus ‘the corrupt elite.’ ”24 Just as Marxists claimed that the media functions as a mouthpiece for the capitalist class, and that scientific institutions like universities spread disinformation in order to perpetuate capitalist control, populists accuse these same institutions of working to advance the interests of the “corrupt elites” at the expense of “the people.”

    Present-day populists also suffer from the same incoherency that plagued radical antiestablishment movements in previous generations. If power is the only reality, and if information is just a weapon, what does it imply about the populists themselves? Are they too interested only in power, and are they too lying to us to gain power?

    Populists have sought to extricate themselves from this conundrum in two different ways. Some populist movements claim adherence to the ideals of modern science and to the traditions of skeptical empiricism. They tell people that indeed you should never trust any institutions or figures of authority—including self-proclaimed populist parties and politicians. Instead, you should “do your own research” and trust only what you can directly observe by yourself.25 This radical empiricist position implies that while large-scale institutions like political parties, courts, newspapers, and universities can never be trusted, individuals who make the effort can still find the truth by themselves.

    This approach may sound scientific and may appeal to free-spirited individuals, but it leaves open the question of how human communities can cooperate to build health-care systems or pass environmental regulations, which demand large-scale institutional organization. Is a single individual capable of doing all the necessary research to decide whether the earth’s climate is heating up and what should be done about it? How would a single person go about collecting climate data from throughout the world, not to mention obtaining reliable records from past centuries? Trusting only “my own research” may sound scientific, but in practice it amounts to believing that there is no objective truth. As we shall see in chapter 4, science is a collaborative institutional effort rather than a personal quest.

    An alternative populist solution is to abandon the modern scientific ideal of finding the truth via “research” and instead go back to relying on divine revelation or mysticism. Traditional religions like Christianity, Islam, and Hinduism have typically characterized humans as untrustworthy power-hungry creatures who can access the truth only thanks to the intervention of a divine intelligence. In the 2010s and early 2020s populist parties from Brazil to Turkey and from the United States to India have aligned themselves with such traditional religions. They have expressed radical doubt about modern institutions while declaring complete faith in ancient scriptures. The populists claim that the articles you read in The New York Times or in Science are just an elitist ploy to gain power, but what you read in the Bible, the Quran, or the Vedas is absolute truth.26

    A variation on this theme calls on people to put their trust in charismatic leaders like Trump and Bolsonaro, who are depicted by their supporters either as the messengers of God27 or as possessing a mystical bond with “the people.” While ordinary politicians lie to the people in order to gain power for themselves, the charismatic leader is the infallible mouthpiece of the people who exposes all the lies.28 One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.

    We will explore populism at greater depth in chapter 5, but at this point it is important to note that populists are eroding trust in large-scale institutions and international cooperation just when humanity confronts the existential challenges of ecological collapse, global war, and out-of-control technology. Instead of trusting complex human institutions, populists give us the same advice as the Phaethon myth and “The Sorcerer’s Apprentice”: “Trust God or the great sorcerer to intervene and make everything right again.” If we take this advice, we’ll likely find ourselves in the short term under the thumb of the worst kind of power-hungry humans, and in the long term under the thumb of new AI overlords. Or we might find ourselves nowhere at all, as Earth becomes inhospitable for human life.

    If we wish to avoid relinquishing power to a charismatic leader or an inscrutable AI, we must first gain a better understanding of what information is, how it helps to build human networks, and how it relates to truth and power. Populists are right to be suspicious of the naive view of information, but they are wrong to think that power is the only reality and that information is always a weapon. Information isn’t the raw material of truth, but it isn’t a mere weapon, either. There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely. This book is dedicated to exploring that middle ground.

    THE ROAD AHEAD

    The first part of this book surveys the historical development of human information networks. It doesn’t attempt to present a comprehensive century-by-century account of information technologies like script, printing presses, and radio. Instead, by studying a few examples, it explores key dilemmas that people in all eras faced when trying to construct information networks, and it examines how different answers to these dilemmas shaped contrasting human societies. What we usually think of as ideological and political conflicts often turn out to be clashes between opposing types of information networks.

    Part 1 begins by examining two principles that have been essential for large-scale human information networks: mythology and bureaucracy. Chapters 2 and 3 describe how large-scale information networks—from ancient kingdoms to present-day states—have relied on both mythmakers and bureaucrats. The stories of the Bible, for example, were essential for the Christian Church, but there would have been no Bible if church bureaucrats hadn’t curated, edited, and disseminated these stories. A difficult dilemma for every human network is that mythmakers and bureaucrats tend to pull in different directions. Institutions and societies are often defined by the balance they manage to find between the conflicting needs of their mythmakers and their bureaucrats. The Christian Church itself split into rival churches, like the Catholic and Protestant churches, which struck different balances between mythology and bureaucracy.

    Chapter 4 then focuses on the problem of erroneous information and on the benefits and drawbacks of maintaining self-correcting mechanisms, such as independent courts or peer-reviewed journals. The chapter contrasts institutions that relied on weak self-correcting mechanisms, like the Catholic Church, with institutions that developed strong self-correcting mechanisms, like scientific disciplines. Weak self-correcting mechanisms sometimes result in historical calamities like the early modern European witch hunts, while strong self-correcting mechanisms sometimes destabilize the network from within. Judged in terms of longevity, spread, and power, the Catholic Church has been perhaps the most successful institution in human history, despite—or perhaps because of—the relative weakness of its self-correcting mechanisms.

    After part 1 surveys the roles of mythology and bureaucracy, and the contrast between strong and weak self-correcting mechanisms, chapter 5 concludes the historical discussion by focusing on another contrast—between distributed and centralized information networks. Democratic systems allow information to flow freely along many independent channels, whereas totalitarian systems strive to concentrate information in one hub. Each choice has both advantages and shortcomings. Understanding political systems like the United States and the U.S.S.R. in terms of information flows can explain much about their differing trajectories.

    This historical part of the book is crucial for understanding present-day developments and future scenarios. The rise of AI is arguably the biggest information revolution in history. But we cannot understand it unless we compare it with its predecessors. History isn’t the study of the past; it is the study of change. History teaches us what remains the same, what changes, and how things change. This is as relevant to information revolutions as to every other kind of historical transformation. Thus, understanding the process through which the allegedly infallible Bible was canonized provides valuable insight about present-day claims for AI infallibility. Similarly, studying the early modern witch hunts and Stalin’s collectivization offers stark warnings about what might go wrong as we give AIs greater control over twenty-first-century societies. A deep knowledge of history is also vital to understand what is new about AI, how it is fundamentally different from printing presses and radio sets, and in what specific ways future AI dictatorship could be very unlike anything we have seen before.

    The book doesn’t argue that studying the past enables us to predict the future. As emphasized repeatedly in the following pages, history is not deterministic, and the future will be shaped by the choices we all make in coming years. The whole point of writing this book is that by making informed choices, we can prevent the worst outcomes. If we cannot change the future, why waste time discussing it?

    Building upon the historical survey in part 1, the book’s second part—“The Inorganic Network”—examines the new information network we are creating today, focusing on the political implications of the rise of AI. Chapters 6–8 discuss recent examples from throughout the world—such as the role of social media algorithms in instigating ethnic violence in Myanmar in 2016–17—to explain in what ways AI is different from all previous information technologies. Examples are taken mostly from the 2010s rather than the 2020s, because we have gained a modicum of historical perspective on events of the 2010s.

    Part 2 argues that we are creating an entirely new kind of information network, without pausing to reckon with its implications. It emphasizes the shift from organic to inorganic information networks. The Roman Empire, the Catholic Church, and the U.S.S.R. all relied on carbon-based brains to process information and make decisions. The silicon-based computers that dominate the new information network function in radically different ways. For better or worse, silicon chips are free from many of the limitations that organic biochemistry imposes on carbon neurons. Silicon chips can create spies that never sleep, financiers that never forget, and despots that never die. How will this change society, economics, and politics?

    The third and final part of the book—“Computer Politics”—examines how different kinds of societies might deal with the threats and promises of the inorganic information network. Will carbon-based life-forms like us have a chance of understanding and controlling the new information network? As noted above, history isn’t deterministic, and for at least a few more years we Sapiens still have the power to shape our future.

    Accordingly, chapter 9 explores how democracies might deal with the inorganic network. How, for example, can flesh-and-blood politicians make financial decisions if the financial system is increasingly controlled by AI and the very meaning of money comes to depend on inscrutable algorithms? How can democracies maintain a public conversation about anything—be it finance or gender—if we can no longer know whether we are talking with another human or with a chatbot masquerading as a human?

    Chapter 10 explores the potential impact of the inorganic network on totalitarianism. While dictators would be happy to get rid of all public conversations, they have their own fears of AI. Autocracies are based on terrorizing and censoring their own agents. But how can a human dictator terrorize an AI, censor its unfathomable processes, or prevent it from seizing power to itself?

    Finally, chapter 11 explores how the new information network could influence the balance of power between democratic and totalitarian societies on the global level. Will AI tilt the balance decisively in favor of one camp? Will the world split into hostile blocs whose rivalry makes all of us easy prey for an out-of-control AI? Or can we unite in defense of our common interests?

    But before we explore the past, present, and possible futures of information networks, we need to start with a deceptively simple question. What exactly is information?

    PART I  Human Networks

    CHAPTER 1 What Is Information?

    It is always tricky to define fundamental concepts. Since they are the basis for everything that follows, they themselves seem to lack any basis of their own. Physicists have a hard time defining matter and energy, biologists have a hard time defining life, and philosophers have a hard time defining reality.

    Information is increasingly seen by many philosophers and biologists, and even by some physicists, as the most basic building block of reality, more elementary than matter and energy.1 No wonder that there are many disputes about how to define information, and how it is related to the evolution of life or to basic ideas in physics such as entropy, the laws of thermodynamics, and the quantum uncertainty principle.2 This book will make no attempt to resolve—or even explain—these disputes, nor will it offer a universal definition of information applicable to physics, biology, and all other fields of knowledge. Since it is a work of history, which studies the past and future development of human societies, it will focus on the definition and role of information in history.

    In everyday usage, information is associated with human-made symbols like spoken or written words. Consider, for example, the story of Cher Ami and the Lost Battalion. In October 1918, when the American Expeditionary Forces was fighting to liberate northern France from the Germans, a battalion of more than five hundred American soldiers was trapped behind enemy lines. American artillery, which was trying to provide them with cover fire, misidentified their location and dropped the barrage directly on them. The battalion’s commander, Major Charles Whittlesey, urgently needed to inform headquarters of his true location, but no runner could break through the German line. According to several accounts, as a last resort Whittlesey turned to Cher Ami, an army carrier pigeon. On a tiny piece of paper, Whittlesey wrote, “We are along the road paralell [sic] 276.4. Our artillery is dropping a barrage directly on us. For heaven’s sake stop it.” The paper was inserted into a canister on Cher Ami’s right leg, and the bird was released into the air. One of the battalion’s soldiers, Private John Nell, recalled years later, “We knew without a doubt this was our last chance. If that one lonely, scared pigeon failed to find its loft, our fate was sealed.”

    Witnesses later described how Cher Ami flew into heavy German fire. A shell exploded directly below the bird, killing five men and severely injuring the pigeon. A splinter tore through Cher Ami’s chest, and his right leg was left hanging by a tendon. But he got through. The wounded pigeon flew the forty kilometers to division headquarters in about forty-five minutes, with the canister containing the crucial message attached to the remnant of his right leg. Though there is some controversy about the exact details, it is clear that the American artillery adjusted its barrage, and an American counterattack rescued the Lost Battalion. Cher Ami was tended by army medics, sent to the United States as a hero, and became the subject of numerous articles, short stories, children’s books, poems, and even movies. The pigeon had no idea what information he was conveying, but the symbols inked on the piece of paper he carried helped save hundreds of men from death and captivity.3

    Information, however, does not have to consist of human-made symbols. According to the biblical myth of the Flood, Noah learned that the water had finally receded because the pigeon he sent out from the ark returned with an olive branch in her mouth. Then God set a rainbow in the clouds as a heavenly record of his promise never to flood the earth again. Pigeons, olive branches, and rainbows have since become iconic symbols of peace and tolerance. Objects that are even more remote than rainbows can also be information. For astronomers the shape and movement of galaxies constitute crucial information about the history of the universe. For navigators the North Star indicates which way is north. For astrologers the stars are a cosmic script, conveying information about the future of individual humans and entire societies.

    Of course, defining something as “information” is a matter of perspective. An astronomer or astrologer might view the Libra constellation as “information,” but these distant stars are far more than just a notice board for human observers. There might be an alien civilization up there, totally oblivious to the information we glean from their home and to the stories we tell about it. Similarly, a piece of paper marked with ink splotches can be crucial information for an army unit, or dinner for a family of termites. Any object can be information—or not. This makes it difficult to define what information is.

    The ambivalence of information has played an important role in the annals of military espionage, when spies needed to communicate information surreptitiously. During World War I, northern France was not the only major battleground. From 1915 to 1918 the British and Ottoman Empires fought for control of the Middle East. After repulsing an Ottoman attack on the Sinai Peninsula and the Suez Canal, the British in turn invaded the Ottoman Empire, but were held at bay until October 1917 by a fortified Ottoman line stretching from Beersheba to Gaza. British attempts to break through were repulsed at the First Battle of Gaza (March 26, 1917) and the Second Battle of Gaza (April 17–19, 1917). Meanwhile, pro-British Jews living in Palestine set up a spy network code-named NILI to inform the British about Ottoman troop movements. One method they developed to communicate with their British operators involved window shutters. Sarah Aaronsohn, a NILI commander, had a house overlooking the Mediterranean. She signaled British ships by closing or opening a particular shutter, according to a predetermined code. Numerous people, including Ottoman soldiers, could obviously see the shutter, but nobody other than NILI agents and their British operators understood it was vital military information.4 So, when is a shutter just a shutter, and when is it information?

    The Ottomans eventually caught the NILI spy ring due in part to a strange mishap. In addition to shutters, NILI used carrier pigeons to convey coded messages. On September 3, 1917, one of the pigeons diverted off course and landed in—of all places—the house of an Ottoman officer. The officer found the coded message but couldn’t decipher it. Nevertheless, the pigeon itself was crucial information. Its existence indicated to the Ottomans that a spy ring was operating under their noses. As Marshall McLuhan might have put it, the pigeon was the message. NILI agents learned about the capture of the pigeon and immediately killed and buried all the remaining birds they had, because the mere possession of carrier pigeons was now incriminating information. But the massacre of the pigeons did not save NILI. Within a month the spy network was uncovered, several of its members were executed, and Sarah Aaronsohn committed suicide to avoid divulging NILI’s secrets under torture.5 When is a pigeon just a pigeon, and when is it information?

    Clearly, then, information cannot be defined as specific types of material objects. Any object—a star, a shutter, a pigeon—can be information in the right context. So exactly what context defines such objects as “information”? The naive view of information argues that objects are defined as information in the context of truth seeking. Something is information if people use it to try to discover the truth. This view links the concept of information with the concept of truth and assumes that the main role of information is to represent reality. There is a reality “out there,” and information is something that represents that reality and that we can therefore use to learn about reality. For example, the information NILI provided the British was meant to represent the reality of Ottoman troop movements. If the Ottomans massed ten thousand soldiers in Gaza—the centerpiece of their defenses—a piece of paper with symbols representing “ten thousand” and “Gaza” was important information that could help the British win the battle. If, on the other hand, there were actually twenty thousand Ottoman troops in Gaza, that piece of paper did not represent reality accurately, and could lead the British to make a disastrous military mistake.

    Put another way, the naive view argues that information is an attempt to represent reality, and when this attempt succeeds, we call it truth. While this book takes many issues with the naive view, it agrees that truth is an accurate representation of reality. But this book also holds that most information is not an attempt to represent reality and that what defines information is something entirely different. Most information in human society, and indeed in other biological and physical systems, does not represent anything.

    I want to spend a little longer on this complex and crucial argument, because it constitutes the theoretical basis of the book.

    WHAT IS TRUTH?

    Throughout this book, “truth” is understood as something that accurately represents certain aspects of reality. Underlying the notion of truth is the premise that there exists one universal reality. Anything that has ever existed or will ever exist in the universe—from the North Star, to the NILI pigeon, to web pages on astrology—is part of this single reality. This is why the search for truth is a universal project. While different people, nations, or cultures may have competing beliefs and feelings, they cannot possess contradictory truths, because they all share a universal reality. Anyone who rejects universalism rejects truth.

    Truth and reality are nevertheless different things, because no matter how truthful an account is, it can never represent reality in all its aspects. If a NILI agent wrote that there are ten thousand Ottoman soldiers in Gaza, and there were indeed ten thousand soldiers there, this accurately pointed to a certain aspect of reality, but it neglected many other aspects. The very act of counting entities—whether apples, oranges, or soldiers—necessarily focuses attention on the similarities between these entities while discounting differences.6 For example, saying only that there were ten thousand Ottoman soldiers in Gaza neglected to specify whether some were experienced veterans and others were green recruits. If there were a thousand recruits and nine thousand old hands, the military reality was quite different from if there were nine thousand rookies and a thousand battle-hardened veterans.

    There were many other differences between the soldiers. Some were healthy; others were sick. Some Ottoman troops were ethnically Turkish, while others were Arabs, Kurds, or Jews. Some were brave, others cowardly. Indeed, each soldier was a unique human being, with different parents and friends and individual fears and hopes. World War I poets like Wilfred Owen famously attempted to represent these latter aspects of military reality, which mere statistics never conveyed accurately. Does this imply that writing “ten thousand soldiers” is always a misrepresentation of reality, and that to describe the military situation around Gaza in 1917, we must specify the unique history and personality of every soldier?

    Another problem with any attempt to represent reality is that reality contains many viewpoints. For example, present-day Israelis, Palestinians, Turks, and Britons have different perspectives on the British invasion of the Ottoman Empire, the NILI underground, and the activities of Sarah Aaronsohn. That does not mean, of course, that there are several entirely separate realities, or that there are no historical facts. There is just one reality, but it is complex.

    Reality includes an objective level with objective facts that don’t depend on people’s beliefs; for example, it is an objective fact that Sarah Aaronsohn died on October 9, 1917, from self-inflicted gunshot wounds. Saying that “Sarah Aaronsohn died in an airplane crash on May 15, 1919,” is an error.

    Reality also includes a subjective level with subjective facts like the beliefs and feelings of various people, but in this case too facts can be separated from errors. For example, it is a fact that Israelis tend to regard Aaronsohn as a patriotic hero. Three weeks after her suicide, the information NILI supplied helped the British finally break the Ottoman line at the Battle of Beersheba (October 31, 1917) and the Third Battle of Gaza (November 1–2, 1917). On November 2, 1917, the British foreign secretary, Arthur Balfour, issued the Balfour Declaration, announcing that the British government “view with favor the establishment in Palestine of a national home for the Jewish people.” Israelis credit this in part to NILI and Sarah Aaronsohn, whom they admire for her sacrifice. It is another fact that Palestinians evaluate things very differently. Rather than admiring Aaronsohn, they regard her—if they’ve heard about her at all—as an imperialist agent. Even though we are dealing here with subjective views and feelings, we can still distinguish truth from falsehood. For views and feelings—just like stars and pigeons—are a part of the universal reality. Saying that “Sarah Aaronsohn is admired by everyone for her role in defeating the Ottoman Empire” is an error, not in line with reality.

    Nationality is not the only thing that affects people’s viewpoint. Israeli men and Israeli women may see Aaronsohn differently, and so do left-wingers and right-wingers, or Orthodox and secular Jews. Since suicide is forbidden by Jewish religious law, Orthodox Jews have difficulty seeing Aaronsohn’s suicide as a heroic act (she was actually denied burial in the hallowed ground of a Jewish cemetery). Ultimately, each individual has a different perspective on the world, shaped by the intersection of different personalities and life histories. Does this imply that when we wish to describe reality, we must always list all the different viewpoints it contains and that a truthful biography of Sarah Aaronsohn, for example, must specify how every single Israeli and Palestinian has felt about her?

    Taken to extremes, such a pursuit of accuracy may lead us to try to represent the world on a one-to-one scale, as in the famous Jorge Luis Borges story “On Exactitude in Science” (1946). In this story Borges tells of a fictitious ancient empire that became obsessed with producing ever more accurate maps of its territory, until eventually it produced a map with a one-to-one scale. The entire empire was covered with a map of the empire. So many resources were wasted on this ambitious representational project that the empire collapsed. Then the map too began to disintegrate, and Borges tells us that only “in the western Deserts, tattered fragments of the map are still to be found, sheltering an occasional beast or beggar.”7 A one-to-one map may look like the ultimate representation of reality, but tellingly it is no longer a representation at all; it is the reality.

    The point is that even the most truthful accounts of reality can never represent it in full. There are always some aspects of reality that are neglected or distorted in every representation. Truth, then, isn’t a one-to-one representation of reality. Rather, truth is something that brings our attention to certain aspects of reality while inevitably ignoring other aspects. No account of reality is 100 percent accurate, but some accounts are nevertheless more truthful than others.

    WHAT INFORMATION DOES

    As noted above, the naive view sees information as an attempt to represent reality. It is aware that some information doesn’t represent reality well, but it dismisses this as unfortunate cases of “misinformation” or “disinformation.” Misinformation is an honest mistake, occurring when someone tries to represent reality but gets it wrong. Disinformation is a deliberate lie, occurring when someone consciously intends to distort our view of reality.

    The naive view further believes that the solution to the problems caused by misinformation and disinformation is more information. This idea, sometimes called the counterspeech doctrine, is associated with the U.S. Supreme Court justice Louis D. Brandeis, who wrote in Whitney v. California (1927) that the remedy to false speech is more speech and that in the long term free discussion is bound to expose falsehoods and fallacies. If all information is an attempt to represent reality, then as the amount of information in the world grows, we can expect the flood of information to expose the occasional lies and errors and to ultimately provide us with a more truthful understanding of the world.

    On this crucial point, this book strongly disagrees with the naive view. There certainly are instances of information that attempt to represent reality and succeed in doing so, but this is not the defining characteristic of information. A few pages ago I referred to stars as information and casually mentioned astrologers alongside astronomers. Adherents of the naive view of information probably squirmed in their chairs when they read it. According to the naive view, astronomers derive “real information” from the stars, while the information that astrologers imagine to read in constellations is either “misinformation” or “disinformation.” If only people were given more information about the universe, surely they would abandon astrology altogether. But the fact is that for thousands of years astrology has had a huge impact on history, and today millions of people still check their star signs before making the most important decisions of their lives, like what to study and whom to marry. As of 2021, the global astrology market was valued at $12.8 billion.8

    No matter what we think about the accuracy of astrological information, we should acknowledge its important role in history. It has connected lovers, and even entire empires. Roman emperors routinely consulted astrologers before making decisions. Indeed, astrology was held in such high esteem that casting the horoscope of a reigning emperor was a capital offense. Presumably, anyone casting such a horoscope could foretell when and how the emperor would die.9 Rulers in some countries still take astrology very seriously. In 2005 the junta of Myanmar allegedly moved the country’s capital from Yangon to Naypyidaw based on astrological advice.10 A theory of information that cannot account for the historical significance of astrology is clearly inadequate.

    What the example of astrology illustrates is that errors, lies, fantasies, and fictions are information, too. Contrary to what the naive view of information says, information has no essential link to truth, and its role in history isn’t to represent a preexisting reality. Rather, what information does is to create new realities by tying together disparate things—whether couples or empires. Its defining feature is connection rather than representation, and information is whatever connects different points into a network. Information doesn’t necessarily inform us about things. Rather, it puts things in formation. Horoscopes put lovers in astrological formations, propaganda broadcasts put voters in political formations, and marching songs put soldiers in military formations.

    As a paradigmatic case, consider music. Most symphonies, melodies, and tunes don’t represent anything, which is why it makes no sense to ask whether they are true or false. Over the years people have created a lot of bad music, but not fake music. Without representing anything, music nevertheless does a remarkable job in connecting large numbers of people and synchronizing their emotions and movements. Music can make soldiers march in formation, clubbers sway together, church congregations clap in rhythm, and sports fans chant in unison.11

    The role of information in connecting things is of course not unique to human history. A case can be made that this is the chief role of information in biology too.12 Consider DNA, the molecular information that makes organic life possible. Like music, DNA doesn’t represent reality. Though generations of zebras have been fleeing lions, you cannot find in the zebra DNA a string of nucleobases representing “lion” nor another string representing “flight.” Similarly, zebra DNA contains no representation of the sun, wind, rain, or any other external phenomena that zebras encounter during their lives. Nor does DNA represent internal phenomena like body organs or emotions. There is no combination of nucleobases that represents a heart, or fear.

    Instead of trying to represent preexisting things, DNA helps to produce entirely new things. For instance, various strings of DNA nucleobases initiate cellular chemical processes that result in the production of adrenaline. Adrenaline too doesn’t represent reality in any way. Rather, adrenaline circulates through the body, initiating additional chemical processes that increase the heart rate and direct more blood to the muscles.13 DNA and adrenaline thereby help to connect cells in the heart, cells in the leg muscles, and trillions of other cells throughout the body to form a functioning network that can do remarkable things, like run away from a lion.

    If DNA represented reality, we could have asked questions like “Does zebra DNA represent reality more accurately than lion DNA?” or “Is the DNA of one zebra telling the truth about the world, while another zebra is misled by her fake DNA?” These, of course, are nonsensical questions. We might evaluate DNA by the fitness of the organism it produces, but not by truthfulness. While it is common to talk about DNA “errors,” this refers only to mutations in the process of copying DNA—not to a failure to represent reality accurately. A genetic mutation that inhibits the production of adrenaline reduces the fitness of a particular zebra, ultimately causing the network of cells to disintegrate, as when the zebra is killed by a lion and its trillions of cells lose connection with one another and decompose. But this kind of network failure means disintegration, not disinformation. That’s true of countries, political parties, and news networks as much as of zebras.

    Crucially, errors in the copying of DNA don’t always reduce fitness. Once in a blue moon, they increase fitness. Without such mutations, there would be no process of evolution. All life-forms exist thanks to genetic “errors.” The wonders of evolution are possible because DNA doesn’t represent any preexisting realities; it creates new realities.

    Let us pause to digest the implications of this. Information is something that creates new realities by connecting different points into a network. This still includes the view of information as representation. Sometimes, a truthful representation of reality can connect humans, as when 600 million people sat glued to their television sets in July 1969, watching Neil Armstrong and Buzz Aldrin walking on the moon.14 The images on the screens accurately represented what was happening 384,000 kilometers away, and seeing them gave rise to feelings of awe, pride, and human brotherliness that helped connect people.

    However, such fraternal feelings can be produced in other ways, too. The emphasis on connection leaves ample room for other types of information that do not represent reality well. Sometimes erroneous representations of reality might also serve as a social nexus, as when millions of followers of a conspiracy theory watch a YouTube video claiming that the moon landing never happened. These images convey an erroneous representation of reality, but they might nevertheless give rise to feelings of anger against the establishment or pride in one’s own wisdom that help create a cohesive new group.

    Sometimes networks can be connected without any attempt to represent reality, neither accurate nor erroneous, as when genetic information connects trillions of cells or when a stirring musical piece connects thousands of humans.

    As a final example, consider Mark Zuckerberg’s vision of the Metaverse. The Metaverse is a virtual universe made entirely of information. Unlike the one-to-one map built by Jorge Luis Borges’s imaginary empire, the Metaverse isn’t an attempt to represent our world, but rather an attempt to augment or even replace our world. It doesn’t offer us a digital replica of Buenos Aires or Salt Lake City; it invites people to build new virtual communities with novel landscapes and rules. As of 2024 the Metaverse seems like an overblown pipe dream, but within a couple of decades billions of people might migrate to live much of their lives in an augmented virtual reality, holding there most of their social and professional activities. People might come to build relationships, join movements, hold jobs, and experience emotional ups and downs in environments made of bits rather than atoms. Perhaps only in some remote deserts, tattered fragments of the old reality could still be found, sheltering an occasional beast or beggar.

    INFORMATION IN HUMAN HISTORY

    Viewing information as a social nexus helps us understand many aspects of human history that confound the naive view of information as representation. It explains the historical success not only of astrology but of much more important things, like the Bible. While some may dismiss astrology as a quaint sideshow in human history, nobody can deny the central role the Bible has played. If the main job of information had been to represent reality accurately, it would have been hard to explain why the Bible became one of the most influential texts in history.

    The Bible makes many serious errors in its description of both human affairs and natural processes. The book of Genesis claims that all human groups—including, for example, the San people of the Kalahari Desert and the Aborigines of Australia—descend from a single family that lived in the Middle East about four thousand years ago.15 According to Genesis, after the Flood all Noah’s descendants lived together in Mesopotamia, but following the destruction of the Tower of Babel they spread to the four corners of the earth and became the ancestors of all living humans. In fact, the ancestors of the San people lived in Africa for hundreds of thousands of years without ever leaving the continent, and the ancestors of the Aborigines settled Australia more than fifty thousand years ago.16 Both genetic and archaeological evidence rule out the idea that the entire ancient populations of South Africa and Australia were annihilated about four thousand years ago by a flood and that these areas were subsequently repopulated by Middle Eastern immigrants.

    An even graver distortion involves our understanding of infectious diseases. The Bible routinely depicts epidemics as divine punishment for human sins17 and claims they can be stopped or prevented by prayers and religious rituals.18 However, epidemics are of course caused by pathogens and can be stopped or prevented by following hygiene rules and using medicines and vaccines. This is today widely accepted even by religious leaders like the pope, who during the COVID-19 pandemic advised people to self-isolate, instead of congregating to pray together.19

    Yet while the Bible has done a poor job in representing the reality of human origins, migrations, and epidemics, it has nevertheless been very effective in connecting billions of people and creating the Jewish and Christian religions. Like DNA initiating chemical processes that bind billions of cells into organic networks, the Bible initiated social processes that bonded billions of people into religious networks. And just as a network of cells can do things that single cells cannot, so a religious network can do things that individual humans cannot, like building temples, maintaining legal systems, celebrating holidays, and waging holy wars.

    To conclude, information sometimes represents reality, and sometimes doesn’t. But it always connects. This is its fundamental characteristic. Therefore, when examining the role of information in history, although it sometimes makes sense to ask “How well does it represent reality? Is it true or false?” often the more crucial questions are “How well does it connect people? What new network does it create?”

    It should be emphasized that rejecting the naive view of information as representation does not force us to reject the notion of truth, nor does it force us to embrace the populist view of information as a weapon. While information always connects, some types of information—from scientific books to political speeches—may strive to connect people by accurately representing certain aspects of reality. But this requires a special effort, which most information does not make. This is why the naive view is wrong to believe that creating more powerful information technology will necessarily result in a more truthful understanding of the world. If no additional steps are taken to tilt the balance in favor of truth, an increase in the amount and speed of information is likely to swamp the relatively rare and expensive truthful accounts by much more common and cheap types of information.

    When we look at the history of information from the Stone Age to the Silicon Age, we therefore see a constant rise in connectivity, without a concomitant rise in truthfulness or wisdom. Contrary to what the naive view believes, Homo sapiens didn’t conquer the world because we are talented at turning information into an accurate map of reality. Rather, the secret of our success is that we are talented at using information to connect lots of individuals. Unfortunately, this ability often goes hand in hand with believing in lies, errors, and fantasies. This is why even technologically advanced societies like Nazi Germany and the Soviet Union have been prone to hold delusional ideas, without their delusions necessarily weakening them. Indeed, the mass delusions of Nazi and Stalinist ideologies about things like race and class actually helped them make tens of millions of people march together in lockstep.

    In chapters 2–5 we’ll take a closer look at the history of information networks. We’ll discuss how, over tens of thousands of years, humans invented various information technologies that greatly improved connectivity and cooperation without necessarily resulting in a more truthful representation of the world. These information technologies—invented centuries and millennia ago—still shape our world even in the era of the internet and AI. The first information technology we’ll examine, which is also the first information technology developed by humans, is the story.

    CHAPTER 2 Stories: Unlimited Connections

    We Sapiens rule the world not because we are so wise but because we are the only animals that can cooperate flexibly in large numbers. I have explored this idea in my previous books Sapiens and Homo Deus, but a brief recap is inescapable.

    The Sapiens’ ability to cooperate flexibly in large numbers has precursors among other animals. Some social mammals like chimpanzees display significant flexibility in the way they cooperate, while some social insects like ants cooperate in very large numbers. But neither chimps nor ants establish empires, religions, or trade networks. Sapiens are capable of doing such things because we are far more flexible than chimps and can simultaneously cooperate in even larger numbers than ants. In fact, there is no upper limit to the number of Sapiens who can cooperate with one another. The Catholic Church has about 1.4 billion members. China has a population of about 1.4 billion. The global trade network connects about 8 billion Sapiens.

    This is surprising given that humans cannot form long-term intimate bonds with more than a few hundred individuals.1 It takes many years and common experiences to get to know someone’s unique character and history and to cultivate ties of mutual trust and affection. Consequently, if Sapiens networks were connected only by personal human-to-human bonds, our networks would have remained very small. This is the situation among our chimpanzee cousins, for example. Their typical community numbers 20–60 members, and on rare occasions the number might increase to 150–200.2 This appears to have been the situation also among ancient human species like Neanderthals and archaic Homo sapiens. Each of their bands numbered a few dozen individuals, and different bands rarely cooperated.3

    About seventy thousand years ago, Homo sapiens bands began displaying an unprecedented capacity to cooperate with one another, as evidenced by the emergence of inter-band trade and artistic traditions and by the rapid spread of our species from our African homeland to the entire globe. What enabled different bands to cooperate is that evolutionary changes in brain structure and linguistic abilities apparently gave Sapiens the aptitude to tell and believe fictional stories and to be deeply moved by them. Instead of building a network from human-to-human chains alone—as the Neanderthals, for example, did—stories provided Homo sapiens with a new type of chain: human-to-story chains. In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story. And the same story can be familiar to billions of individuals. A story can thereby serve like a central connector, with an unlimited number of outlets into which an unlimited number of people can plug. For example, the 1.4 billion members of the Catholic Church are connected by the Bible and other key Christian stories; the 1.4 billion citizens of China are connected by the stories of communist ideology and Chinese nationalism; and the 8 billion members of the global trade network are connected by stories about currencies, corporations, and brands.

    Even charismatic leaders who have millions of followers are an example of this rule rather than an exception. It may seem that in the case of ancient Chinese emperors, medieval Catholic popes, or modern corporate titans it has been a single flesh-and-blood human—rather than a story—that has served as a nexus linking millions of followers. But, of course, in all these cases almost none of the followers has had a personal bond with the leader. Instead, what they have connected to has been a carefully crafted story about the leader, and it is in this story that they have put their faith.

    Joseph Stalin, who stood at the nexus of one of the biggest personality cults in history, understood this well. When his troublesome son Vasily exploited his famous name to frighten and awe people, Stalin berated him. “But I’m a Stalin too,” protested Vasily. “No, you’re not,” replied Stalin. “You’re not Stalin and I’m not Stalin. Stalin is Soviet power. Stalin is what he is in the newspapers and the portraits, not you, no—not even me!”4

    Present-day influencers and celebrities would concur. Some have hundreds of millions of online followers, with whom they communicate daily through social media. But there is very little authentic personal connection there. The social media accounts are usually run by a team of experts, and every image and word is professionally crafted and curated to manufacture what is nowadays called a brand.5

    A “brand” is a specific type of story. To brand a product means to tell a story about that product, which may have little to do with the product’s actual qualities but which consumers nevertheless learn to associate with the product. For example, over the decades the Coca-Cola corporation has invested tens of billions of dollars in advertisements that tell and retell the story of the Coca-Cola drink.6 People have seen and heard the story so often that many have come to associate a certain concoction of flavored water with fun, happiness, and youth (as opposed to tooth decay, obesity, and plastic waste). That’s branding.7

    As Stalin knew, it is possible to brand not only products but also individuals. A corrupt billionaire can be branded as the champion of the poor; a bungling imbecile can be branded as an infallible genius; and a guru who sexually abuses his followers can be branded as a chaste saint. People think they connect to the person, but in fact they connect to the story told about the person, and there is often a huge gulf between the two.

    Even the story of Cher Ami, the heroic pigeon, was partly the product of a branding campaign aimed at enhancing the public image of the U.S. Army’s Pigeon Service. A 2021 revisionist study by the historian Frank Blazich found that though there is no doubt Cher Ami sustained severe injuries while transporting a message somewhere in Northern France, several key features of the story are doubtful or inaccurate. First, relying on contemporary military records, Blazich demonstrated that headquarters learned about the exact location of the Lost Battalion about twenty minutes prior to the pigeon’s arrival. It was not the pigeon that put a stop to the barrage of friendly fire decimating the Lost Battalion. Even more crucially, there is simply no proof that the pigeon carrying Major Whittlesey’s message was Cher Ami. It might well have been another bird, while Cher Ami might have sustained his wounds a couple of weeks later, during an altogether different battle.

    According to Blazich, the doubts and inconsistencies in Cher Ami’s story were overshadowed by its propaganda value to the army and its appeal to the public. Over the years the story was retold so many times that facts became hopelessly enmeshed with fiction. Journalists, poets, and filmmakers added fanciful details to it, for example that the pigeon lost an eye as well as a leg and that it was awarded the Distinguished Service Cross. In the 1920s and 1930s Cher Ami became the most famous bird in the world. When he died, his carefully preserved corpse was placed on display at the Smithsonian Museum, where it became a pilgrimage site for American patriots and World War I veterans. As the story grew in the telling, it took over even the recollections of survivors of the Lost Battalion, who came to accept the popular narrative at face value. Blazich recounts the case of Sherman Eager, an officer in the Lost Battalion, who decades after the war brought his children to see Cher Ami at the Smithsonian and told them, “You all owe your lives to that pigeon.” Whatever the facts may be, the story of the self-sacrificing winged saviour proved irresistible.8

    As a much more extreme example, consider Jesus. Two millennia of storytelling have encased Jesus within such a thick cocoon of stories that it is impossible to recover the historical person. Indeed, for millions of devout Christians merely raising the possibility that the real person was different from the story is blasphemy. As far as we can tell, the real Jesus was a typical Jewish preacher who built a small following by giving sermons and healing the sick. After his death, however, Jesus became the subject of one of the most remarkable branding campaigns in history. This little-known provincial guru, who during his short career gathered just a handful of disciples and who was executed as a common criminal, was rebranded after death as the incarnation of the cosmic god who created the universe.9 Though no contemporary portrait of Jesus has survived, and though the Bible never describes what he looked like, imaginary renderings of him have become some of the most recognizable icons in the world.

    It should be stressed that the creation of the Jesus story was not a deliberate lie. People like Saint Paul, Tertullian, Saint Augustine, and Martin Luther didn’t set out to deceive anyone. They projected their deeply felt hopes and feelings on the figure of Jesus, in the same way that all of us routinely project our feelings on our parents, lovers, and leaders. While branding campaigns are occasionally a cynical exercise of disinformation, most of the really big stories of history have been the result of emotional projections and wishful thinking. True believers play a key role in the rise of every major religion and ideology, and the Jesus story changed history because it gained an immense number of true believers.

    By gaining all those believers, the story of Jesus managed to have a much bigger impact on history than the person of Jesus. The person of Jesus walked from village to village on his two feet, talking with people, eating and drinking with them, placing his hands on their sick bodies. He made a difference to the lives of perhaps several thousand individuals, all living in one minor Roman province. In contrast, the story of Jesus flew around the whole world, first on the wings of gossip, anecdote, and rumor; then via parchment texts, paintings, and statues; and eventually as blockbuster movies and internet memes. Billions of people not only heard the Jesus story but came to believe in it too, which created one of the biggest and most influential networks in the world.

    Stories like the one about Jesus can be seen as a way of stretching preexisting biological bonds. Family is the strongest bond known to humans. One way that stories build trust between strangers is by making these strangers reimagine each other as family. The Jesus story presented Jesus as the heavenly father of all humans, encouraged hundreds of millions of Christians to see each other as brothers and sisters, and created a shared pool of family memories. While most Christians were not physically present at the Last Supper, they have heard the story so many times, and they have seen so many images of the event, that they “remember” it more vividly than they remember most of the family dinners in which they actually participated.

    Interestingly, Jesus’s last supper was the Jewish Passover meal, which according to the Gospel accounts Jesus shared with his disciples just before his crucifixion. In Jewish tradition, the whole purpose of the Passover meal is to create and reenact artificial memories. Every year Jewish families sit together on the eve of Passover to eat and reminisce about “their” exodus from Egypt. They are supposed not only to tell the story of how the descendants of Jacob escaped slavery in Egypt but to remember how they personally suffered at the hands of the Egyptians, how they personally saw the sea part, and how they personally received the Ten Commandments from Jehovah at Mount Sinai.

    The Jewish tradition doesn’t mince words here. The text of the Passover ritual (the Haggadah) insists that “in every generation a person is obligated to regard himself as if he personally had come out of Egypt.” If anyone objects that this is a fiction, and that they didn’t personally come out of Egypt, Jewish sages have a ready answer. They claim that the souls of all Jews throughout history were created by Jehovah long before they were born and all these souls were present at Mount Sinai.10 As Salvador Litvak, a Jewish social media influencer, explained to his online followers in 2018, “You and I were there together.… When we fulfill the obligation to see ourselves as if we personally left Egypt, it’s not a metaphor. We don’t imagine the Exodus, we remember it.”11

    So every year, in the most important celebration of the Jewish calendar, millions of Jews put on a show that they remember things that they didn’t witness and that in all probability never happened at all. As numerous modern studies show, repeatedly retelling a fake memory eventually causes the person to adopt it as a genuine recollection.12 When two Jews encounter each other for the first time, they can immediately feel that they both belong to the same family, that they were together slaves in Egypt, and that they were together at Mount Sinai. That’s a powerful bond that sustained the Jewish network over many centuries and continents.

    INTERSUBJECTIVE ENTITIES

    The Jewish Passover story builds a large network by taking existing biological kin bonds and stretching them way beyond their biological limits. But there is an even more revolutionary way for stories to build networks. Like DNA, stories can create entirely new entities. Indeed, stories can even create an entirely new level of reality. As far as we know, prior to the emergence of stories the universe contained just two levels of reality. Stories added a third.

    The two levels of reality that preceded storytelling are objective reality and subjective reality. Objective reality consists of things like stones, mountains, and asteroids—things that exist whether we are aware of them or not. An asteroid hurtling toward planet Earth, for example, exists even if nobody knows it’s out there. Then there is subjective reality: things like pain, pleasure, and love that aren’t “out there” but rather “in here.” Subjective things exist in our awareness of them. An unfelt ache is an oxymoron.

    But some stories are able to create a third level of reality: intersubjective reality. Whereas subjective things like pain exist in a single mind, intersubjective things like laws, gods, nations, corporations, and currencies exist in the nexus between large numbers of minds. More specifically, they exist in the stories people tell one another. The information humans exchange about intersubjective things doesn’t represent anything that had already existed prior to the exchange of information; rather, the exchange of information creates these things.

    When I tell you that I am in pain, telling you about it doesn’t create the pain. And if I stop talking about the pain, it doesn’t make the pain go away. Similarly, when I tell you that I saw an asteroid, this doesn’t create the asteroid. The asteroid exists whether people talk about it or not. But when lots of people tell one another stories about laws, gods, or currencies, this is what creates these laws, gods, or currencies. If people stop talking about them, they disappear. Intersubjective things exist in the exchange of information.

    Let’s take a closer look. The calorific value of pizza doesn’t depend on our beliefs. A typical pizza contains between fifteen hundred and twenty-five hundred calories.13 In contrast, the financial value of money—and pizzas—depends entirely on our beliefs. How many pizzas can you purchase for a dollar, or for a bitcoin? In 2010, Laszlo Hanyecz bought two pizzas for 10,000 bitcoins. It was the first known commercial transaction involving bitcoin—and with hindsight, also the most expensive pizza ever. By November 2021, a single bitcoin was valued at more than $69,000, so the bitcoins Hanyecz paid for his two pizzas were worth $690 million, enough to purchase millions of pizzas.14 While the calorific value of pizza is an objective reality that remained the same between 2010 and 2021, the financial value of bitcoin is an intersubjective reality that changed dramatically during the same period, depending on the stories people told and believed about bitcoin.

    Another example. Suppose I ask, “Does the Loch Ness Monster exist?” This is a question about the objective level of reality. Some people believe that dinosaur-like animals really do inhabit Loch Ness. Others dismiss the idea as a fantasy or a hoax. Over the years, many attempts have been made to resolve the disagreement once and for all, using scientific methods such as sonar scans and DNA surveys. If huge animals live in the lake, they should appear on sonar, and they should leave DNA traces. Based on the available evidence, the scientific consensus is that the Loch Ness Monster does not exist. (A DNA survey conducted in 2019 found genetic material from three thousand species, but no monster. At most, Loch Ness may contain some five-kilo eels.15) Many people may nevertheless continue to believe that the Loch Ness Monster exists, but believing it doesn’t change objective reality.

    In contrast to animals, whose existence can be verified or disproved through objective tests, states are intersubjective entities. We normally don’t notice it, because everybody takes the existence of the United States, China, Russia, or Brazil for granted. But there are cases when people disagree about the existence of certain states, and then their intersubjective status emerges. The Israeli-Palestinian conflict, for example, revolves around this matter, because some people and governments refuse to acknowledge the existence of Israel and others refuse to acknowledge the existence of Palestine. As of 2024, the governments of Brazil and China, for example, say that both Israel and Palestine exist; the governments of the United States and Cameroon recognize only Israel’s existence; whereas the governments of Algeria and Iran recognize only Palestine. Other cases range from Kosovo, which as of 2024 is recognized as a state by around half of the 193 UN members,16 to Abkhazia, which almost all governments see as a sovereign territory of Georgia, but which is recognized as a state by Russia, Venezuela, Nicaragua, Nauru, and Syria.17

    Indeed, almost all states pass at least temporarily through a phase during which their existence is contested, when struggling for independence. Did the United States come into existence on July 4, 1776, or only when other states like France and finally the U.K. recognized it? Between the declaration of U.S. independence on July 4, 1776, and the signing of the Treaty of Paris on September 3, 1783, some people like George Washington believed the United States existed, while other people like King George III vehemently rejected this idea.

    Disagreements about the existence of states cannot be resolved by an objective test, such as a DNA survey or a sonar scan. Unlike animals, states are not an objective reality. When we ask whether a particular state exists, we are raising a question about intersubjective reality. If enough people agree that a particular state exists, then it does. It can then do things like sign legally binding treaties with other governments as well as NGOs and private corporations.

    Of all genres of stories, those that create intersubjective realities have been the most crucial for the development of large-scale human networks. Implanting fake family memories is certainly helpful, but no religions or empires managed to survive for long without a strong belief in the existence of a god, a nation, a law code, or a currency. For the formation of the Christian Church, for example, it was important that people recollect what Jesus said at the Last Supper, but the crucial step was making people believe that Jesus was a god rather than just an inspiring rabbi. For the formation of the Jewish religion, it was helpful that Jews “remembered” how they together escaped slavery in Egypt, but the really decisive step was making all Jews adhere to the same religious law code, the Halakha.

    Intersubjective things like laws, gods, and currencies are extremely powerful within a particular information network and utterly meaningless outside it. Suppose a billionaire crashes his private jet on a deserted island and finds himself alone with a suitcase full of banknotes and bonds. When he was in São Paulo or Mumbai, he could use these papers to make people feed him, clothe him, protect him, and build him a private jet. But once he is cut off from other members of our information network, his banknotes and bonds immediately become worthless. He cannot use them to get the island’s monkeys to provide him with food or to build him a raft.

    THE POWER OF STORIES

    Whether through implanting fake memories, forming fictional relationships, or creating intersubjective realities, stories produced large-scale human networks. These networks in turn completely changed the balance of power in the world. Story-based networks made Homo sapiens the most powerful of all animals, giving it a crucial edge not only over lions and mammoths but also over other ancient human species like Neanderthals.

    Neanderthals lived in small isolated bands, and to the best of our knowledge different bands cooperated with one another only rarely and weakly, if at all.18 Stone Age Sapiens too lived in small bands of a few dozen individuals. But following the emergence of storytelling, Sapiens bands no longer lived in isolation. Bands were connected by stories about things like revered ancestors, totem animals, and guardian spirits. Bands that shared stories and intersubjective realities constituted a tribe. Each tribe was a network connecting hundreds or even thousands of individuals.19

    Belonging to a large tribe had an obvious advantage in times of conflict. Five hundred Sapiens could easily defeat fifty Neanderthals.20 But tribal networks had many additional advantages. If we live in an isolated band of fifty people and a severe drought hits our home territory, many of us might starve to death. If we try to migrate elsewhere, we are likely to encounter hostile groups, and we might also find it difficult to forage for food, water, and flint (to make tools) in unfamiliar territory. However, if our band is part of a tribal network, in times of need at least some of us could go live with our distant friends. If our shared tribal identity is strong enough, they would welcome us and teach us about the local dangers and opportunities. A decade or two later, we might reciprocate. The tribal network, then, acted like an insurance policy. It minimized risk by spreading it across a lot more people.21

    Even in quiet times Sapiens could benefit enormously from exchanging information not just with a few dozen members of a small band but with an entire tribal network. If one of the tribe’s bands discovered a better way to make spear points, learned how to heal wounds with some rare medicinal herb, or invented a needle to sew clothes, that knowledge could be quickly passed to the other bands. Even though individually Sapiens might not have been more intelligent than Neanderthals, five hundred Sapiens together were far more intelligent than fifty Neanderthals.22

    All this was made possible by stories. The power of stories is often missed or denied by materialist interpretations of history. In particular, Marxists tend to view stories as merely a smoke screen for underlying power relations and material interests. According to Marxist theories, people are always motivated by objective material interests and use stories only to camouflage these interests and confound their rivals. For example, in this reading the Crusades, World War I, and the Iraq War were all fought for the economic interests of powerful elites rather than for religious, nationalist, or liberal ideals. Understanding these wars means setting aside all the mythological fig leaves—about God, patriotism, or democracy—and observing power relations in their nakedness.

    This Marxist view, however, is not only cynical but wrong. While materialist interests certainly played a role in the Crusades, World War I, the Iraq War, and most other human conflicts, that does not mean that religious, national, and liberal ideals played no role at all. Moreover, materialist interests by themselves cannot explain the identities of the rival camps. Why is it that in the twelfth century landowners and merchants from France, Germany, and Italy united to conquer territories and trade routes in the Levant—instead of landowners and merchants from France and North Africa uniting to conquer Italy? And why is it that in 2003, the United States and Britain sought to conquer the oil fields of Iraq, rather than the gas fields of Norway? Can this really be explained by purely materialist considerations, without any recourse to people’s religious and ideological beliefs?

    In fact, all relations between large-scale human groups are shaped by stories, because the identities of these groups are themselves defined by stories. There are no objective definitions for who is British, American, Norwegian, or Iraqi; all these identities are shaped by national and religious myths that are constantly challenged and revised. Marxists may claim that large-scale groups have objective identities and interests, independent of stories. If that is so, how can we explain that only humans have large-scale groups like tribes, nations, and religions, whereas chimpanzees lack them? After all, chimpanzees share with humans all our objective material interests; they too need to drink, eat, and protect themselves from diseases. They too want sex and social power. But chimpanzees cannot maintain large-scale groups, because they are unable to create the stories that connect such groups and define their identities and interests. Contrary to Marxist thinking, large-scale identities and interests in history are always intersubjective; they are never objective.

    This is good news. If history had been shaped solely by material interests and power struggles, there would be no point talking to people who disagree with us. Any conflict would ultimately be the result of objective power relations, which cannot be changed merely by talking. In particular, if privileged people can see and believe only those things that enshrine their privileges, how can anything except violence persuade them to renounce those privileges and alter their beliefs? Luckily, since history is shaped by intersubjective stories, sometimes we can avert conflict and make peace by talking with people, changing the stories in which they and we believe, or coming up with a new story that everyone can accept.

    Take, for example, the rise of Nazism. There certainly were material interests that drove millions of Germans to support Hitler. The Nazis would probably never have come to power if it wasn’t for the economic crisis of the early 1930s. However, it is wrong to think that the Third Reich was the inevitable outcome of underlying power relations and material interests. Hitler won the 1933 elections because during the economic crisis millions of Germans came to believe the Nazi story rather than one of the alternative stories on offer. This wasn’t the inevitable result of Germans pursuing their material interests and protecting their privileges; it was a tragic mistake. We can confidently say that it was a mistake, and that Germans could have chosen better stories, because we know what happened next. Twelve years of Nazi rule didn’t foster the Germans’ material interests. Nazism led to the destruction of Germany and the deaths of millions. Later, when Germans adopted liberal democracy, this did lead to a lasting improvement in their lives. Couldn’t the Germans have skipped the failed Nazi experiment and put their faith in liberal democracy already in the early 1930s? The position of this book is that they could have. History is often shaped not by deterministic power relations, but rather by tragic mistakes that result from believing in mesmerizing but harmful stories.

    THE NOBLE LIE

    The centrality of stories reveals something fundamental about the power of our species, and it explains why power doesn’t always go hand in hand with wisdom. The naive view of information says that information leads to truth, and knowing the truth helps people to gain both power and wisdom. This sounds reassuring. It implies that people who ignore the truth are unlikely to have much power, whereas people who respect the truth can gain much power, but that power would be tempered by wisdom. For example, people who ignore the truth about human biology might believe racist myths but will not be able to produce powerful medicines and bioweapons, whereas people who understand biology will have that kind of power but will not use it in the service of racist ideologies. If this had indeed been the case, we could sleep calmly, trusting our presidents, high priests, and CEOs to be wise and honest. A politician, a movement, or a country might conceivably get ahead here and there with the help of lies and deceptions, but in the long term that would be a self-defeating strategy.

    Unfortunately, this is not the world in which we live. In history, power stems only partially from knowing the truth. It also stems from the ability to maintain social order among a large number of people. Suppose you want to make an atom bomb. To succeed, you obviously need some accurate knowledge of physics. But you also need lots of people to mine uranium ore, build nuclear reactors, and provide food for the construction workers, miners, and physicists. The Manhattan Project directly employed about 130,000 people, with millions more working to sustain them.23 Robert Oppenheimer could devote himself to his equations because he relied on thousands of miners to extract uranium at the Eldorado mine in northern Canada and the Shinkolobwe mine in the Belgian Congo24—not to mention the farmers who grew potatoes for his lunch. If you want to make an atom bomb, you must find a way to make millions of people cooperate.

    It is the same with all ambitious projects that humans undertake. A Stone Age band going to hunt a mammoth obviously needed to know some true facts about mammoths. If they believed they could kill a mammoth by casting spells, their hunting expedition would have failed. But just knowing facts about mammoths wasn’t enough, either. The hunters also needed to make sure all of them agreed on the same plan and bravely did their bit even in the face of mortal danger. If they believed that by pronouncing a spell they could guarantee a good afterlife for dead hunters, their hunting expeditions had a much higher chance of success. Even if objectively the spell was powerless and did not benefit dead hunters in any way, by fortifying the courage and solidarity of living hunters, it nevertheless made a crucial contribution to the hunt’s success.25

    While power depends on both truth and order, in most cases it is the people who know how to maintain order who call the shots, giving instructions to the people who merely know the truth about things like mammoths or nuclear physics. Robert Oppenheimer obeyed Franklin Delano Roosevelt rather than the other way around. Similarly, Werner Heisenberg obeyed Adolf Hitler, Igor Kurchatov deferred to Joseph Stalin, and in contemporary Iran experts in nuclear physics follow the orders of experts in Shiite theology.

    What the people at the top know, which nuclear physicists don’t always realize, is that telling the truth about the universe is hardly the most efficient way to produce order among large numbers of humans. It is true that E = mc², and it explains a lot of what happens in the universe, but knowing that E = mc² usually doesn’t resolve political disagreements or inspire people to make sacrifices for a common cause. Instead, what holds human networks together tends to be fictional stories, especially stories about intersubjective things like gods, money, and nations. When it comes to uniting people, fiction enjoys two inherent advantages over the truth. First, fiction can be made as simple as we like, whereas the truth tends to be complicated, because the reality it is supposed to represent is complicated. Take, for example, the truth about nations. It is difficult to grasp that the nation to which one belongs is an intersubjective entity that exists only in our collective imagination. You rarely hear politicians say such things in their political speeches. It is far easier to believe that our nation is God’s chosen people, entrusted by the Creator with some special mission. This simple story has been repeatedly told by countless politicians from Israel to Iran and from the United States to Russia.

    Second, the truth is often painful and disturbing, and if we try to make it more comforting and flattering, it will no longer be the truth. In contrast, fiction is highly malleable. The history of every nation contains some dark episodes that citizens don’t like to acknowledge and remember. An Israeli politician who in her election speeches details the miseries inflicted on Palestinian civilians by the Israeli occupation is unlikely to get many votes. In contrast, a politician who builds a national myth by ignoring uncomfortable facts, focusing on glorious moments in the Jewish past, and embellishing reality wherever necessary may well sweep to power. That’s the case not just in Israel but in all countries. How many Italians or Indians want to hear the unblemished truth about their nations? An uncompromising adherence to the truth is essential for scientific progress, and it is also an admirable spiritual practice, but it is not a winning political strategy.

    Already in his Republic, Plato imagined that the constitution of his utopian state would be based on “the noble lie”—a fictional story about the origin of the social order, one that secures the citizens’ loyalty and prevents them from questioning the constitution. Citizens should be told, Plato wrote, that they were all born out of the earth, that the land is their mother, and that they therefore owe filial loyalty to the motherland. They should further be told that when they were conceived, the gods intermingled different metals—gold, silver, bronze, and iron—into them, which justifies a natural hierarchy between golden rulers and bronze servants. While Plato’s utopia was never realized in practice, numerous polities through the ages told their inhabitants variations of this noble lie.

    Plato’s noble lie notwithstanding, we should not conclude that all politicians are liars or that all national histories are deceptions. The choice isn’t simply between telling the truth and lying. There is a third option. Telling a fictional story is lying only when you pretend that the story is a true representation of reality. Telling a fictional story isn’t lying when you avoid such pretense and acknowledge that you are trying to create a new intersubjective reality rather than represent a preexisting objective reality.

    For example, on September 17, 1787, the Constitutional Convention signed the U.S. Constitution, which came into force in 1789. The Constitution didn’t reveal any preexisting truth about the world, but crucially it wasn’t a lie, either. Rejecting Plato’s recommendation, the authors of the text didn’t deceive anyone about the text’s origins. They didn’t pretend that the text came down from heaven or that it had been inspired by some god. Rather, they acknowledged that it was an extremely creative legal fiction generated by fallible human beings.

    “We the People of the United States,” says the Constitution about its own origins, “in Order to form a more perfect Union … do ordain and establish this Constitution.” Despite the acknowledgment that it is a human-made legal fiction, the U.S. Constitution indeed managed to form a powerful union. It maintained for more than two centuries a surprising degree of order among many millions of people who belonged to a wide range of religious, ethnic, and cultural groups. The U.S. Constitution has thus functioned like a tune that without claiming to represent anything has nevertheless made numerous people act together in order.

    It is crucial to note that “order” should not be confused with fairness or justice. The order created and maintained by the U.S. Constitution condoned slavery, the subordination of women, the expropriation of indigenous people, and extreme economic inequality. The genius of the U.S. Constitution is that by acknowledging that it is a legal fiction created by human beings, it was able to provide mechanisms to reach agreement on amending itself and remedying its own injustices (as chapter 5 explores in greater depth). The Constitution’s Article V details how people can propose and ratify such amendments, which “shall be valid to all Intents and Purposes, as Part of this Constitution.” Less than a century after the Constitution was written, the Thirteenth Amendment abolished slavery.

    In this, the U.S. Constitution was fundamentally different from stories that denied their fictive nature and claimed divine origin, such as the Ten Commandments. Like the U.S. Constitution, the Ten Commandments endorsed slavery. The Tenth Commandment says, “You shall not covet your neighbor’s house. You shall not covet your neighbor’s wife, or his male slave or female slave” (Exodus 20:17). This implies that God is perfectly okay with people holding slaves, and objects only to the coveting of slaves belonging to someone else. But unlike the U.S. Constitution, the Ten Commandments failed to provide any amendment mechanism. There is no Eleventh Commandment that says, “You can amend commandments by a two-thirds majority vote.”

    This crucial difference between the two texts is clear from their opening gambits. The U.S. Constitution opens with “We the People.” By acknowledging its human origin, it invests humans with the power to amend it. The Ten Commandments open with “I am the Lord your God.” By claiming divine origin, it precludes humans from changing it. As a result, the biblical text still endorses slavery even today.

    All human political systems are based on fictions, but some admit it, and some do not. Being truthful about the origins of our social order makes it easier to make changes in it. If humans like us invented it, we can amend it. But such truthfulness comes at a price. Acknowledging the human origins of the social order makes it harder to persuade everyone to agree on it. If humans like us invented it, why should we accept it? As we shall see in chapter 5, until the late eighteenth century the lack of mass communication technology made it extremely difficult to conduct open debates between millions of people about the rules of the social order. To maintain order, Russian tsars, Muslim caliphs, and Chinese sons of heaven therefore claimed that the fundamental rules of society came down from heaven and were not open to human amendment. In the early twenty-first century, many political systems still claim superhuman authority and oppose open debates that may result in unwelcome changes.

    THE PERENNIAL DILEMMA

    After we understand the key role of fiction in history, it is finally possible to present a more complete model of information networks, which goes beyond both the naive view of information and the populist critique of that view. Contrary to the naive view, information isn’t the raw material of truth, and human information networks aren’t geared only to discover the truth. But contrary to the populist view, information isn’t just a weapon, either. Rather, to survive and flourish, every human information network needs to do two things simultaneously: discover truth and create order. Accordingly, as history unfolded, human information networks have been developing two distinct sets of skills. On the one hand, as the naive view expects, the networks have learned how to process information to gain a more accurate understanding of things like medicine, mammoths, and nuclear physics. At the same time, the networks have also learned how to use information to maintain stronger social order among larger populations, by using not just truthful accounts but also fictions, fantasies, propaganda, and—occasionally—downright lies.

    The naive view of information
    A more complete historical view of information

    Having a lot of information doesn’t in and of itself guarantee either truth or order. It is a difficult process to use information to discover the truth and simultaneously use it to maintain order. What makes things worse is that these two processes are often contradictory, because it is frequently easier to maintain order through fictions. Sometimes—as in the case of the U.S. Constitution—fictional stories may acknowledge their fictionality, but more often they disavow it. Religions, for example, always claim to be an objective and eternal truth rather than a fictional story invented by humans. In such cases, the search for truth threatens the foundations of the social order. Many societies require their populations not to know their true origins: ignorance is strength. What happens, then, when people get uncomfortably close to the truth? What happens when the same bit of information reveals an important fact about the world, and also undermines the noble lie that holds society together? In such cases society may seek to preserve order by placing limits on the search for truth.

    One obvious example is Darwin’s theory of evolution. Understanding evolution greatly advances our understanding of the origins and biology of species, including Homo sapiens, but it also undermines the central myths that maintain order in numerous societies. No wonder that various governments and churches have banned or limited the teaching of evolution, preferring to sacrifice truth for the sake of order.26

    A related problem is that an information network may allow and even encourage people to search for truth, but only in specific fields that help generate power without threatening the social order. The result can be a very powerful network that is singularly lacking in wisdom. Nazi Germany, for example, cultivated many of the world’s leading experts in chemistry, optics, engineering, and rocket science. It was largely Nazi rocket science that later brought the Americans to the moon.27 This scientific prowess helped the Nazis build an extremely powerful war machine, which was then deployed in the service of a deranged and murderous mythology. Under Nazi rule Germans were encouraged to develop rocket science, but they were not free to question racist theories about biology and history.

    That’s a major reason why the history of human information networks isn’t a triumphant march of progress. While over the generations human networks have grown increasingly powerful, they have not necessarily grown increasingly wise. If a network privileges order over truth, it can become very powerful but use that power unwisely.

    Instead of a march of progress, the history of human information networks is a tightrope walk trying to balance truth with order. In the twenty-first century we aren’t much better at finding the right balance than our ancestors were in the Stone Age. Contrary to what the mission statements of corporations like Google and Facebook imply, simply increasing the speed and efficiency of our information technology doesn’t necessarily make the world a better place. It only makes the need to balance truth and order more urgent. The invention of the story taught us this lesson already tens of thousands of years ago. And the same lesson would be taught again, when humans came up with their second great information technology: the written document.

    CHAPTER 3 Documents: The Bite of the Paper Tigers

    Stories were the first crucial information technology developed by humans. They laid the foundation for all large-scale human cooperation and made humans the most powerful animals on earth. But as an information technology, stories have their limitations.

    To appreciate this, consider the role storytelling plays in the formation of nations. Many nations have first been conceived in the imagination of poets. Sarah Aaronsohn and the NILI underground are remembered by present-day Israelis as some of the first Zionists who risked their lives in the 1910s to establish a Jewish state in Palestine, but from where did NILI members get this idea in the first place? They were inspired by an earlier generation of poets, thinkers, and visionaries such as Theodor Herzl and Hayim Nahman Bialik.

    In the 1890s and first decade of the twentieth century, Bialik, a Ukrainian Jew, published numerous poems and stories bewailing the persecution and weakness of European Jews and calling on them to take their fate in their hands—to defend themselves by force of arms, immigrate to Palestine, and there establish their own state. One of his most stirring poems was written following the Kishinev Pogrom of 1903, in which forty-nine Jews were murdered and dozens more were injured.1 “In the City of Slaughter” condemned the murderous antisemitic mob who perpetrated the atrocities, but it also criticized the Jews themselves for their pacifism and helplessness.

    In one heart-wrenching scene, Bialik described how Jewish women were gang-raped, while their husbands and brothers hid nearby, afraid to intervene. The poem compares the Jewish men to terrified mice and imagines how they quietly prayed to God to perform some miracle, which failed to materialize. The poem then tells how even after the pogrom was over, the survivors had no thought of arming themselves and instead entered Talmudic disputations about whether the raped women were now ritualistically “defiled” or whether they were still “pure.” This poem is mandatory reading in many Israeli schools today. It is also mandatory reading for anyone wishing to understand how after two millennia of being one of the most pacifist groups in history, Jews built one of the most formidable armies in the world. Not for nothing was Bialik named Israel’s national poet.2

    The fact that Bialik lived in Ukraine, and was intimately familiar with the persecution of Ashkenazi Jews in eastern Europe but had little understanding of conditions in Palestine, contributed to the subsequent conflict there between Jews and Arabs. Bialik’s poems inspired Jews to see themselves as victims in dire need of developing their military might and building their own country, but hardly considered the catastrophic consequences for the Arab inhabitants of Palestine, or indeed for the Mizrahi Jewish communities native to the Middle East. When the Arab-Israeli conflict exploded in the late 1940s, hundreds of thousands of Palestinians and hundreds of thousands of Mizrahi Jews were driven out of their ancestral homes in the Middle East, partly as a result of poems composed half a century earlier in Ukraine.3

    While Bialik was writing in Ukraine, the Hungarian Jew Theodor Herzl was busy organizing the Zionist movement in the 1890s and early years of the twentieth century. As a central part of his political activism, Herzl published two books. The Jewish State (1896) was a manifesto outlining Herzl’s idea of establishing a Jewish state in Palestine, and The Old New Land (1902) was a utopian novel set in the year 1923 describing the prosperous Jewish state that Herzl envisioned. The two books—which fatefully also tended to ignore realities on the ground in Palestine—were immensely influential in shaping the Zionist movement. The Old New Land appeared in Hebrew under the title Tel Aviv (a loose Hebrew translation of “Old New Land”). The city of Tel Aviv, established seven years after the book’s publication, took its name from the book. While Bialik is Israel’s national poet, Herzl is known as the visionary of the state.

    The yarns Bialik and Herzl wove ignored many crucial facts about contemporary reality, most notably that around 1900 the Jews of Palestine comprised only 6–9 percent of the region’s total population of about 600,000 people.4 While disregarding such demographic facts, Bialik and Herzl accorded great importance to mythology, most notably the stories of the Bible, without which modern Zionism is unimaginable. Bialik and Herzl were also influenced by the nationalist myths that were created in the nineteenth century by almost every other ethnic group in Europe. The Ukrainian Jew Bialik and the Hungarian Jew Herzl did for Zionism what was earlier done by the poets Taras Shevchenko for Ukrainian nationalism,5 Sándor Petőfi for Hungarian nationalism,6 and Adam Mickiewicz for Polish nationalism.7 Observing the growth of other national movements all around, Herzl wrote that nations arise “out of dreams, songs, fantasies.”8

    But dreams, songs, and fantasies, however inspiring, are not enough to create a functioning nation-state. Bialik inspired generations of Jewish fighters, but to equip and maintain an army, it is also necessary to raise taxes and buy guns. Herzl’s utopian book laid the foundations for the city of Tel Aviv, but to keep the city going, it was also necessary to dig a sewage system. When all is said and done, the essence of patriotism isn’t reciting stirring poems about the beauty of the motherland, and it certainly isn’t making hate-filled speeches against foreigners and minorities. Rather, patriotism means paying your taxes so that people on the other side of the country also enjoy the benefit of a sewage system, as well as security, education, and health care.

    To manage all these services and raise the necessary taxes, enormous amounts of information need to be collected, stored, and processed: information about properties, payments, exemptions, discounts, debts, inventories, shipments, budgets, bills, and salaries. This, however, is not the kind of information that can be turned into a memorable poem or a captivating myth. Instead, tax records come in the shape of various types of lists, ranging from a simple item-by-item record to more elaborate tables and spreadsheets. No matter how intricate these data sets may become, they eschew narrative in favor of dryly listing amounts owed and amounts paid. Poets can afford to ignore such mundane facts, but tax collectors cannot.

    Lists are crucial not only for national taxation systems but also for almost all other complex financial institutions. Corporations, banks, and stock markets cannot exist without them. A church, a university, or a library that wants to balance its budget soon realizes that in addition to priests and poets who can mesmerize people with stories, it needs accountants who know their way around the various types of lists.

    Lists and stories are complementary. National myths legitimize the tax records, while the tax records help transform aspirational stories into concrete schools and hospitals. Something analogous happens in the field of finance. The dollar, the pound sterling, and the bitcoin all come into being by persuading people to believe a story, and tales told by bankers, finance ministers, and investment gurus raise or lower their value. When the chairperson of the Federal Reserve wants to curb inflation, when a finance minister wants to pass a new budget, and when a tech entrepreneur wants to draw investors, they all turn to storytelling. But to actually manage a bank, a budget, or a start-up, lists are essential.

    The big problem with lists, and the crucial difference between lists and stories, is that lists tend to be far more boring than stories, which means that while we easily remember stories, we find it difficult to remember lists. This is an important fact about how the human brain processes information. Evolution has adapted our brains to be good at absorbing, retaining, and processing even very large quantities of information when they are shaped into a story. The Ramayana, one of the foundational tales of Hindu mythology, is twenty-four thousand verses long and runs to about seventeen hundred pages in modern editions, yet despite its enormous length generations of Hindus succeeded in remembering and reciting it by heart.9

    In the twentieth and twenty-first centuries, the Ramayana was repeatedly adapted for film and television. In 1987–88, a seventy-eight-episode version (running to about 2,730 hours) was the most watched television series in the world, with more than 650 million viewers. According to a BBC report, when episodes were aired, “streets would be deserted, shops would be closed, and people would bathe and garland their TV sets.” During the 2020 COVID-19 lockdown the series was re-aired and again became the most watched show in the world.10 While modern TV audiences need not memorize any texts by heart, it is noteworthy how easy they find it to follow the intricate plots of epic dramas, detective thrillers, and soap operas, recalling who each character is and how they are related to numerous others. We are so accustomed to performing such feats of memory that we seldom consider how extraordinary they are.

    What makes us so good at remembering epic poems and long-running TV series is that long-term human memory is particularly adapted to retaining stories. As Kendall Haven writes in his 2007 book Story Proof: The Science Behind the Startling Power of Story, “Human minds … rely on stories and on story architecture as the primary roadmap for understanding, making sense of, remembering, and planning our lives.… Lives are like stories because we think in story terms.” Haven references more than 120 academic studies, concluding that “research overwhelmingly, convincingly, and without opposition provides the evidence” that stories are a highly efficient “vehicle for communicating factual, conceptual, emotional, and tacit information.”11

    In contrast, most people find it hard to remember lists by heart, and few people would be interested in watching a TV recitation of India’s tax records or annual budget. Mnemonic methods used to memorize lists of items often work by weaving the items into a plot, thereby turning the list into a story.12 But even with the help of such mnemonic devices, who could remember their country’s tax records or budget? The information may be vital—determining what quality of health care, education, and welfare services citizens enjoy—but our brains are not adapted to remembering such things. Unlike national poems and myths, which can be stored in our brains, complex national taxation and administration systems have required a unique nonorganic information technology in order to function. This technology is the written document.

    TO KILL A LOAN

    The written document was invented many times in many places. Some of the earliest examples come from ancient Mesopotamia. A cuneiform clay tablet dated to the twenty-eighth day of the tenth month of the forty-first year of the reign of King Shulgi of Ur (ca. 2053/4 BCE) recorded the monthly deliveries of sheep and goats. Fifteen sheep were delivered on the second day of the month, 7 sheep on the third day, 11 sheep on the fourth, 219 on the fifth, 47 on the sixth, and so on until 3 sheep were delivered on the twenty-eighth. In total, says the clay tablet, 896 animals were received that month. Remembering all these deliveries was important for the royal administration, to monitor people’s obedience and to keep track of available resources. While doing so in one’s head was a formidable challenge, it was easy for a learned scribe to write them down on a clay tablet.13

    Like stories and like all other information technologies in history, written documents didn’t necessarily represent reality accurately. The Ur tablet, for example, contained a mistake. The document says that a total of 896 animals were received during that month, but when modern scholars added up all the individual entries they reached a total of 898. The scribe who wrote the document apparently made a mistake when he calculated the overall tally, and the tablet preserved this mistake for posterity.

    But whether true or false, written documents created new realities. By recording lists of properties, taxes, and payments, they made it far easier to create administrative systems, kingdoms, religious organizations, and trade networks. More specifically, documents changed the method used for creating intersubjective realities. In oral cultures, intersubjective realities were created by telling a story that many people repeated with their mouths and remembered in their brains. Brain capacity consequently placed a limit on the kinds of intersubjective realities that humans created. Humans couldn’t forge an intersubjective reality that their brains couldn’t remember.

    This limit could be transcended, however, by writing documents. The documents didn’t represent an objective empirical reality; the reality was the documents themselves. As we shall see in later chapters, written documents thereby provided precedents and models that would eventually be used by computers. The ability of computers to create intersubjective realities is an extension of the power of clay tablets and pieces of paper.

    As a key example, consider ownership. In oral communities that lacked written documents, ownership was an intersubjective reality created through the words and behaviors of the community members. To own a field meant that your neighbors agreed that this field was yours and behaved accordingly. They didn’t build a hut on that field, graze their livestock there, or pick fruits there without first asking your permission. Ownership was created and maintained by people continuously saying or signaling things to one another. This made ownership the affair of a local community and placed a limit on the ability of a distant central authority to control all landownership. No king, minister, or priest could remember who owned each field in hundreds of distant villages. This also placed a limit on the ability of individuals to claim and exercise absolute property rights, and instead favored various forms of communal property rights. For example, your neighbors might acknowledge your right to cultivate a field but not your right to sell it to foreigners.14

    In a literate state, to own a field increasingly came to mean that it is written on some clay tablet, bamboo strip, piece of paper, or silicon chip that you own that field. If your neighbors have been grazing their sheep for years on a piece of land, and none of them ever said that you own it, but you can somehow produce an official document that says it is yours, you have a good chance of enforcing your claim. Conversely, if all the neighbors agree that it is your field but you don’t have any official document that proves it, tough luck. Ownership is still an intersubjective reality created by exchanging information, but the information now takes the form of a written document (or a computer file) rather than of people talking and gesturing to each other. This means that ownership can now be determined by a central authority that produces and holds the relevant documents. It also means that you can sell your field without asking your neighbors’ permission, simply by transferring the crucial document to someone else.

    The power of documents to create intersubjective realities was beautifully manifested in the Old Assyrian dialect, which treated documents as living things that could also be killed. Loan contracts were “killed” (duākum) when the debt was repaid. This was done by destroying the tablet, adding some mark to it, or breaking its seal. The loan contract didn’t represent reality; it was the reality. If somebody repaid the loan but failed to “kill the document,” the debt was still owed. Conversely, if somebody didn’t repay the loan but the document “died” in some other way—perhaps the dog ate it—the debt was no more.15 The same happens with money. If your dog eats a hundred-dollar bill, those hundred dollars cease to exist.

    In Shulgi’s Ur, in ancient Assyria, and in numerous subsequent polities, social, economic, and political relations relied on documents that create reality instead of merely representing it. When writing constitutions, peace treaties, and commercial contracts, lawyers, politicians, and businesspeople wrangle for weeks and even months over each word—because they know that these pieces of paper can wield enormous power.

    BUREAUCRACY

    Every new information technology has its unexpected bottlenecks. It solves some old problems but creates new ones. In the early 1730s BCE, Narâmtani, a priestess in the Mesopotamian city of Sippar, wrote a letter (on a clay tablet) to a relative, asking him to send her a few clay tablets he kept in his house. She explained that her claim to an inheritance was being contested and she couldn’t prove her case in court without those documents. She ended her message with a plea: “Now, do not neglect me!”16

    We don’t know what happened next, but just imagine the situation if the relative searched his house but could not find the missing tablets. As people produced more and more documents, finding them turned out to be far from easy. This was a particular challenge for kings, priests, merchants, and anyone else who accumulated thousands of documents in their archives. How do you find the right tax record, payment receipt, or business contract when you need it? Written documents were much better than human brains in recording certain types of information. But they created a new and very thorny problem: retrieval.17

    The brain is remarkably efficient in retrieving whatever information is stored in its network of tens of billions of neurons and trillions of synapses. Though our brain archives countless complex stories about our personal life, our national history, and our religious mythology, healthy people can retrieve information about any of them in less than a second. What did you eat for breakfast? Who was your first crush? When did your country gain its independence? What’s the first verse in the Bible?

    How did you retrieve all these pieces of information? What mechanism activates the right neurons and synapses to rapidly call up the necessary information? Though neuroscientists have made some progress in the study of memory, nobody yet understands what memories are, or how exactly they are stored and retrieved.18 What we do know is that millions of years of evolution streamlined the brain’s retrieval processes. However, once humans have outsourced memories from organic brains to inorganic documents, retrieval could no longer rely on that streamlined biological system. Nor could it rely on the foraging abilities that humans evolved over millions of years. Evolution has adapted humans for finding fruits and mushrooms in a forest, but not for finding documents in an archive.

    Foragers locate fruits and mushrooms in a forest, because evolution has organized forests according to a discernible organic order. Fruit trees photosynthesize, so they require sunlight. Mushrooms feed on dead organic matter, which can usually be found in the ground. So mushrooms are usually down at soil level, whereas fruits grow further up. Another common rule is that apples grow on apple trees, whereas figs grow on figs trees. So if you are looking for an apple, you first need to locate an apple tree, and then look up. When living in a forest, humans learn this organic order.

    It is very different with archives. Since documents aren’t organisms, they don’t obey any biological laws, and evolution didn’t organize them for us. Tax reports don’t grow on a tax-report shelf. They need to be placed there. For that, somebody first needs to come up with the idea of categorizing information by shelves, and to decide which documents should go on which shelf. Unlike foragers, who need merely to discover the preexisting order of the forest, archivists need to devise a new order for the world. That order is called bureaucracy.

    Bureaucracy is the way people in large organizations solved the retrieval problem and thereby created bigger and more powerful information networks. But like mythology, bureaucracy too tends to sacrifice truth for order. By inventing a new order and imposing it on the world, bureaucracy distorted people’s understanding of the world in unique ways. Many of the problems of our twenty-first-century information networks—like biased algorithms that mislabel people, or rigid protocols that ignore human needs and feelings—are not new problems of the computer age. They are quintessential bureaucratic problems that have existed long before anyone even dreamed of computers.

    BUREAUCRACY AND THE SEARCH FOR TRUTH

    Bureaucracy literally means “rule by writing desk.” The term was invented in eighteenth-century France, when the typical official sat next to a writing desk with drawers—a bureau.19 At the heart of the bureaucratic order, then, is the drawer. Bureaucracy seeks to solve the retrieval problem by dividing the world into drawers, and knowing which document goes into which drawer.

    The principle remains the same regardless of whether the document is placed into a drawer, a shelf, a basket, a jar, a computer folder, or any other receptacle: divide and rule. Divide the world into containers, and keep the containers separate so the documents don’t get mixed up. This principle, however, comes with a price. Instead of focusing on understanding the world as it is, bureaucracy is often busy imposing a new and artificial order on the world. Bureaucrats begin by inventing various drawers, which are intersubjective realities that don’t necessarily correspond to any objective divisions in the world. The bureaucrats then try to force the world to fit into these drawers, and if the fit isn’t very good, the bureaucrats push harder. Anyone who ever filled out an official form knows this only too well. When you fill out the form, and none of the listed options fits your circumstances, you must adapt yourself to the form, rather than the form adapting to you. Reducing the messiness of reality to a limited number of fixed drawers helps bureaucrats keep order, but it comes at the expense of truth. Because they are fixated on their drawers—even when reality is far more complex—bureaucrats often develop a distorted understanding of the world.

    The urge to divide reality into rigid drawers also leads bureaucrats to pursue narrow goals irrespective of the wider impact of their actions. A bureaucrat tasked with increasing industrial production is likely to ignore environmental considerations that fall outside her purview, and perhaps dump toxic waste into a nearby river, leading to an ecological disaster downstream. If the government then establishes a new department to combat pollution, its bureaucrats are likely to push for ever more stringent regulations, even if this results in economic ruin for communities upstream. Ideally, someone should be able to take into account all the different considerations and aspects, but such a holistic approach requires transcending or abolishing the bureaucratic division.

    The distortions created by bureaucracy affect not only government agencies and private corporations but also scientific disciplines. Consider, for example, how universities are divided into different faculties and departments. History is separate from biology and from mathematics. Why? Certainly this division doesn’t reflect objective reality. It is the intersubjective invention of academic bureaucrats. The COVID-19 pandemic, for example, was at one and the same time a historical, biological, and mathematical event. But the academic study of pandemics is divided between the separate departments of history, biology, and mathematics (among others). Students pursuing an academic degree must usually decide to which of these departments they belong. Their decision limits their choice of courses, which in turn shapes their understanding of the world. Mathematics students learn how to predict future morbidity levels from present rates of infection; biology students learn how viruses mutate over time; and history students learn how religious and political beliefs affect people’s willingness to follow government instructions. To fully understand COVID-19 requires taking into account mathematical, biological, and historical phenomena, but academic bureaucracy doesn’t encourage such a holistic approach.

    As you climb the academic ladder, the pressure to specialize only increases. The academic world is ruled by the law of publish or perish. If you want a job, you must publish in peer-reviewed journals. But journals are divided by discipline, and publishing an article on virus mutations in a biology journal demands following different conventions from publishing an article on the politics of pandemics in a history journal. There are different jargons, different citation rules, and different expectations. Historians should have a deep understanding of culture and know how to read and interpret historical documents. Biologists should have a deep understanding of evolution and know how to read and interpret DNA molecules. Things that fall in between categories—like the interplay between human political ideologies and virus evolution—are often left unaddressed.20

    To appreciate how academics force a messy and fluid world into rigid bureaucratic categories, let’s dig a little deeper in the specific discipline of biology. Before Darwin could explain the origin of species, earlier scholars like Carl Linnaeus first had to define what a species is and classify all living organisms into species. To argue that lions and tigers evolved from a common feline ancestor, you first have to define “lions” and “tigers.”21 This turned out to be a difficult and never-ending job, because animals, plants, and other organisms often trespass the boundaries of their allotted drawers.

    Evolution cannot be easily contained in any bureaucratic schema. The whole point of evolution is that species continually change, which means that putting each species in one unchanging drawer distorts biological reality. For example, it is an open question when Homo erectus ended and Homo sapiens began. Were there once two Erectus parents whose child was the first Sapiens?22 Species also keep intermingling, with animals belonging to seemingly separate species not only having sex but even siring fertile offspring. Most Sapiens living today have about 1–3 percent Neanderthal DNA,23 indicating that there once was a child whose father was a Neanderthal and whose mother was a Sapiens (or vice versa). So are Sapiens and Neanderthals the same species or different species? And is “species” an objective reality that biologists discover, or is it an intersubjective reality that biologists impose?24

    There are numerous other examples of animals breaking out of their drawers, so the neat bureaucratic division fails to accurately categorize ring species, fusion species, and hybrids.25 Grizzly bears and polar bears sometimes produce pizzly bears and grolar bears.26 Lions and tigers produce ligers and tigons.27

    When we shift our attention from mammals and other multicellular organisms to the world of single-cell bacteria and archaea, we discover that anarchy reigns. In a process known as horizontal gene transfer, single-cell organisms routinely exchange genetic material not only with organisms from related species but also with organisms from entirely different genera, kingdoms, orders, and even domains. Bacteriologists have a very difficult job keeping tabs on these chimeras.28

    And when we reach the very edge of life and consider viruses like SARS-CoV-2 (responsible for COVID-19), things become even more complicated. Viruses straddle the supposed rigid boundary between living beings and lifeless matter—between biology and chemistry. Unlike bacteria, viruses aren’t single-cell organisms. They aren’t cells at all, and don’t possess any cellular machinery of their own. Viruses don’t eat or metabolize, and cannot reproduce by themselves. They are tiny packets of genetic code, which are able to penetrate cells, hijack their cellular machinery, and instruct them to produce more copies of that alien genetic code. The new copies burst out of the cell to infect and hijack more cells, which is how the alien code turns viral. Scientists argue endlessly about whether viruses should count as life-forms or whether they fall outside the boundary of life.29 But this boundary isn’t an objective reality; it is an intersubjective convention. Even if biologists reach a consensus that viruses are life-forms, it wouldn’t change anything about how viruses behave; it will only change how humans think about them.

    Of course, intersubjective conventions are themselves part of reality. As we humans become more powerful, so our intersubjective beliefs become more consequential for the world outside our information networks. For example, scientists and legislators have categorized species according to the threat of extinction they face, on a scale ranging from “least concern” through “vulnerable” and “endangered” to “extinct.” Defining a particular population of animals as an “endangered species” is an intersubjective human convention, but it can have far-reaching consequences, for instance by imposing legal restrictions on hunting those animals or destroying their habitat. A bureaucratic decision about whether a certain animal belongs in the “endangered species” drawer or in the “vulnerable species” drawer could make the difference between life and death. As we shall see time and again in subsequent chapters, when a bureaucracy puts a label on you, even though the label might be pure convention, it can still determine your fate. That’s true whether the bureaucrat is a flesh-and-blood expert on animals; a flesh-and-blood expert on humans; or an inorganic AI.

    THE DEEP STATE

    In defense of bureaucracy it should be noted that while it sometimes sacrifices truth and distorts our understanding of the world, it often does so for the sake of order, without which it would be hard to maintain any large-scale human network. While bureaucracies are never perfect, is there a better way to manage big networks? For example, if we decided to abolish all conventional divisions in the academic world, all departments and faculties and specialized journals, would every prospective doctor be expected to devote several years to the study of history, and would people who studied the impact of the Black Death on Christian theology be considered expert virologists? Would it lead to better health-care systems?

    Anyone who fantasizes about abolishing all bureaucracies in favor of a more holistic approach to the world should reflect on the fact that hospitals too are bureaucratic institutions. They are divided into different departments, with hierarchies, protocols, and lots of forms to fill out. They suffer from many bureaucratic illnesses, but they still manage to cure us of many of our biological illnesses. The same goes for almost all the other services that make our life better, from our schools to our sewage system.

    When you flush the toilet, where does the waste go? It goes into the deep state. There is an intricate subterranean web of pipes, pumps, and tunnels that runs under our houses and collects our waste, separates it from the supply of drinking water, and either treats or safely disposes of it. Somebody needs to design, construct, and maintain that deep web, plug holes in it, monitor pollution levels, and pay the workers. That too is bureaucratic work, and we would face a lot of discomfort and even death if we abolished that particular department. Sewage water and drinking water are always in danger of mixing, but luckily for us there are bureaucrats who keep them separate.

    Prior to the establishment of modern sewage systems, waterborne infectious diseases like dysentery and cholera killed millions of people around the world.30 In 1854 hundreds of London residents began dying of cholera. It was a relatively small outbreak, but it proved to be a turning point in the history of cholera, of epidemics more generally, and of sewage. The leading medical theory of the day argued that cholera epidemics were caused by “bad air.” But the physician John Snow suspected that the cause was the water supply. He painstakingly tracked and listed all known cholera patients, their place of residence, and their source of water. The resulting data led him to identify the water pump on Broad Street in Soho as the epicenter of the outbreak.

    This was tedious bureaucratic work—collecting data, categorizing it, and mapping it—but it saved lives. Snow explained his findings to local officials, persuading them to disable the Broad Street pump, which effectively ended the outbreak. Subsequent research discovered that the well providing water to the Broad Street pump was dug less than a meter from a cholera-infected cesspit.31

    Snow’s discovery, and the work of many subsequent scientists, engineers, lawyers, and officials, resulted in a sprawling bureaucracy regulating cesspits, water pumps, and sewage lines. In today’s England, digging wells and constructing cesspits require filling out forms and getting licenses, which ensure that drinking water doesn’t come from a well someone dug next to a cesspit.32

    It is easy to forget about this system when it works well, but since 1854 it has saved millions of lives, and it is one of the most important services provided by modern states. In 2014, Prime Minister Narendra Modi of India identified the lack of toilets as one of India’s biggest problems. Open defecation is a major cause for spreading diseases like cholera, dysentery, and diarrhea, as well as exposing women and girls to sexual assaults. As part of his flagship Clean India Mission, Modi promised to provide all Indian citizens with access to toilets, and between 2014 and 2020 the Indian state invested around ten billion dollars in the project, building more than 100 million new latrines.33 Sewage isn’t the stuff of epic poems, but it is a test of a well-functioning state.

    THE BIOLOGICAL DRAMAS

    Mythology and bureaucracy are the twin pillars of every large-scale society. Yet while mythology tends to inspire fascination, bureaucracy tends to inspire suspicion. Despite the services they provide, even beneficial bureaucracies often fail to win the public’s trust. For many people, the very word “bureaucracy” carries negative connotations. This is because it is inherently difficult to know whether a bureaucratic system is beneficial or malicious. For all bureaucracies—good or bad—share one key characteristic: it is hard for humans to understand them.

    Any kid can tell the difference between a friend and a bully. You know if someone shares their lunch with you or instead takes yours. But when the tax collector comes to take a cut from your earnings, how can you tell whether it goes to build a new public sewage system or a new private dacha for the president? It is hard to get all the relevant information, and even harder to interpret it. It is similarly difficult for citizens to understand the bureaucratic procedures determining how pupils are admitted to schools, how patients are treated in hospitals, or how garbage is collected and recycled. It takes a minute to tweet allegations of bias, fraud, or corruption, and many weeks of arduous work to prove or disprove them.

    Documents, archives, forms, licenses, regulations, and other bureaucratic procedures have changed the way information flows in society, and with it the way power works. This made it far more difficult to understand power. What is happening behind the closed doors of offices and archives, where anonymous officials analyze and organize piles of documents and determine our fate with a stroke of a pen or a click of a mouse?

    In tribal societies that lack written documents and bureaucracies, the human network is composed of only human-to-human and human-to-story chains. Authority belongs to the people who control the junctions that link the various chains. These junctions are the tribe’s foundational myths. Charismatic leaders, orators, and mythmakers know how to use these stories in order to shape identities, build alliances, and sway emotions.34

    In human networks connected by written documents and bureaucratic procedures—from ancient Ur to modern India—society relies in part on the interaction between humans and documents. In addition to human-to-human and human-to-story chains, such societies are held together by human-to-document chains. When we observe a bureaucratic society at work, we still see humans telling stories to other humans, as when millions of Indians watch the Ramayana series, but we also see humans passing documents to other humans, as when TV networks are required to apply for broadcasting licenses and fill out tax reports. Looked at from a different perspective, what we see is documents compelling humans to engage with other documents.

    This led to shifts in authority. As documents became a crucial nexus linking many social chains, considerable power came to be invested in these documents, and experts in the arcane logic of documents emerged as new authority figures. Administrators, accountants, and lawyers mastered not just reading and writing but also the skills of composing forms, separating drawers, and managing archives. In bureaucratic systems, power often comes from understanding how to manipulate obscure budgetary loopholes and from knowing your way around the labyrinths of offices, committees, and subcommittees.

    This shift in authority changed the balance of power in the world. For better or worse, literate bureaucracies tended to strengthen the central authority at the expense of ordinary citizens. It’s not just that documents and archives made it easier for the center to tax, judge, and conscript everybody. The difficulty of understanding bureaucratic power simultaneously made it harder for the masses to influence, resist, or evade the central authority. Even when bureaucracy was a benign force, providing people with sewage systems, education, and security, it still tended to increase the gap between rulers and ruled. The system enabled the center to collect and record a lot more information about the people it governed, while the latter found it much more difficult to understand how the system itself worked.

    Art, which helps us understand many other aspects of life, offered only limited assistance in this case. Poets, playwrights, and moviemakers have occasionally focused on the dynamics of bureaucratic power. However, this has proven to be a very difficult story to communicate. Artists usually work with a limited set of story lines that are rooted in our biology, but none of these biological dramas sheds much light on the workings of bureaucracy, because they have all been scripted by evolution millions of years before the emergence of documents and archives. To understand what “biological dramas” are, and why they are a poor guide for understanding bureaucracy, let’s consider in detail the plot of one of humanity’s greatest artistic masterpieces—the Ramayana.

    One important plotline of the Ramayana concerns the relations between the eponymous prince, Rama, his father, King Dasharatha, and his stepmother, Queen Kaikeyi. Though Rama, being the eldest son, is the rightful heir to the kingdom, Kaikeyi persuades the king to banish Rama to the wilderness and bestow the succession instead on her son Bharata. Underlying this plotline are several biological dramas that go back hundreds of millions of years in mammalian and avian evolution.

    All mammal and bird offspring depend on their parents in the first stage of life, seek parental care, and fear parental neglect or hostility. Life and death hang in the balance. A cub or chick pushed out of the nest too soon is in danger of death from starvation or predation. Among humans, the fear of being neglected or abandoned by one’s parents is a template not just for children’s stories like Snow White, Cinderella, and Harry Potter but also for some of our most influential national and religious myths. The Ramayana is far from being the sole example. In Christian theology damnation is conceived as losing all contact with the mother church and the heavenly father. Hell is a lost child crying for his or her missing parents.

    A related biological drama, which is also familiar to human children, mammalian cubs, and avian chicks, is “Father loves me more than he loves you.” Biologists and geneticists have identified sibling rivalry as one of the key processes of evolution.35 Siblings routinely compete for food and parental attention, and in some species the killing of one sibling by another is commonplace. About a quarter of spotted hyena cubs are killed by their siblings, who typically enjoy greater parental care as a result.36 Among sand tiger sharks, females hold numerous embryos in their uterus. The first embryo that reaches about ten centimeters in length then eats all the others.37 The dynamics of sibling rivalry are manifested in numerous myths in addition to the Ramayana, for instance in the stories of Cain and Abel, King Lear, and the TV series Succession. Entire nations—like the Jewish people—may base their identity on the claim that “we are Father’s favorite children.”

    The second major plotline of the Ramayana focuses on the romantic triangle formed by Prince Rama, his lover, Sita, and the demon-king Ravana, who kidnaps Sita. “Boy meets girl” and “boy fights boy over girl” are also biological dramas that have been enacted by countless mammals, birds, reptiles, and fish for hundreds of millions of years. We are mesmerized by these stories because understanding them has been essential for our ancestors’ survival. Human storytellers like Homer, Shakespeare, and Valmiki—the purported author of the Ramayana—have displayed an amazing capacity to elaborate on the biological dramas, but even the greatest poetical narratives usually copy their basic plotline from the handbook of evolution.

    A third theme recurring in the Ramayana is the tension between purity and impurity, with Sita being the paragon of purity in Hindu culture. The cultural obsession with purity originates in the evolutionary struggle to avoid pollution. All animals are torn between the need to try new food and the fear of being poisoned. Evolution therefore equipped animals with both curiosity and the capacity to feel disgust on coming into contact with something toxic or otherwise dangerous.38 Politicians and prophets have learned how to manipulate these disgust mechanisms. In nationalist and religious myths, countries or churches are depicted as a biological body in danger of being polluted by impure intruders. For centuries bigots have often said that ethnic and religious minorities spread diseases,39 that LGBTQ people are a source of pollution,40 or that women are impure.41 During the Rwanda genocide of 1994, Hutu propaganda referred to the Tutsis as cockroaches. The Nazis compared Jews to rats. Experiments have shown that chimpanzees, too, react with disgust to images of unfamiliar chimpanzees from another band.42

    Perhaps in no other culture was the biological drama of “purity versus impurity” carried to greater extremes than in traditional Hinduism. It constructed an intersubjective system of castes ranked by their supposed level of purity, with the pure Brahmins at the top and the allegedly impure Dalit (formerly known as untouchables) at the bottom. Professions, tools, and everyday activities have also been classified by their level of purity, and strict rules have forbidden “impure” persons to marry “pure” people, touch them, prepare food for them, or even come near them.

    The modern state of India still struggles with this legacy, which influences almost all aspects of life. For example, fears of impurity created various complications for the aforementioned Clean India Mission, because allegedly “pure” people were reluctant to get involved in “impure” activities such as building, maintaining, and cleaning toilets, or to share public latrines with allegedly “impure” persons.43 On September 25, 2019, two Dalit children—twelve-year-old Roshni Valmiki and her ten-year-old nephew Avinash—were lynched in the Indian village of Bhakhedi for defecating near the house of a family from the higher Yadav caste. They were forced to defecate in public because their houses lacked functioning toilets. A local official later explained that their household—while being among the poorest in the village—was nevertheless excluded from the list of families eligible for government aid to build toilets. The children routinely suffered from other caste-based discrimination, for example being forced to bring separate mats and utensils to school and to sit apart from the other pupils, so as not to “pollute” them.44

    The list of biological dramas that press our emotional buttons includes several additional classics, such as “Who will be alpha?” “Us versus them,” and “Good versus evil.” These dramas, too, feature prominently in the Ramayana, and all of them are well known to wolf packs and chimpanzee bands as well as to human societies. Together, these biological dramas form the backbone of almost all human art and mythology. But art’s dependence on the biological dramas have made it difficult for artists to explain the mechanisms of bureaucracy. The Ramayana is set within the context of large agrarian kingdoms, but it shows little interest in how such kingdoms register property, collect taxes, catalog archives, or finance wars. Sibling rivalry and romantic triangles aren’t a good guide for the dynamics of documents, which have no siblings and no romantic life.

    Storytellers like Franz Kafka, who focused on the often surreal ways that bureaucracy shapes human lives, pioneered new nonbiological plotlines. In Kafka’s Trial, the bank clerk K. is arrested by unidentified officials of an unfathomable agency for an unnamed crime. Despite his best efforts, he never understands what is happening to him or uncovers the aims of the agency that is crushing him. While sometimes taken as an existential or theological reference to the human condition in the universe and to the unfathomability of God, on a more mundane level the story highlights the potentially nightmarish character of bureaucracies, which as an insurance lawyer Kafka knew all too well.

    In bureaucratic societies, the lives of ordinary people are often upended by unidentified officials of an unfathomable agency for incomprehensible reasons. Whereas stories about heroes who confront monsters—from the Ramayana to Spider-Man—repackage the biological dramas of confronting predators and romantic rivals, the unique horror of Kafkaesque stories comes from the unfathomability of the threat. Evolution has primed our minds to understand death by a tiger. Our mind finds it much more difficult to understand death by a document.

    Some portrayals of bureaucracy are satirical. Joseph Heller’s iconic 1961 novel, Catch-22, illustrated the central role bureaucracy plays in war. The ex–private first class Wintergreen in the mail room—who decides which letters to forward—is a more powerful figure than any general.45 The 1980s British sitcoms Yes Minister and Yes, Prime Minister showed the ways that civil servants use arcane regulations, obscure subcommittees, and piles of documents to manipulate and control their political bosses. The 2015 comedy-drama The Big Short (based on a 2010 book by Michael Lewis) explored the bureaucratic roots of the 2007–8 financial crisis. The movie’s arch-villains are not humans but collateralized debt obligations (CDOs), which are financial devices invented by investment bankers and understood by nobody else in the world. These bureaucratic Godzillas slumbered unnoticed in the depths of bank portfolios, until they suddenly emerged in 2007 to wreak havoc on the lives of billions of people by instigating a major financial crisis.

    Artworks like these have had some success in shaping perceptions of how bureaucratic power works, but this is an uphill battle, because since the Stone Age our minds have been primed to focus on biological dramas rather than bureaucratic ones. Most Hollywood and Bollywood blockbusters are not about CDOs. Rather, even in the twenty-first century, most blockbusters are essentially Stone Age stories about the hero who fights the monster to win the girl. Similarly, when depicting the dynamics of political power, TV series like Game of Thrones, The Crown, and Succession focus on the family intrigues of the dynastic court rather than on the bureaucratic labyrinth that sustains—and sometimes curbs—the dynasty’s power.

    LET’S KILL ALL THE LAWYERS

    The difficulty of depicting and understanding bureaucratic realities has had unfortunate results. On the one hand, it leaves people feeling helpless in the face of harmful powers they do not understand, like the hero of Kafka’s Trial. On the other hand, it also leaves people with the impression that bureaucracy is a malign conspiracy, even in cases when it is in fact a benign force providing us with health care, security, and justice.

    In the sixteenth century, Ludovico Ariosto described the allegorical figure of Discord as a woman who walks around in a cloud of “sheaves of summonses and writs, cross-examinations and powers of attorney, and great piles of glosses, counsel’s opinions and precedents—all of which tended to the greater insecurity of impoverished folk. In front and behind her and on either side she was hemmed in by notaries, attorneys and barristers.”46

    In his description of Jack Cade’s Rebellion (1450) in Henry VI, Part 2, Shakespeare has a commoner rebel called Dick the Butcher take the antipathy to bureaucracy to its logical conclusion. Dick has a plan to establish a better social order. “The first thing we do,” advises Dick, “let’s kill all the lawyers.” The rebel leader, Jack Cade, runs with Dick’s proposal in a forceful attack on bureaucracy and in particular on written documents: “Is not this a lamentable thing, that of the skin of an innocent lamb should be made parchment? That parchment, being scribbled o’er, should undo a man? Some say the bee stings: but I say, ’tis the bee’s wax; for I did but seal once to a thing, and I was never mine own man since.” Just then the rebels capture a clerk and accuse him of being able to write and read. After a short interrogation that establishes his “crime,” Cade orders his men, “Hang him with his pen and inkhorn about his neck.”47

    Seventy years prior to Jack Cade’s Rebellion, during the even bigger 1381 Peasants’ Revolt, the rebels focused their ire not only on flesh-and-blood bureaucrats but also on their documents, destroying numerous archives, burning court rolls, charters, and administrative and legal records. In one incident, they made a bonfire of the archives of the University of Cambridge. An old woman named Margery Starr scattered the ashes to the winds while crying, “Away with the learning of the clerks, away with it!” Thomas Walsingham, a monk in St. Albans Abbey who witnessed the destruction of the abbey’s archive firsthand, described how the rebels “set fire to all court rolls and muniments, so that after they had got rid of these records of their ancient service their lords would not be able to claim any right at all against them at some future time.”48 Killing the documents erased the debts.

    Similar attacks on archives characterized numerous other insurgencies throughout history. For example, during the Great Jewish Revolt in 66 CE, one of the first things the rebels did upon capturing Jerusalem was to set fire to the central archive in order to destroy records of debts, thereby wining the support of the populace.49 During the French Revolution in 1789, numerous local and regional archives were destroyed for comparable reasons.50 Many rebels might have been illiterate, but they knew that without the documents the bureaucratic machine couldn’t function.

    I can sympathize with the suspicion of government bureaucracies and of the power of official documents, because they have played an important role in my own family. My maternal grandfather had his life upended by a government census and by the inability to find a crucial document. My grandfather Bruno Luttinger was born in 1913 in Chernivtsi. Today this town is in Ukraine, but in 1913 it was part of the Habsburg Empire. Bruno’s father disappeared in World War I, and he was raised by his mother, Chaya-Pearl. When the war was over, Chernivtsi was annexed to Romania. In the late 1930s, as Romania became a fascist dictatorship, an important plank of its new antisemitic policy was to conduct a Jewish census.

    In 1936 official statistics said that 758,000 Jews lived in Romania, constituting 4.2 percent of the population. The same official statistics said that the total number of refugees from the U.S.S.R., Jews and non-Jews, was about 11,000. In 1937 a new fascist government came to power, headed by Prime Minister Octavian Goga. Goga was a renowned poet as well as a politician, but he quickly graduated from patriotic poetry to fake statistics and oppressive bureaucracy. He and his colleagues ignored the official statistics and claimed that hundreds of thousands of Jewish refugees were flooding into Romania. In several interviews Goga claimed that half a million Jews had entered Romania illegally and that the total number of Jews in the country was 1.5 million. Government organs, far-right statisticians, and popular newspapers regularly cited even higher figures. The Romanian embassy in Paris, for example, claimed there were a million Jewish refugees in Romania. Christian Romanians were gripped by mass hysteria that they would soon be replaced or become a minority in a Jewish-led country.

    Goga’s government stepped in to offer a solution to the imaginary problem invented by its own propaganda. On January 22, 1938, the government issued a law ordering all Jews in Romania to provide documented proof that they were born in Romanian territory and were entitled to Romanian citizenship. Jews who failed to provide proof would lose their citizenship, along with all rights to residence and employment.

    Suddenly Romania’s Jews found themselves in a bureaucratic hell. Many had to travel to their birthplace to look for the relevant documents, only to discover that the municipal archives were destroyed during World War I. Jews born in territories annexed to Romania only after 1918—like Chernivtsi—faced special difficulties, because they lacked Romanian birth certificates and because many other documents about their families were archived in the former Habsburg capitals of Vienna and Budapest instead of in Bucharest. Jews often didn’t even know which documents they were supposed to be looking for, because the census law didn’t specify which documents were considered sufficient “proof.”

    Clerks and archivists gained a new and lucrative source of income as frantic Jews offered to pay large bribes to get their hands on the right document. Even if no bribes were involved, the process was extremely costly: any request for documentation, as well as filing the citizenship request with the authorities, involved paying fees. Finding and filing the right document did not guarantee success. A difference of a single letter between how a name was spelled on the birth certificate and on the citizenship papers was enough for the authorities to revoke the citizenship.

    Many Jews could not clear these bureaucratic hurdles and didn’t even file a citizenship request. Of those who did, only 63 percent got their citizenship approved. Altogether, out of 758,000 Romanian Jews, 367,000 lost their citizenship.51 My grandfather Bruno was among them. When the new census law was passed in Bucharest, Bruno did not think much about it. He was born in Chernivtsi and lived there all his life. The thought that he needed to prove to some bureaucrat that he was not an alien struck him as ridiculous. Moreover, in early 1938 his mother fell ill and died, and Bruno felt he had much bigger things to worry about than chasing documents.

    In December 1938 an official letter arrived from Bucharest canceling Bruno’s citizenship, and as an alien he was promptly fired from his job in a Chernivtsi radio shop. Bruno was now not only alone and jobless but also stateless and without much prospect for alternative employment. Nine months later World War II erupted, and the danger for paperless Jews was mounting. Of the Romanian Jews who lost their citizenship in 1938, the vast majority would be murdered over the next few years by the Romanian fascists and their Nazi allies (Jews who retained their citizenship had a much higher survival rate).52

    My grandfather repeatedly tried to escape the tightening noose, but it was difficult without the right papers. Several times he smuggled himself onto trains and ships, only to be caught and arrested. In 1940 he finally managed to board one of the last ships bound for Palestine before the gates of hell slammed shut. When he arrived in Palestine, he was immediately imprisoned by the British as an illegal immigrant. After two months in prison, the British offered a deal: stay in jail and risk deportation, or enlist in the British army and get Palestinian citizenship. My grandfather grabbed the offer with both hands and from 1941 to 1945 served in the British army in the North African and Italian campaigns. In exchange, he got his papers.

    In our family it became a sacred duty to preserve documents. Bank statements, electricity bills, expired student cards, letters from the municipality—if it had an official-looking stamp on it, it would be filed in one of the many folders in our cupboard. You never knew which of these documents might one day save your life.

    THE MIRACLE DOCUMENT

    Should we love the bureaucratic information network or hate it? Stories like that of my grandfather indicate the dangers inherent in bureaucratic power. Stories like that of the London cholera epidemic indicate its potential benevolence. All powerful information networks can do both good and ill, depending on how they are designed and used. Merely increasing the quantity of information in a network doesn’t guarantee its benevolence, nor make it any easier to find the right balance between truth and order. That is a key historical lesson for the designers and users of the new information networks of the twenty-first century.

    Future information networks, particularly those based on AI, will be different from previous networks in many ways. While in part 1 we are examining how mythology and bureaucracy have been essential for large-scale information networks, in part 2 we will see how AI is taking up the role of both bureaucrats and mythmakers. AI tools know how to find and process data better than flesh-and-blood bureaucrats, and AI is also acquiring the ability to compose stories better than most humans.

    But before we explore the new AI-based information networks of the twenty-first century, and before we examine the threats and promises of AI mythmakers and AI bureaucrats, there is one more thing we need to understand about the long-term history of information networks. We have now seen that information networks don’t maximize truth, but rather seek to find a balance between truth and order. Bureaucracy and mythology are both essential for maintaining order, and both are happy to sacrifice truth for the sake of order. What mechanisms, then, ensure that bureaucracy and mythology don’t lose touch with truth altogether, and what mechanisms enable information networks to identify and correct their own mistakes, even at the price of some disorder?

    The way human information networks have dealt with the problem of errors will be the main subject of the next two chapters. We’ll start by considering the invention of another information technology: the holy book. Holy books like the Bible and the Quran are an information technology that is meant to both include all the vital information society needs and be free from all possibility of error. What happens when an information network believes itself to be utterly incapable of any error? The history of allegedly infallible holy books highlights some of the limitations of all information networks and holds important lessons for the attempt to create infallible AIs in the twenty-first century.

    CHAPTER 4 Errors: The Fantasy of Infallibility

    As Saint Augustine famously said, “To err is human; to persist in error is diabolical.”1 The fallibility of human beings, and the need to correct human errors, have played key roles in every mythology. According to Christian mythology, the whole of history is an attempt to correct Adam and Eve’s original sin. According to Marxist-Leninist thinking, even the working class is likely to be fooled by its oppressors and misidentify its own interests, which is why it requires the leadership of a wise party vanguard. Bureaucracy, too, is constantly on the lookout for errors, from misplaced documents to inefficient procedures. Complex bureaucratic systems usually contain self-disciplinary bodies, and when a major catastrophe occurs—like a military defeat or a financial meltdown—commissions of inquiry are set up to understand what went wrong and make sure the same mistake is not repeated.

    In order to function, self-correcting mechanisms need legitimacy. If humans are prone to error, how can we trust the self-correcting mechanisms to be free from error? To escape this seemingly endless loop, humans have often fantasized about some superhuman mechanism, free from all error, that they can rely upon to identify and correct their own mistakes. Today one might hope that AI could provide such a mechanism, as when in April 2023 Elon Musk announced, “I’m going to start something, which I call TruthGPT or a maximum truth-seeking AI that tries to understand the nature of the universe.”2 We will see in later chapters why this is a dangerous fantasy. In previous eras, such fantasies took a different form—religion.

    In our personal lives, religion can fulfill many different functions, like providing solace or explaining the mysteries of life. But historically, the most important function of religion has been to provide superhuman legitimacy for the social order. Religions like Judaism, Christianity, Islam, and Hinduism propose that their ideas and rules were established by an infallible superhuman authority, and are therefore free from all possibility of error, and should never be questioned or changed by fallible humans.

    TAKING HUMANS OUT OF THE LOOP

    At the heart of every religion lies the fantasy of connecting to a superhuman and infallible intelligence. This is why, as we shall explore in chapter 8, studying the history of religion is highly relevant to present-day debates about AI. In the history of religion, a recurrent problem is how to convince people that a certain dogma indeed originated from an infallible superhuman source. Even if in principle I am eager to submit to the gods’ will, how do I know what the gods really want?

    Throughout history many humans claimed to convey messages from the gods, but the messages often contradicted each other. One person said a god appeared to her in a dream; another person said she was visited by an angel; a third recounted how he met a spirit in a forest—and each preached a different message. The anthropologist Harvey Whitehouse recounts how when he was doing fieldwork among the Baining people of New Britain in the late 1980s, a young man called Tanotka fell sick, and in his feverish delirium began making cryptic statements like “I am Wutka” and “I am a post.” Most of these statements were heard only by Tanotka’s older brother, Baninge, who began telling about them to other people and interpreting them in a creative way. Baninge said that his brother was possessed by an ancestral spirit called Wutka and that he was divinely chosen to be the main support of the community, just as local houses were supported by a central post.

    After Tanotka recovered, he continued to deliver cryptic messages from Wutka, which were interpreted by Baninge in ever more elaborate ways. Baninge also began having dreams of his own, which allegedly revealed additional divine messages. He claimed that the end of the world was imminent, and convinced many of the locals to grant him dictatorial powers so that he could prepare the community for the coming apocalypse. Baninge proceeded to waste almost all the community’s resources on extravagant feasts and rituals. When the apocalypse didn’t materialize and the community almost starved, Baninge’s power collapsed. Though some locals continued to believe that he and Tanotka were divine messengers, many others concluded that the two were charlatans—or perhaps the servants of the Devil.3

    How could people distinguish the true will of the gods from the inventions or imaginations of fallible humans? Unless you had a personal divine revelation, knowing what the gods said meant trusting what fallible humans like Tanotka and Baninge claimed the gods said. But how can you trust these humans, especially if you don’t know them personally? Religion wants to take fallible humans out of the loop and give people access to infallible superhuman laws, but religion repeatedly boiled down to trusting this or that human.

    One way around this problem was to create religious institutions that vetted the purported divine messengers. Already in tribal societies communication with superhuman entities like tribal spirits was often the domain of religious experts. Among the Baining people, specialized spirit mediums known as agungaraga were traditionally responsible for communicating with the spirits and thereby learning the hidden causes of misfortunes ranging from illness to crop failure. Their membership in an established institution made the agungaraga more trustworthy than Tanotka and Baninge, and made their authority more stable and widely acknowledged.4 Among the Kalapalo tribe of Brazil religious rituals were organized by hereditary ritual officers known as the anetaū. In ancient Celtic and Hindu societies similar duties were the preserve of druids and Brahmins.5 As human societies grew and became more complex, so did their religious institutions. Priests and oracles had to train long and hard for the important task of representing the gods, so people no longer needed to trust just any layperson who claimed to have met an angel or to carry a divine message.6 In ancient Greece, for example, if you wanted to know what the gods said, you went to an accredited expert like the Pythia—the high priestess at the temple of Apollo in Delphi.

    But as long as religious institutions like oracular temples were staffed by fallible humans, they too were open to error and corruption. Herodotus recounts that when Athens was ruled by the tyrant Hippias, the pro-democracy faction bribed the Pythia to help them. Whenever any Spartan came to the Pythia to consult the gods on either official or private matters, the Pythia invariably replied that the Spartans must first free Athens from the tyrant. The Spartans, who were Hippias’s allies, eventually submitted to the alleged will of the gods and sent an army to Athens that deposed Hippias in 510 BCE, leading to the establishment of Athenian democracy.7

    If a human prophet could falsify the words of a god, then the key problem of religion wasn’t solved by creating religious institutions like temples and priestly orders. People still needed to trust fallible humans in order to access the supposedly infallible gods. Was it possible to somehow bypass the humans altogether?

    THE INFALLIBLE TECHNOLOGY

    Holy books like the Bible and the Quran are a technology to bypass human fallibility, and religions of the book—like Judaism, Christianity, and Islam—have been built around that technological artifact. To appreciate how this technology is meant to work, we should begin by explaining what a book is and what makes books different from other kinds of written texts. A book is a fixed block of texts—such as chapters, stories, recipes, or epistles—that always go together and have many identical copies. This makes a book something different from oral tales, from bureaucratic documents, and from archives. When telling a story orally, every time we tell the story it might be a little different, and if many people tell the story over a long time, significant variations are bound to creep in. In contrast, all copies of a book are supposed to be identical. As for bureaucratic documents, they tend to be relatively short, and often exist only as a single copy in one archive. If a long document has many copies placed in numerous archives, we would normally call it a book. Finally, a book that contains many texts is also different from an archive, because each archive contains a different collection of texts, whereas all copies of a book contain the same chapters, the same stories, or the same recipes. The book thereby ensures that many people in many times and places can access the same database.

    The book became an important religious technology in the first millennium BCE. After tens of thousands of years in which gods spoke to humans via shamans, priests, prophets, oracles, and other human messengers, religious movements like Judaism began arguing that the gods speak through this novel technology of the book. There is one specific book whose many chapters allegedly contain all the divine words about everything from the creation of the universe to food regulations. Crucially, no priest, prophet, or human institution can forget or change these divine words, because you can always compare what the fallible humans are telling you with what the infallible book records.

    But religions of the book had their own set of problems. Most obviously, who decides what to include in the holy book? The first copy didn’t come down from heaven. It had to be compiled by humans. Still, the faithful hoped that this thorny problem could be solved by a once-and-for-all supreme effort. If we could get together the wisest and most trustworthy humans, and they could all agree on the contents of the holy book, from that moment onward we could excise humans from the loop, and the divine words would forever be safe from human interference.

    Many objections can be raised against this procedure: Who selects the wisest humans? On the basis of what criteria? What if they cannot reach a consensus? What if they later change their minds? Nevertheless, this was the procedure used to compile holy books like the Hebrew Bible.

    THE MAKING OF THE HEBREW BIBLE

    During the first millennium BCE, Jewish prophets, priests, and scholars produced an extensive collection of stories, documents, prophecies, poems, prayers, and chronicles. The Bible as a single holy book didn’t exist in biblical times. King David or the prophet Isaiah never saw a copy of the Bible.

    It is sometimes claimed, erroneously, that the oldest surviving copy of the Bible comes from the Dead Sea Scrolls. These scrolls are a collection of about nine hundred different documents, written mostly in the last two centuries BCE and found in various caves around Qumran, a village near the Dead Sea.8 Most scholars believe they constituted the archive of a Jewish sect that lived nearby.9

    Significantly, none of the scrolls contains a copy of the Bible, and no scroll indicates that the twenty-four books of the Old Testament were considered a single and complete database. Some of the scrolls certainly record texts that are today part of the canonical Bible. For example, nineteen scrolls and fragmentary manuscripts preserve parts of the book of Genesis.10 But many scrolls record texts that were later excluded from the Bible. For example, more than twenty scrolls and fragments preserve parts of the book of Enoch—a book allegedly written by the patriarch Enoch, the great-grandfather of Noah, and containing the history of the angels and demons as well as a prophecy about the coming of the Messiah.11 The Jews of Qumran apparently gave great importance to both Genesis and Enoch, and did not think that Genesis was canonical while Enoch was apocryphal.12 Indeed, to this day some Ethiopian Jewish and Christian sects consider Enoch part of their canon.13

    Even the scrolls that record future canonical texts sometimes differ from the present-day canonical version. For example, the canonical text of Deuteronomy 32:8 says that God divided the nations of the earth according to “the number of the sons of Israel.” The version recorded in the Dead Sea Scrolls has “the number of the sons of God” instead, implying a rather startling notion that God has multiple sons.14 In Deuteronomy 8:6 the canonical text requires the faithful to fear God, whereas the Dead Sea version asks them to love God.15 Some variations are much more substantial than just a single word here or there. The Psalms scrolls contain several entire psalms that are missing from the canonical Bible (most notably Psalms 151, 154, 155).16

    Similarly, the oldest translation of the Bible—the Greek Septuagint—completed between the third and the first centuries BCE, is different in many ways from the later canonical version.17 It includes, for example, the books of Tobit, Judith, Sirach, Maccabees, the Wisdom of Solomon, the Psalms of Solomon, and Psalm 151.18 It also has longer versions of Daniel and Esther.19 Its book of Jeremiah is 15 percent shorter than the canonical version.20 Finally, in Deuteronomy 32:8 most Septuagint manuscripts have either “sons of God” or “angels of God” rather than “sons of Israel.”21

    It took centuries of hairsplitting debates among learned Jewish sages—known as rabbis—to streamline the canonical database and to decide which of the many texts in circulation would get into the Bible as the official word of Jehovah and which would be excluded. By the time of Jesus agreement was probably reached on most of the texts, but even a century later rabbis were still arguing whether the Song of Songs should be part of the canon or not. Some rabbis condemned that text as secular love poetry, while Rabbi Akiva (d. 135 CE) defended it as the divinely inspired creation of King Solomon. Akiva famously said that “the Song of Songs is the Holy of Holies.”22 By the end of the second century CE widespread consensus was apparently reached among Jewish rabbis about which texts were part of the biblical canon and which were not, but debates about this matter, and about the precise wordings, spelling, and pronunciation of each text, were not finally resolved until the Masoretic era (seventh to tenth centuries CE).23

    This process of canonization decided that Genesis was the word of Jehovah, but the book of Enoch, the Life of Adam and Eve, and the Testament of Abraham were human fabrications.24 The Psalms of King David were canonized (minus psalms 151–55), but the Psalms of King Solomon were not. The book of Malachi got the seal of approval; the book of Baruch did not. Chronicles, yes; Maccabees, no.

    Interestingly, some books mentioned in the Bible itself failed to get into the canon. For example, the books of Joshua and Samuel both refer to a very ancient sacred text known as the book of Jasher (Joshua 10:13, 2 Samuel 1:18). The book of Numbers refers to “the Book of the Wars of the Lord” (Numbers 21:14). And when 2 Chronicles surveys the reign of King Solomon, it concludes by saying that “the rest of the acts of Solomon, first and last, are written in the chronicles of Nathan the prophet, and in the prophecy of Ahijah the Shilonite, and in the visions of Iddo the seer” (2 Chronicles 9:29). The books of Iddo, Ahijah, and Nathan, as well as the books of Jasher and the Wars of the Lord, aren’t in the canonical Bible. Apparently, they were not excluded on purpose; they just got lost.25

    After the canon was sealed, most Jews gradually forgot the role of human institutions in the messy process of compiling the Bible. Jewish Orthodoxy maintained that God personally handed down to Moses at Mount Sinai the entire first part of the Bible, the Torah. Many rabbis further argued that God created the Torah at the very dawn of time so that even biblical characters who lived before Moses—like Noah and Adam—read and studied it.26 The other parts of the Bible also came to be seen as a divinely created or divinely inspired text, totally different from ordinary human compilations. Once the holy book was sealed, it was hoped that Jews now had direct access to Jehovah’s exact words, which no fallible human or corrupt institution could erase or alter.

    Anticipating the blockchain idea by two thousand years, Jews began making numerous copies of the holy code, and every Jewish community was supposed to have at least one in its synagogue or its bet midrash (house of study).27 This was meant to achieve two things. First, disseminating many copies of the holy book promised to democratize religion and place strict limits on the power of would-be human autocrats. Whereas the archives of Egyptian pharaohs and Assyrian kings empowered the unfathomable kingly bureaucracy at the expense of the masses, the Jewish holy book seemed to give power to the masses, who could now hold even the most brazen leader accountable to God’s laws.

    Second, and more important, having numerous copies of the same book prevented any meddling with the text. If there were thousands of identical copies in numerous locations, any attempt to change even a single letter in the holy code could easily be exposed as a fraud. With numerous Bibles available in far-flung locations, Jews replaced human despotism with divine sovereignty. The social order was now guaranteed by the infallible technology of the book. Or so it seemed.

    THE INSTITUTION STRIKES BACK

    Even before the process of canonizing the Bible was completed, the biblical project had run into further difficulties. Agreeing on the precise contents of the holy book was not the only problem with this supposedly infallible technology. Another obvious problem concerned copying the text. For the holy book to work its magic, Jews needed to have many copies wherever they lived. With Jewish centers emerging not only in Palestine but also in Mesopotamia and Egypt, and with new Jewish communities extending from central Asia to the Atlantic, how to make sure that copyists working thousands of kilometers apart would not change the holy book either on purpose or by mistake?

    To forestall such problems, the rabbis who canonized the Bible devised painstaking regulations for copying the holy book. For example, a scribe was not allowed to pause at certain critical moments in the copying process. When writing the name of God, the scribe “may not respond even if the king greets him. If he was about to write two or three divine names successively, he may pause between them and respond.”28 Rabbi Yishmael (second century CE) told one copyist, “You are doing Heaven’s work, and if you delete one letter or add one letter—you destroy the entire world.”29 In truth, copying errors crept in without destroying the entire world, and no two ancient Bibles were identical.30

    A second and much bigger problem concerned interpretation. Even when people agree on the sanctity of a book and on its exact wording, they can still interpret the same words in different ways. The Bible says that you should not work on the Sabbath. But it doesn’t clarify what counts as “work.” Is it okay to water your field on the Sabbath? What about watering your flowerpot or herd of goats? Is it okay to read a book on the Sabbath? How about writing a book? How about tearing a piece of paper? The rabbis ruled that reading a book isn’t work, but tearing paper is work, which is why nowadays Orthodox Jews prepare a stack of already ripped toilet paper to use on the Sabbath.

    The holy book also says that you should not cook a young goat in its mother’s milk (Exodus 23:19). Some people interpreted this quite literally: if you slaughter a young goat, don’t cook it in the milk of its own mother. But it’s fine to cook it in the milk of an unrelated goat, or in the milk of a cow. Other people interpreted this prohibition much more broadly to mean that meat and dairy products should never be mixed, so you are not allowed to have a milkshake after fried chicken. As unlikely as this may sound, most rabbis ruled that the second interpretation is the correct one, even though chickens don’t lactate.

    More problems resulted from the fact that even if the technology of the book succeeded in limiting changes to the holy words, the world beyond the book continued to spin, and it was unclear how to relate old rules to new situations. Most biblical texts focused on the lives of Jewish shepherds and farmers in the hill country of Palestine and in the sacred city of Jerusalem. But by the second century CE, most Jews lived elsewhere. A particularly large Jewish community grew in the port of Alexandria, one of the richest metropolises of the Roman Empire. A Jewish shipping magnate living in Alexandria would have found that many of the biblical laws were irrelevant to his life while many of his pressing questions had no clear answers in the holy text. He couldn’t obey the commandments about worshipping in the Jerusalem temple, because not only did he not live near Jerusalem, but the temple didn’t even exist anymore. In contrast, when he contemplated whether it was kosher for him to sail his Rome-bound grain ships on the Sabbath, it turned out that long sea voyages were not considered by the authors of Leviticus and Deuteronomy.31

    Inevitably, the holy book spawned numerous interpretations, which were far more consequential than the book itself. As Jews increasingly argued over the interpretation of the Bible, rabbis gained more power and prestige. Writing down the word of Jehovah was supposed to limit the authority of the old priestly institution, but it gave rise to the authority of a new rabbinical institution. Rabbis became the Jewish technocratic elite, developing their rational and rhetorical skills through years of philosophical debates and legal disputations. The attempt to bypass fallible human institutions by relying on a new information technology backfired, because of the need for a human institution to interpret the holy book.

    When the rabbis eventually reached some consensus about how to interpret the Bible, Jews saw another chance to get rid of the fallible human institution. They imagined that if they wrote the agreed interpretation in a new holy book, and made numerous copies of it, that would eliminate the need for any further human intercession between them and the divine code. So after much back-and-forth about which rabbinical opinions should be included and which should be ignored, a new holy book was canonized in the third century CE: the Mishnah.32

    As the Mishnah became more authoritative than the plain text of the Bible, Jews began to believe that the Mishnah could not possibly have been created by humans. It too must have been inspired by Jehovah, or perhaps even composed by the infallible deity in person. Today many Orthodox Jews firmly believe that the Mishnah was handed to Moses by Jehovah on Mount Sinai, passed orally from generation to generation, until it was written down in the third century CE.33

    Alas, no sooner had the Mishnah been canonized and copied than Jews began arguing about the correct interpretation of the Mishnah. And when a consensus was reached about the interpretation of the Mishnah and canonized in the fifth to sixth centuries as a third holy book—the Talmud—Jews began disagreeing about the interpretation of the Talmud.34

    The dream of bypassing fallible human institutions through the technology of the holy book never materialized. With each iteration, the power of the rabbinical institution only increased. “Trust the infallible book” turned into “trust the humans who interpret the book.” Judaism was shaped by the Talmud far more than by the Bible, and rabbinical arguments about the interpretation of the Talmud became even more important than the Talmud itself.35

    This is inevitable, because the world keeps changing. The Mishnah and Talmud dealt with questions raised by second-century Jewish shipping magnates that had no clear answer in the Bible. Modernity too raised many new questions that have no straightforward answers in the Mishnah and Talmud. For example, when electrical appliances developed in the twentieth century, Jews struggled with numerous unprecedented questions such as whether it is okay to press the electrical buttons of an elevator on the Sabbath?

    The Orthodox answer is no. As noted earlier, the Bible forbids working on the Sabbath, and rabbis argued that pressing an electrical button is “work,” because electricity is akin to fire, and it has long been established that kindling a fire is “work.” Does this mean that elderly Jews living in a Brooklyn high-rise must climb a hundred steps to their apartment in order to avoid working on the Sabbath? Well, Orthodox Jews invented a “Sabbath elevator,” which continually goes up and down buildings, stopping on every floor, without you having to perform any “work” by pressing an electrical button.36 The invention of AI gives another twist to this old story. By relying on facial recognition, an AI can quickly direct the elevator to your floor, without making you desecrate the Sabbath.37

    This profusion of texts and interpretations has, over time, caused a profound change in Judaism. Originally, it was a religion of priests and temples, focused on rituals and sacrifices. In biblical times, the quintessential Jewish scene was a priest in blood-splattered robes sacrificing a lamb on the altar of Jehovah. Over the centuries, however, Judaism became an “information religion,” obsessed with texts and interpretations. From second-century Alexandria to twenty-first-century Brooklyn, the quintessential Jewish scene became a group of rabbis arguing about the interpretation of a text.

    This change was extremely surprising given that almost nowhere in the Bible itself do you find anyone arguing about the interpretation of any text. Such debates were not part of biblical culture itself. For example, when Korah and his followers challenged the right of Moses to lead the people of Israel, and demanded a more equitable division of power, Moses reacted not by entering a learned discussion or by quoting some scriptural passage. Rather, Moses called upon God to perform a miracle, and the moment he finished speaking, the ground split, “and the earth opened its mouth and swallowed them and their households” (Numbers 16:31–32). When Elijah was challenged by 450 prophets of Baal and 400 prophets of Asherah to a public test in front of the people of Israel, he proved the superiority of Jehovah over Baal and Asherah first by miraculously summoning fire from the sky and then by slaughtering the pagan prophets. Nobody read any text, and nobody engaged in any rational debate (1 Kings 18).

    As Judaism replaced sacrifices with texts, it gravitated toward a view of information as the most fundamental building block of reality, anticipating current ideas in physics and computer science. The flood of texts generated by rabbis was increasingly seen as more important, and even more real, than plowing a field, baking a loaf of bread, or sacrificing a lamb in a temple. After the temple in Jerusalem was destroyed by the Romans and all temple rituals ceased, rabbis nevertheless devoted enormous efforts to writing texts about the proper way to conduct temple rituals and then arguing about the correct interpretation of these texts. Centuries after the temple was no more, the amount of information concerning these virtual rituals only continued to increase. The rabbis weren’t oblivious to this seeming gap between text and reality. Rather, they maintained that writing texts about the rituals and arguing about these texts were far more important than actually performing the rituals.38
    This eventually led the rabbis to believe that the entire universe was an information sphere—a realm composed of words and running on the alphabetical code of the Hebrew letters. They further maintained that this informational universe was created so that Jews could read texts and argue about their interpretation, and that if Jews ever stop reading these texts and arguing about them, the universe will cease to exist.39 In everyday life, this view meant that for the rabbis words in texts were often more important than facts in the world. Or more accurately, which words appeared in sacred texts became some of the most important facts about the world, shaping the lives of individuals and entire communities.

    THE SPLIT BIBLE

    The above description of the canonization of the Bible, and the creation of the Mishnah and Talmud, ignores one very important fact. The process of canonizing the word of Jehovah created not one chain of texts but several competing chains. There were people who believed in Jehovah, but not in the rabbis. Most of these dissenters did accept the first block in the biblical chain—which they called the Old Testament. But already before the rabbis sealed this block, the dissenters rejected the authority of the entire rabbinical institution, which led them to subsequently reject the Mishnah and Talmud, too. These dissenters were the Christians.

    When Christianity emerged in the first century CE, it was not a unified religion, but rather a variety of Jewish movements that didn’t agree on much, except that they all regarded Jesus Christ—rather than the rabbinical institution—as the ultimate authority on Jehovah’s words.40 Christians accepted the divinity of texts like Genesis, Samuel, and Isaiah, but they argued that the rabbis misunderstood these texts, and only Jesus and his disciples knew the true meaning of passages like “the Lord himself will give you a sign: the almah will conceive and give birth to a son, and will call him Immanuel” (Isaiah 7:14). The rabbis said almah meant “young woman,” Immanuel meant “God with us” (in Hebrew immanu means “with us” and el means “God”), and the entire passage was interpreted as a divine promise to help the Jewish people in their struggle against oppressive foreign empires. In contrast, the Christians argued that almah meant “virgin,” that Immanuel meant that God will literally be born among humans, and that this was a prophecy about the divine Jesus being born on earth to the Virgin Mary.41

    However, by rejecting the rabbinical institution while simultaneously accepting the possibility of new divine revelations, the Christians opened the door to chaos. In the first century CE, and even more so in the second and third centuries CE, different Christians came up with radically new interpretations for books like Genesis and Isaiah, as well as with a plethora of new messages from God. Since they rejected the authority of the rabbis, since Jesus was dead and couldn’t adjudicate between them, and since a unified Christian church didn’t yet exist, who could decide which of all these interpretations and messages was divinely inspired?

    Thus, it was not just John who described the end of the world in his Apocalypse (the book of Revelation). We have many additional apocalypses from that era, for example the Apocalypse of Peter, the Apocalypse of James, and even the Apocalypse of Abraham.42 As for the life and teachings of Jesus, in addition to the four Gospels of Matthew, Mark, Luke, and John, early Christians had the Gospel of Peter, the Gospel of Mary, the Gospel of Truth, the Gospel of the Savior, and numerous others.43 Similarly, aside from the Acts of the Apostles, there were at least a dozen other Acts such as the Acts of Peter and the Acts of Andrew.44 Letters were even more prolific. Most present-day Christian Bibles contain fourteen epistles attributed to Paul, three attributed to John, two to Peter, and one each to James and Jude. Ancient Christians were familiar not only with additional Pauline letters (such as the Epistle to the Laodiceans) but with numerous other epistles supposedly written by other disciples and saints.45

    As Christians composed more and more gospels, epistles, prophecies, parables, prayers, and other texts, it became harder to know which ones to pay attention to. Christians needed a curation institution. That’s how the New Testament was created. At roughly the same time that debates among Jewish rabbis were producing the Mishnah and Talmud, debates between Christian priests, bishops, and theologians were producing the New Testament.
    In a letter from 367 CE, Bishop Athanasius of Alexandria recommended twenty-seven texts that faithful Christians should read—a rather eclectic collection of stories, letters, and prophecies written by different people in different times and places. Athanasius recommended the Apocalypse of John, but not that of Peter or Abraham. He approved of Paul’s Epistle to the Galatians, but not of Paul’s Epistle to the Laodiceans. He endorsed the Gospels of Matthew, Mark, Luke, and John, but rejected the Gospel of Thomas and the Gospel of Truth.46

    A generation later, in the Councils of Hippo (393) and Carthage (397), gatherings of bishops and theologians formally canonized this list of recommendations, which became known as the New Testament.47 When Christians talk about “the Bible,” they mean the Old Testament together with the New Testament. In contrast, Judaism never accepted the New Testament, and when Jews talk about “the Bible,” they mean only the Old Testament, which is supplemented by the Mishnah and Talmud. Interestingly, Hebrew to this day lacks a word to describe the Christian holy book, which contains both the Old Testament and the New Testament. Jewish thought sees them as two utterly unrelated books and simply refuses to acknowledge that there might be a single book encompassing both, even though it is probably the most common book in the world.

    It is crucial to note that the people who created the New Testament weren’t the authors of the twenty-seven texts it contains; they were the curators. Due to the paucity of evidence from the period, we do not know if Athanasius’s list of texts reflected his personal judgment, or whether it originated with earlier Christian thinkers. What we do know is that prior to the Councils of Hippo and Carthage there were rival recommendation lists for Christians. The earliest such list was codified by Marcion of Sinope in the middle of the second century. The Marcion canon included only the Gospel of Luke and ten epistles of Paul. Even these eleven texts were somewhat different from the versions later canonized at Hippo and Carthage. Either Marcion was unaware of other texts like the Gospel of John and the book of Revelation, or he did not think highly of them.48

    The church father Saint John Chrysostom, a contemporary of Bishop Athanasius’s, recommended only twenty-two books, leaving 2 Peter, 2 John, 3 John, Jude, and Revelation out of his list.49 Some Christian churches in the Middle East to this day follow Chrysostom’s shorter list.50 The Armenian Church took about a thousand years to make up its mind about the book of Revelation, while it included in its canon the Third Epistle to the Corinthians, which other churches—like the Catholic and Protestant churches—consider a forgery.51 The Ethiopian Church endorsed Athanasius’s list in full, but added four other books: Sinodos, the book of Clement, the book of the Covenant, and the Didascalia.52 Other lists endorsed the two epistles of Clement, the visions of the Shepherd of Hermas, the Epistle of Barnabas, the Apocalypse of Peter, and various other texts that didn’t make it into Athanasius’s selection.53

    We do not know the precise reasons why specific texts were endorsed or rejected by different churches, church councils, and church fathers. But the consequences were far-reaching. While churches made decisions about texts, the texts themselves shaped the churches. As a key example, consider the role of women in the church. Some early Christian leaders saw women as intellectually and ethically inferior to men, and argued that women should be restricted to subordinate roles in society and in the Christian community. These views were reflected in texts like the First Epistle to Timothy.

    In one of its passages, this text, attributed to Saint Paul, says, “A woman should learn in quietness and full submission. I do not permit a woman to teach or to assume authority over a man; she must be quiet. For Adam was formed first, then Eve. And Adam was not the one deceived; it was the woman who was deceived and became a sinner. But women will be saved through childbearing—if they continue in faith, love and holiness with propriety” (2:11–15). But modern scholars as well as some ancient Christian leaders like Marcion have considered this letter a second-century forgery, ascribed to Saint Paul but actually written by someone else.54

    In opposition to 1 Timothy, during the second, third, and fourth centuries CE there were important Christian texts that saw women as equal to men, and even authorized women to occupy leadership roles, like the Gospel of Mary55 or the Acts of Paul and Thecla. The latter text was written at about the same time as 1 Timothy, and for a time was extremely popular.56 It narrates the adventures of Saint Paul and his female disciple Thecla, describing how Thecla not only performed numerous miracles but also baptized herself with her own hands and often preached. For centuries, Thecla was one of the most revered Christian saints and was seen as evidence that women could baptize, preach, and lead Christian communities.57

    Before the Councils of Hippo and Carthage, it wasn’t clear that 1 Timothy was more authoritative than the Acts of Paul and Thecla. By choosing to include 1 Timothy in their recommendation list while rejecting the Acts of Paul and Thecla, the assembled bishops and theologians shaped Christian attitudes toward women down to the present day. We can only hypothesize what Christianity might have looked like if the New Testament had included the Acts of Paul and Thecla instead of 1 Timothy. Perhaps in addition to church fathers like Athanasius, the church would have had mothers, while misogyny would have been labeled a dangerous heresy perverting Jesus’s message of universal love.

    Just as most Jews forgot that rabbis curated the Old Testament, so most Christians forgot that church councils curated the New Testament, and came to view it simply as the infallible word of God. But while the holy book was seen as the ultimate source of authority, the process of curating the book placed real power in the hands of the curating institution. In Judaism the canonization of the Old Testament and Mishnah went hand in hand with creating the institution of the rabbinate. In Christianity the canonization of the New Testament went hand in hand with the creation of a unified Christian church. Christians trusted church officials—like Bishop Athanasius—because of what they read in the New Testament, but they had faith in the New Testament because this is what the bishops told them to read. The attempt to invest all authority in an infallible superhuman technology led to the rise of a new and extremely powerful human institution—the church.

    THE ECHO CHAMBER

    As time passed, problems of interpretation increasingly tilted the balance of power between the holy book and the church in favor of the institution. Just as the need to interpret Jewish holy books empowered the rabbinate, so the need to interpret Christian holy books empowered the church. The same saying of Jesus or the same Pauline epistle could be understood in various ways, and it was the institution that decided which reading was correct. The institution in turn was repeatedly shaken by struggles over the authority to interpret the holy book, which resulted in institutional schisms such as that between the Western Catholic Church and the Eastern Orthodox Church.

    All Christians read the Sermon on the Mount in the Gospel of Matthew and learned that we should love our enemies, that we should turn the other cheek, and that the meek shall inherit the earth. But what did that actually mean? Christians could read this as a call to reject all use of military force,58 or to reject all social hierarchies.59 The Catholic Church, however, viewed such pacifists and egalitarian readings as heresies. It interpreted Jesus’s words in a way that allowed the church to become the richest landowner in Europe, to launch violent crusades, and to establish murderous inquisitions. Catholic theology accepted that Jesus told us to love our enemies, but explained that burning heretics was an act of love, because it deterred additional people from adopting heretical views, thereby saving them from the flames of hell. The French inquisitor Jacques Fournier wrote in the early fourteenth century an entire treatise on the Sermon on the Mount that explained how the text provided justification for hunting heretics.60 Fournier’s view was not a fringe notion. He went on to become Pope Benedict XII (1334–42).

    Fournier’s task as inquisitor, and later as pope, was to ensure that the Catholic Church’s interpretation of the holy book would prevail. In this, Fournier and his fellow churchmen used not only violent coercion but also their control of book production. Prior to the advent of letterpress printing in Europe in the fifteenth century, making many copies of a book was a prohibitive enterprise for all but the most wealthy individuals and institutions. The Catholic Church used its power and wealth to disseminate copies of its favored texts while prohibiting the production and spread of what it considered erroneous ones.

    Of course, the church couldn’t prevent the occasional freethinker from formulating heretical ideas. But because it controlled key nodes in the medieval information network—such as copying workshops, archives, and libraries—it could prevent such a heretic from making and distributing a hundred copies of her book. To get an idea of the difficulties faced by a heretical author seeking to disseminate her views, consider that when Leofric was made bishop of Exeter in 1050, he found just five books in the cathedral’s library. He immediately established a copying workshop in the cathedral, but in the twenty-two years before he died in 1072, his copyists produced only sixty-six additional volumes.61 In the thirteenth century the library of Oxford University consisted of a few books kept in a chest under St. Mary’s Church. In 1424 the library of Cambridge University boasted a grand total of only 122 books.62 An Oxford University decree from 1409 stipulated that “all recent texts” studied at the university must be unanimously approved “by a panel of twelve theologians appointed by the archbishop.”63

    The church sought to lock society inside an echo chamber, allowing the spread only of those books that supported it, and people trusted the church because almost all the books supported it. Even illiterate laypersons who didn’t read books were still awed by recitations of these precious texts or expositions on their content. That’s how the belief in a supposedly infallible superhuman technology like the New Testament led to the rise of an extremely powerful but fallible human institution like the Catholic Church that crushed all opposing views as “erroneous” while allowing no one to question its own views.

    Catholic information experts such as Jacques Fournier spent their days reading Thomas Aquinas’s interpretation of Augustine’s interpretation of Saint Paul’s epistles and composing additional interpretations of their own. All those interrelated texts didn’t represent reality; they created a new information sphere even bigger and more powerful than that created by the Jewish rabbis. Medieval Europeans were cocooned inside that information sphere, their daily activities, thoughts, and emotions shaped by texts about texts about texts.

    PRINT, SCIENCE, AND WITCHES

    The attempt to bypass human fallibility by investing authority in an infallible text never succeeded. If anyone thought this was due to some unique flaw of the Jewish rabbis or the Catholic priests, the Protestant Reformation repeated the experiment again and again—always getting the same results. Luther, Calvin, and their successors argued that there was no need for any fallible human institution to interpose itself between ordinary people and the holy book. Christians should abandon all the parasitical bureaucracies that grew around the Bible and reconnect to the original word of God. But the word of God never interpreted itself, which is why not only Lutherans and Calvinists but numerous other Protestant sects eventually established their own church institutions and invested them with the authority to interpret the text and persecute heretics.64

    If infallible texts merely lead to the rise of fallible and oppressive churches, how then to deal with the problem of human error? The naive view of information posits that the problem can be solved by creating the opposite of a church—namely, a free market of information. The naive view expects that if all restrictions on the free flow of information are removed, error will inevitably be exposed and displaced by truth. As noted in the prologue, this is wishful thinking. Let’s delve a little deeper to understand why. As a test case, consider what happened during one of the most celebrated epochs in the history of information networks: the European print revolution. The introduction of the printing press to Europe in the mid-fifteenth century made it possible to mass-produce texts relatively quickly, cheaply, and secretly, even if the Catholic Church disapproved of them. It is estimated that in the forty-six years from 1454 to 1500 more than twelve million volumes were printed in Europe. By contrast, in the previous thousand years only about eleven million volumes were hand copied.65 By 1600, all kinds of fringe people—heretics, revolutionaries, proto-scientists—could disseminate their writings much more rapidly, widely, and easily than ever before.

    In the history of information networks, the print revolution of early modern Europe is usually hailed as a moment of triumph, breaking the stranglehold that the Catholic Church had maintained over the European information network. Allegedly, by allowing people to exchange information much more freely than before, it led to the scientific revolution. There is a grain of truth in this. Without print, it would certainly have been much harder for Copernicus, Galileo, and their colleagues to develop and spread their ideas.

    But print wasn’t the root cause of the scientific revolution. The only thing the printing press did was to faithfully reproduce texts. The machine had no ability to come up with any new ideas of its own. Those who connect print to science assume that the mere act of producing and spreading more information inevitably leads people to the truth. In fact, print allowed the rapid spread not only of scientific facts but also of religious fantasies, fake news, and conspiracy theories. Perhaps the most notorious example of the latter was the belief in a worldwide conspiracy of satanic witches, which led to the witch-hunt craze that engulfed early modern Europe.66

    Belief in magic and in witches has characterized human societies in all continents and eras, but different societies imagined witches and reacted to them in very different ways. Some societies believed that witches controlled spirits, talked with the dead, and predicted the future; others imagined that witches stole cattle and located hidden treasure. In one community witches were thought to cause disease, blight cornfields, and concoct love potions, while in another community they supposedly entered houses at night, performed household chores, and stole milk. In some locales witches were thought to be mostly female, while in others they were generally imagined to be male. Some cultures were terrified of witches and persecuted them violently, but others tolerated or even honored them. Finally, there were societies in every continent and era that gave witches little importance.67

    For most of the Middle Ages, most European societies belonged to the latter category and were not overly concerned about witches. The medieval Catholic Church didn’t see them as a major threat to humanity, and some churchmen actively discouraged witch-hunting. According to the influential tenth-century text Canon Episcopi—which defined medieval church doctrine on the matter—witchcraft was mostly illusion, and belief in the reality of witchcraft was an unchristian superstition.68 The European witch-hunt craze was a modern rather than a medieval phenomenon.

    In the 1420s and 1430s churchmen and scholars operating mainly in the Alps region took elements from Christian religion, local folklore, and Greco-Roman heritage and amalgamated them into a new theory of witchcraft.69 Previously, even when witches were dreaded, they were considered a strictly local problem—isolated criminals who, inspired by personal malevolence, used magical means to commit theft and murder. In contrast, the new scholarly model argued that witches were a far more formidable threat to society. There was allegedly a global conspiracy of witches, led by Satan, which constituted an institutionalized anti-Christian religion. Its purpose was nothing less than the complete destruction of the social order and of humankind. Witches were said to gather at night in huge demonic assemblies, where they worshipped Satan, killed children, ate human flesh, engaged in orgies, and cast spells that caused storms, epidemics, and other catastrophes.

    Inspired by such ideas, the first mass witch hunts and witch trials were led by local churchmen and noblemen in the Valais region of the western Alps between 1428 and 1436, leading to the execution of more than two hundred supposed male and female witches. From this Alpine heartland, rumors about the global witch conspiracy trickled to other parts of Europe, but the belief was still far from mainstream, the Catholic establishment did not embrace it, and other regions didn’t launch large-scale witch hunts like those in the Valais.

    In 1485, a Dominican friar and inquisitor called Heinrich Kramer embarked on a witch-hunting expedition in another Alpine region—the Austrian Tyrol. Kramer was a fervent convert to the new belief in a global satanic conspiracy.70 He also seems to have been mentally unhinged, and his accusations of satanic witchcraft were colored by rabid misogyny and odd sexual fixations. Local church authorities, led by the bishop of Brixen, were skeptical of Kramer’s accusations and alarmed by his activities. They stopped his inquisition, released the suspects he arrested, and expelled him from the area.71

    Kramer hit back through the printing press. Within two years of his banishment, he compiled and published the Malleus MaleficarumThe Hammer of the Witches. This was a do-it-yourself guidebook to exposing and killing witches in which Kramer described in detail the worldwide conspiracy and the means by which honest Christians could uncover and foil the witches. In particular, he recommended the use of horrific methods of torture in order to extract confessions from people suspected of witchcraft, and was adamant that the only punishment for the guilty was execution.

    Kramer organized and codified previous ideas and stories and added many details from his own fertile and hate-filled imagination. Relying on ancient Christian misogynist teachings like those of 1 Timothy, Kramer sexualized witchcraft. He argued that witches were typically female, because witchcraft originated in lust, which was supposedly stronger in women. He warned readers that sex could cause a pious woman to become a witch and her husband to become bewitched.72

    An entire chapter of the Hammer is dedicated to the ability of witches to steal men’s penises. Kramer discusses at length whether the witches are really able to take away the male member from its owner, or whether they are only able to create an illusion of castration in men’s minds. Kramer asks, “What is to be thought of those witches who in this way sometimes collect male organs in great numbers, as many as twenty or thirty members together, and put them in a bird’s nest, or shut them up in a box, where they move themselves like living members, and eat oats and corn, as has been seen by many?” He then relates a story he heard from one man: “When he had lost his member, he approached a known witch to ask her to restore it to him. She told the afflicted man to climb a certain tree, and that he might take which he liked out of the nest in which there were several members. And when he tried to take a big one, the witch said: You must not take that one; adding, because it belongs to a parish priest.”73 Numerous notions about witches that are still popular today—for instance, that witches are predominantly women, that witches engage in wild sexual activities, and that witches kill and mutilate children—were given their canonical form by Kramer’s book.

    Like the bishop of Brixen, other churchmen were initially skeptical of Kramer’s wild ideas, and there was some resistance to the book among church experts.74 But The Hammer of the Witches became one of the biggest best sellers of early modern Europe. It catered to people’s deepest fears, as well as to their lurid interest in hearing about orgies, cannibalism, child murders, and satanic conspiracies. The book had gone through eight editions by 1500, another five by 1520, and sixteen more by 1670, with many vernacular translations.75 It became the definitive work on witchcraft and witch-hunting and inspired a host of imitations and elaborations. As Kramer’s fame grew, his work was embraced by the church experts. Kramer was appointed papal representative and made inquisitor of Bohemia and Moravia in 1500. Even today his ideas continue to shape the world, and many current theories about a global satanic conspiracy—like QAnon—draw upon and perpetuate his fantasies.

    While it would be an exaggeration to argue that the invention of print caused the European witch-hunt craze, the printing press played a pivotal role in the rapid dissemination of the belief in a global satanic conspiracy. As Kramer’s ideas gained popularity, printing presses produced not only many additional copies of The Hammer of the Witches and copycat books but also a torrent of cheap one-page pamphlets, whose sensational texts were often accompanied by illustrations depicting people attacked by demons or witches burned at the stake.76 These publications also gave fantastic statistics about the size of the witches’ conspiracy. For example, the Burgundian judge and witch-hunter Henri Boguet (1550–1619) speculated that there were 300,000 witches in France alone and 1.8 million in all of Europe.77 Such claims fueled mass hysteria, which in the sixteenth and seventeenth centuries led to the torture and execution of between 40,000 and 50,000 innocent people who were accused of witchcraft.78 The victims included individuals from all walks of life and ages, including children as young as five.79

    People began denouncing one another for witchcraft on the flimsiest evidence, often to avenge personal slights or to gain economic and political advantage. Once an official investigation began, the accused were often doomed. The inquisitorial methods recommended by The Hammer of the Witches were truly diabolical. If the accused confessed to being a witch, they were executed and their property divided between the accuser, the executioner, and the inquisitors. If the accused refused to confess, this was taken as evidence of their demonic obstinacy, and they were then tortured in horrendous ways, their fingers broken, their flesh cut with hot pincers, their bodies stretched to the breaking point or submerged in boiling water. Sooner or later they could stand it no longer and confessed—and were duly executed.80

    To take one example, in 1600 authorities in Munich arrested on suspicion of witchcraft the Pappenheimer family—father Paulus, mother Anna, two grown sons, and a ten-year-old boy, Hansel. The inquisitors began by torturing little Hansel. The protocol of the interrogation, which can still be read in the Munich archives, has a note from one of the interrogators regarding the ten-year-old boy: “May be tortured to the limit so that he incriminates his mother.”81 After being tortured in unspeakable ways, the Pappenheimers confessed to numerous crimes including killing 265 people by sorcery and causing fourteen destructive storms. They were all condemned to death.

    The bodies of each of the four adult family members were torn with red-hot pincers, the men’s limbs were broken on the wheel, the father was impaled on a stake, the mother’s breasts were cut off, and all were then burned alive. The ten-year-old Hansel was forced to watch all this. Four months later, he too was executed.82 The witch-hunters were extremely thorough in their search for the devil and his accomplices. But if the witch-hunters really wanted to find diabolical evil, they just had to look in the mirror.

    THE SPANISH INQUISITION TO THE RESCUE

    Witch hunts seldom ended by killing just one person or one family. Since the underlying model postulated a global conspiracy, people accused of witchcraft were tortured to name accomplices. This was then used as evidence to imprison, torture, and execute others. If any officials, scholars, or churchmen voiced objections to these absurd methods, this could be seen as proof that they too must be witches—which led to their own arrest and torture.

    For example, in 1453—when belief in the satanic conspiracy was just beginning to take hold—a French doctor of theology called Guillaume Edelin bravely sought to quash it before it spread. He repeated the claims of the medieval Canon Episcopi that witchcraft was an illusion and that witches couldn’t really fly at night to meet Satan and make a pact with him. Edelin was then himself accused of being a witch and arrested. Under torture he confessed that he personally had flown on a broomstick and signed a pact with the devil and that it was Satan who commissioned him to preach that witchcraft was an illusion. His judges were lenient with him; he was spared execution and got life imprisonment instead.83

    The witch hunts illustrate the dark side of creating an information sphere. As with rabbinical discussions of the Talmud and scholastic discussions of Christian scriptures, the witch hunts were fueled by an expanding ocean of information that instead of representing reality created a new reality. Witches were not an objective reality. Nobody in early modern Europe had sex with Satan or was capable of flying on broomsticks and creating hailstorms. But witches became an intersubjective reality. Like money, witches were made real by exchanging information about witches.

    An entire witch-hunting bureaucracy dedicated itself to such exchanges. Theologians, lawyers, inquisitors, and the owners of printing presses made a living by collecting and producing information about witches, cataloging different species of witches, investigating how witches behaved, and recommending how they could be exposed and defeated. Professional witch-hunters offered their services to governments and municipalities, charging large sums of money. Archives were filled by detailed reports of witch-hunting expeditions, protocols of witch trials, and lengthy confessions extracted from the alleged witches.

    Expert witch-hunters used all that data to refine their theories further. Like scholars arguing about the correct interpretation of scripture, the witch-hunters debated the correct interpretation of The Hammer of the Witches and other influential books. The witch-hunting bureaucracy did what bureaucracy often does: it invented the intersubjective category of “witches” and imposed it on reality. It even printed forms, with standard accusations and confessions of witchcraft and blank spaces left for dates, names, and the signature of the accused. All that information produced a lot of order and power; it was a means for certain people to gain authority and for society as a whole to discipline its members. But it produced zero truth and zero wisdom.

    As the witch-hunting bureaucracy generated more and more information, it became harder to dismiss all that information as pure fantasy. Could it be that the entire silo of witch-hunting data did not contain a single grain of truth in it? What about all the books written by learned churchmen? What about all the protocols of trials conducted by esteemed judges? What about the tens of thousands of documented confessions?

    The new intersubjective reality was so convincing that even some people accused of witchcraft came to believe that they were indeed part of a worldwide satanic conspiracy. If everybody said so, it must be true. As discussed in chapter 2, humans are susceptible to adopting fake memories. At least some early modern Europeans dreamed or fantasized about summoning devils, having sex with Satan, and practicing witchcraft, and when accused of being witches, they confused their dreams and fantasies with reality.84

    Consequently, even as the witch hunts reached their ghastly crescendo in the early seventeenth century, and many people suspected that something was clearly wrong, it was difficult to reject the whole thing as pure fantasy. One of the worst witch-hunting episodes in early modern Europe occurred in the towns of Bamberg and Würzburg in southern Germany in the late 1620s. In Bamberg, a city of fewer than 12,000 at the time,85 up to 900 innocent people were executed from 1625 to 1631.86 In Würzburg another 1,200 people were tortured and killed, out of a population of around 11,500.87 In August 1629, the chancellor of the prince-bishop of Würzburg wrote a letter to a friend about the ongoing witch hunt, in which he confessed his doubts about the matter. The letter is worth quoting at length:

    As to the affair of the witches … it has started up afresh, and no words can do justice to it. Ah, the woe and the misery of it—there are still four hundred in the city, high and low, of every rank and sex, nay, even clerics, so strongly accused that they may be arrested at any hour.… The Prince-Bishop has over forty students who are soon to be pastors; among them thirteen or fourteen are said to be witches. A few days ago a Dean was arrested; two others who were summoned have fled. The notary of our Church consistory, a very learned man, was yesterday arrested and put to the torture. In a word, a third part of the city is surely involved. The richest, most attractive, most prominent, of the clergy are already executed. A week ago a maiden of nineteen was executed, of whom it is everywhere said that she was the fairest in the whole city, and was held by everybody a girl of singular modesty and purity. She will be followed by seven or eight others of the best and most attractive persons.… And thus many are put to death for renouncing God and being at the witch-dances, against whom nobody has ever else spoken a word.

    To conclude this wretched matter, there are children of three and four years, to the number of three hundred, who are said to have had intercourse with the Devil. I have seen put to death children of seven, promising students of ten, twelve, fourteen, and fifteen.… [B]ut I cannot and must not write more of this misery.

    The chancellor then added this interesting postscript to the letter:

    Though there are many wonderful and terrible things happening, it is beyond doubt that, at a place called the Fraw-Rengberg, the Devil in person, with eight thousand of his followers, held an assembly and celebrated mass before them all, administering to his audience (that is, the witches) turnip-rinds and parings in place of the Holy Eucharist. There took place not only foul but most horrible and hideous blasphemies, whereof I shudder to write.88

    Even after expressing his horror at the insanity of the witch hunt in Würzburg, the chancellor nevertheless expressed his firm belief in the satanic conspiracy of witches. He didn’t witness any witchcraft firsthand, but so much information about witches was circulating that it was difficult for him to doubt all of it. Witch hunts were a catastrophe caused by the spread of toxic information. They are a prime example of a problem that was created by information, and was made worse by more information.

    This was a conclusion reached not just by modern scholars but also by some perceptive observers at the time. Alonso de Salazar Frías, a Spanish inquisitor, made a thorough investigation of witch hunts and witch trials in the early seventeenth century. He concluded that “I have not found one single proof nor even the slightest indication from which to infer that one act of witchcraft has actually taken place,” and that “there were neither witches nor bewitched until they were talked and written about.”89 Salazar Frías well understood the meaning of intersubjective realities and correctly identified the entire witch-hunting industry as an intersubjective information sphere.

    The history of the early modern European witch craze demonstrates that releasing barriers to the flow of information doesn’t necessarily lead to the discovery and spread of truth. It can just as easily lead to the spread of lies and fantasies and to the creation of toxic information spheres. More specifically, a completely free market of ideas may incentivize the dissemination of outrage and sensationalism at the expense of truth. It is not difficult to understand why. Printers and booksellers made a lot more money from the lurid tales of The Hammer of the Witches than they did from the dull mathematics of Copernicus’s On the Revolutions of the Heavenly Spheres. The latter was one of the founding texts of the modern scientific tradition. It is credited with earth-shattering discoveries that displaced our planet from the center of the universe and thereby initiated the Copernican revolution. But when it was first published in 1543, its initial print run of four hundred failed to sell out, and it took until 1566 for a second edition to be published in a similar-sized print run. The third edition did not appear until 1617. As Arthur Koestler quipped, it was an all-time worst seller.90 What really got the scientific revolution going was neither the printing press nor a completely free market of information, but rather a novel approach to the problem of human fallibility.

    THE DISCOVERY OF IGNORANCE

    The history of print and witch-hunting indicates that an unregulated information market doesn’t necessarily lead people to identify and correct their errors, because it may well prioritize outrage over truth. For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favor of the facts. However, as the history of the Catholic Church indicates, such institutions might use their curation power to quash any criticism of themselves, labeling all alternative views erroneous and preventing the institution’s own errors from being exposed and corrected. Is it possible to establish better curation institutions that use their power to further the pursuit of truth rather than to accumulate more power for themselves?

    Early modern Europe saw the foundation of exactly such curation institutions, and it was these institutions—rather than the printing press or specific books like On the Revolutions of the Heavenly Spheres—that constituted the bedrock of the scientific revolution. These key curation institutions were not the universities. Many of the most important leaders of the scientific revolution were not university professors. Nicolaus Copernicus, Robert Boyle, Tycho Brahe, and René Descartes, for example, held no academic positions. Nor did Spinoza, Leibniz, Locke, Berkeley, Voltaire, Diderot, or Rousseau.

    The curation institutions that played a central role in the scientific revolution connected scholars and researchers both in and out of universities, forging an information network that spanned the whole of Europe and eventually the world. For the scientific revolution to gather pace, scientists had to trust information published by colleagues in distant lands. This kind of trust in the work of people whom one had never met was evident in scientific associations like the Royal Society of London for Improving Natural Knowledge, founded in 1660, and the French Académie des Sciences (1666); scientific journals like the Philosophical Transactions of the Royal Society (1665) and the Histoire de l’Académie Royale des Sciences (1699); and scientific publishers like the architects of the Encyclopédie (1751–72). These institutions curated information on the basis of empirical evidence, bringing attention to the discoveries of Copernicus rather than to the fantasies of Kramer. When a paper was submitted to the Philosophical Transactions of the Royal Society, the lead question the editors asked was not, “How many people would pay to read this?” but, “What proof is there that it is true?”

    At first, these new institutions seemed as flimsy as cobwebs, lacking the power necessary to reshape human society. Unlike the witch-hunting experts, the editors of the Philosophical Transactions of the Royal Society could not torture and execute anyone. And unlike the Catholic Church, the Académie des Sciences did not command huge territories and budgets. But scientific institutions did accrue influence thanks to a very original claim to trust. A church typically told people to trust it because it possessed the absolute truth, in the form of an infallible holy book. A scientific institution, in contrast, gained authority because it had strong self-correcting mechanisms that exposed and rectified the errors of the institution itself. It was these self-correcting mechanisms, not the technology of printing, that were the engine of the scientific revolution.

    In other words, the scientific revolution was launched by the discovery of ignorance.91 Religions of the book assumed that they had access to an infallible source of knowledge. The Christians had the Bible, the Muslims had the Quran, the Hindus had the Vedas, and the Buddhists had the Tipitaka. Scientific culture has no comparable holy book, nor does it claim that any of its heroes are infallible prophets, saints, or geniuses. The scientific project starts by rejecting the fantasy of infallibility and proceeding to construct an information network that takes error to be inescapable. Sure, there is much talk about the genius of Copernicus, Darwin, and Einstein, but none of them is considered faultless. They all made mistakes, and even the most celebrated scientific tracts are sure to contain errors and lacunae.

    Since even geniuses suffer from confirmation bias, you cannot trust them to correct their own errors. Science is a team effort, relying on institutional collaboration rather than on individual scientists or, say, a single infallible book. Of course, institutions too are prone to error. Scientific institutions are nevertheless different from religious institutions, inasmuch as they reward skepticism and innovation rather than conformity. Scientific institutions are also different from conspiracy theories, inasmuch as they reward self-skepticism. Conspiracy theorists tend to be extremely skeptical regarding the existing consensus, but when it comes to their own beliefs, they lose all their skepticism and fall prey to confirmation bias.92 The trademark of science is not merely skepticism but self-skepticism, and at the heart of every scientific institution we find a strong self-correcting mechanism. Scientific institutions do reach a broad consensus about the accuracy of certain theories—such as quantum mechanics or the theory of evolution—but only because these theories have managed to survive intense efforts to disprove them, launched not only by outsiders but by members of the institution itself.

    SELF-CORRECTING MECHANISMS

    As an information technology, the self-correcting mechanism is the polar opposite of the holy book. The holy book is supposed to be infallible. The self-correcting mechanism embraces fallibility. By self-correcting, I refer to mechanisms that an entity uses to correct itself. A teacher correcting a student’s essay is not a self-correcting mechanism; the student isn’t correcting their own essay. A judge sending a criminal to prison is not a self-correcting mechanism; the criminal isn’t exposing their own crime. When the Allies defeated and dismantled the Nazi regime, this was not a self-correcting mechanism; left to its own devices, Germany would not have denazified itself. But when a scientific journal publishes a paper correcting a mistake that appeared in a previous paper, that’s an example of an institution self-correcting its own errors.

    Self-correcting mechanisms are ubiquitous in nature. Children learn how to walk thanks to them. You make a wrong move, you fall, you learn from your mistake, you try doing it a little differently. Sure, sometimes parents and teachers give the child a hand or offer advice, but a child who relies entirely on such external corrections or keeps excusing mistakes instead of learning from them will find it very difficult to walk. Indeed, even as adults, every time we walk, our body engages in an intricate process of self-correction. As our body navigates through space, internal feedback loops between brain, limbs, and sensory organs keep our legs and hands in their proper place and our balance just right.93

    Many other bodily processes require constant self-correction. Our blood pressure, temperature, sugar levels, and numerous other parameters must be given some leeway to change in accordance with varying circumstances, but they should never go above or below certain critical thresholds. Our blood pressure needs to increase when we run, to decrease when we sleep, but must always keep within certain bounds.94 Our body manages this delicate biochemical dance through a host of homeostatic self-correcting mechanisms. If our blood pressure goes too high, the self-correcting mechanisms lower it. If our blood pressure is dangerously low, the self-correcting mechanisms raise it. If the self-correcting mechanisms go out of order, we could die.95

    Institutions, too, die without self-correcting mechanisms. These mechanisms start with the realization that humans are fallible and corruptible. But instead of despairing of humans and looking for a way to bypass them, the institution actively seeks its own errors and corrects them. All institutions that manage to endure beyond a handful of years possess such mechanisms, but institutions differ greatly in the strength and visibility of their self-correcting mechanisms.

    For example, the Catholic Church is an institution with relatively weak self-correcting mechanisms. Since it claims infallibility, it cannot admit institutional mistakes. It is occasionally willing to acknowledge that some of its members have erred or sinned, but the institution itself allegedly remains perfect. For example, in the Second Vatican Council in 1964, the Catholic Church acknowledged that “Christ summons the Church to continual reformation as she sojourns here on earth. The Church is always in need of this, insofar as she is an institution of men here on earth. Thus if, in various times and circumstances, there have been deficiencies in moral conduct or in church discipline, or even in the way that church teaching has been formulated—to be carefully distinguished from the deposit of faith itself—these can and should be set right at the opportune moment.”96

    This admission sounds promising, but the devil is in the details, specifically in the refusal to countenance the possibility of any deficiency in “the deposit of faith.” In Catholic dogma “the deposit of faith” refers to the body of revealed truth that the church has received from scriptures and from its sacred tradition of interpreting scripture. The Catholic Church acknowledges that priests are fallible humans who can sin and can also make mistakes in the way they formulate church teachings. However, the holy book itself can never err. What does this imply about the entire church as an institution that combines fallible humans with an infallible text?

    According to Catholic dogma, biblical infallibility and divine guidance trump human corruption, so even though individual members of the church may err and sin, the Catholic Church as an institution is never wrong. Allegedly, never in history did God allow the majority of church leaders to make a serious mistake in their interpretation of the holy book. This principle is common to many religions. Jewish Orthodoxy accepted the possibility that the rabbis who composed the Mishnah and Talmud might have erred in personal matters, but when they came to decree religious doctrine, God ensured that they would make no mistake.97 In Islam there is an analogous principle known as Ijma. According to one important Hadith, Muhammad said that “Allah will ensure my community will never agree on error.”98

    In Catholicism, alleged institutional perfection is enshrined most clearly in the doctrine of papal infallibility, which says that while in personal matters popes may err, in their institutional role they are infallible.99 For example, Pope Alexander VI erred in breaking his vow of celibacy, having a mistress and siring several children, yet when defining official church teachings on matters of ethics or theology, he was incapable of mistake.

    In line with these views, the Catholic Church has always employed a self-correcting mechanism to supervise its human members in their personal affairs, but it never developed a mechanism for amending the Bible or for amending its “deposit of faith.” This attitude is manifest in the few formal apologies the Catholic Church issued for its past conduct. In recent decades, several popes apologized for the mistreatment of Jews, women, non-Catholic Christians, and indigenous cultures, as well as for more specific events such as the sacking of Constantinople in 1204 and the abuse of children in Catholic schools. It is commendable that the Catholic Church made such apologies at all; religious institutions rarely do so. Nevertheless, in all these cases, the popes were careful to shift responsibility away from scriptures and from the church as an institution. Instead, the blame was laid on the shoulders of individual churchmen who misinterpreted scriptures and deviated from the true teachings of the church.

    For example, in March 2000, Pope John Paul II conducted a special ceremony in which he asked forgiveness for a long list of historical crimes against Jews, heretics, women, and indigenous people. He apologized “for the use of violence that some have committed in the service of truth.” This terminology implied that the violence was the fault of “some” misguided individuals who didn’t understand the truth taught by the church. The pope didn’t accept the possibility that perhaps these individuals understood exactly what the church was teaching and that these teachings just were not the truth.100

    Similarly, when Pope Francis apologized in 2022 for the abuses against indigenous people in Canada’s church-run residential schools, he said, “I ask for forgiveness, in particular, for the ways in which many members of the church … cooperated … in projects of cultural destruction and forced assimilation.”101 Note his careful shifting of responsibility. The fault lay with “many members of the church,” not with the church and its teachings. As if it were never official church doctrine to destroy indigenous cultures and forcefully convert people.

    In fact, it wasn’t a few wayward priests who launched the Crusades, imposed laws that discriminated against Jews and women, or orchestrated the systematic annihilation of indigenous religions throughout the world.102 The writings of many revered church fathers, and the official decrees of many popes and church councils, are full of passages disparaging “pagan” and “heretical” religions, calling for their destruction, discriminating against their members, and legitimizing the use of violence to convert people to Christianity.103 For example, in 1452 Pope Nicholas V issued the Dum Diversas bull, addressed to King Afonso V of Portugal and other Catholic monarchs. The bull said, “We grant you by these present documents, with our Apostolic Authority, full and free permission to invade, search out, capture, and subjugate the Saracens and pagans and any other unbelievers and enemies of Christ wherever they may be, as well as their kingdoms, duchies, counties, principalities, and other property … and to reduce their persons into perpetual servitude.”104 This official proclamation, repeated numerous times by subsequent popes, laid the theological basis for European imperialism and the destruction of native cultures across the world. Of course, though the church doesn’t acknowledge it officially, over time it has changed its institutional structures, its core teachings, and its interpretation of scripture. The Catholic Church of today is far less antisemitic and misogynist than it was in medieval and early modern times. Pope Francis is far more tolerant of indigenous cultures than Pope Nicholas V. There is an institutional self-correcting mechanism at work here, which reacts both to external pressures and to internal soul-searching. But what characterizes self-correcting in institutions like the Catholic Church is that even when it happens, it is denied rather than celebrated. The first rule of changing church teachings is that you never admit changing church teachings.

    You would never hear a pope announcing to the world, “Our experts have just discovered a really big error in the Bible. We’ll soon issue an updated edition.” Instead, when asked about the church’s more generous attitude to Jews or women, popes imply that this was always what the church really taught, even if some individual churchmen previously failed to understand the message correctly. Denying the existence of self-correction doesn’t entirely stop it from happening, but it does weaken and slow it. Because the correction of past mistakes is not acknowledged, let alone celebrated, when the faithful encounter another serious problem in the institution and its teachings, they are paralyzed by fear of changing something that is supposedly eternal and infallible. They cannot benefit from the example of previous changes.

    For instance, when Catholics like Pope Francis himself are now reconsidering the church’s teachings on homosexuality,105 they find it difficult to simply acknowledge past mistakes and change the teachings. If eventually a future pope would issue an apology for the mistreatment of LGBTQ people, the way to do it would be to again shift the blame to the shoulders of some overzealous individuals who misunderstood the gospel. To maintain its religious authority the Catholic Church has had no choice but to deny the existence of institutional self-correction. For the church fell into the infallibility trap. Once it based its religious authority on a claim to infallibility, any public admission of institutional error—even on relatively minor issues—could completely destroy its authority.

    THE DSM AND THE BIBLE

    In contrast to the Catholic Church, the scientific institutions that emerged in early modern Europe have been built around strong self-correcting mechanisms. Scientific institutions maintain that even if most scientists in a particular period believe something to be true, it may yet turn out to be inaccurate or incomplete. In the nineteenth century most physicists accepted Newtonian physics as a comprehensive account of the universe, but in the twentieth century the theory of relativity and quantum mechanics exposed the inaccuracies and limitations of Newton’s model.106 The most celebrated moments in the history of science are precisely those moments when accepted wisdom is overturned and new theories are born.

    Crucially, scientific institutions are willing to admit their institutional responsibility for major mistakes and crimes. For example, present-day universities routinely give courses, and professional journals routinely publish articles, that expose the institutional racism and sexism that characterized the scientific study of subjects like biology, anthropology, and history in the nineteenth and much of the twentieth centuries. Research on individual test cases such as the Tuskegee Syphilis Study, and on governmental policies ranging from the White Australia policy to the Holocaust, have repeatedly and extensively studied how flawed biological, anthropological, and historical theories developed in leading scientific institutions were used to justify and facilitate discrimination, imperialism, and even genocide. These crimes and errors are not blamed on a few misguided scholars. They are seen as an institutional failure of entire academic disciplines.107

    The willingness to admit major institutional errors contributes to the relatively fast pace at which science is developing. When the available evidence justifies it, dominant theories are often discarded within a few generations, to be replaced by new theories. What students of biology, anthropology, and history learn at university in the early twenty-first century is very different from what they learned there a century previously.

    Psychiatry offers numerous similar examples for strong self-correcting mechanisms. On the shelf of most psychiatrists you can find the DSM—the Diagnostic and Statistical Manual of Mental Disorders. It is occasionally nicknamed the psychiatrists’ bible. But there is a crucial difference between the DSM and the Bible. First published in 1952, the DSM is revised every decade or two, with the fifth edition appearing in 2013. Over the years, the definition of many disorders has changed, new ones have been added, while others have been deleted. Homosexuality, for example, was listed in 1952 as a sociopathic personality disturbance, but removed from the DSM in 1974. It took just twenty-two years to correct this error in the DSM. That’s not a holy book. That’s a scientific text.

    Today the discipline of psychiatry doesn’t try to reinterpret the 1952 definition of homosexuality in a more benign spirit. Rather, it views the 1952 definition as a downright error. More important, the error is not attributed to the shortcomings of a few homophobic professors. Rather, it is acknowledged to be the result of deep institutional biases in the discipline of psychiatry.108 Confessing the past institutional errors of their discipline makes psychiatrists today more careful not to commit new such errors, as evidenced in the heated debate regarding transgender people and people on the autistic spectrum. Of course, no matter how careful they are, psychiatrists are still likely to make institutional mistakes. But they are also likely to acknowledge and correct them.109

    PUBLISH OR PERISH

    What makes scientific self-correcting mechanisms particularly strong is that scientific institutions are not just willing to admit institutional error and ignorance; they are actively seeking to expose them. This is evident in the institutions’ incentive structure. In religious institutions, members are incentivized to conform to existing doctrine and be suspicious of novelty. You become a rabbi, imam, or priest by professing doctrinal loyalty, and you can advance up the ranks to become pope, chief rabbi, or grand ayatollah without criticizing your predecessors or advancing any radical new notions. Indeed, many of the most powerful and admired religious leaders of recent times—such as Pope Benedict XVI, Chief Rabbi of Israel David Lau, and Ayatollah Khamenei of Iran—have won fame and supporters by strict resistance to new ideas and trends like feminism.110

    In science it works the other way around. Hiring and promotions in scientific institutions are based on the principle of “publish or perish,” and to publish in prestigious journals, you must expose some mistake in existing theories or discover something your predecessors and teachers didn’t know. Nobody wins a Nobel Prize for faithfully repeating what previous scholars said and opposing every new scientific theory.

    Of course, just as religion has room for self-correcting, so science has ample room for conformism, too. Science is an institutional enterprise, and scientists rely on the institution for almost everything they know. For example, how do I know what medieval and early modern Europeans thought about witchcraft? I have not visited all the relevant archives myself, nor have I read all the relevant primary sources. In fact, I am incapable of reading many of these sources directly, because I do not know all the necessary languages, nor am I skilled in deciphering medieval and early modern handwriting. Instead, I have relied on books and articles published by other scholars, such as Ronald Hutton’s book The Witch: A History of Fear, which was published by Yale University Press in 2017.

    I haven’t met Ronald Hutton, who is a professor of history at the University of Bristol, nor do I personally know the Bristol officials who hired him or the Yale editorial team who published his book. I nevertheless trust what I read in Hutton’s book, because I understand how institutions like the University of Bristol and Yale University Press operate. Their self-correcting mechanisms have two crucial features: First, the self-correcting mechanisms are built into the core of the institutions rather than being a peripheral add-on. Second, these institutions publicly celebrate self-correcting instead of denying it. It is of course possible that some of the information I gained from Hutton’s book may be incorrect, or I myself may misinterpret it. But experts on the history of witchcraft who have read Hutton’s book and who might be reading the present book will hopefully spot any such errors and expose them.

    Populist critics of scientific institutions may counter that, in fact, these institutions use their power to stifle unorthodox views and launch their own witch hunts against dissenters. It is certainly true that if a scholar opposes the current orthodox view of their discipline, it might sometimes have negative consequences: articles rejected, research grants denied, nasty ad hominem attacks, and in rare cases even getting fired from their job.111 I do not wish to belittle the suffering such things cause, but it is still a far cry from being physically tortured and burned at the stake.

    Consider, for example, the story of the chemist Dan Shechtman. In April 1982, while observing through an electron microscope, Shechtman saw something that all contemporary theories in chemistry claimed simply could not exist: the atoms in a mixed sample of aluminum and manganese were crystallized in a pattern with a five-fold rotational symmetry. At the time, scientists knew of various possible symmetrical structures in solid crystals, but five-fold symmetry was considered against the very laws of nature. Shechtman’s discovery of what came to be called quasicrystals sounded so outlandish that it was difficult to find a peer-reviewed journal willing to publish it. It didn’t help that Shechtman was at the time a junior scientist. He didn’t even have his own laboratory; he was working in someone’s else facility. But the editors of the journal Physical Review Letters, after reviewing the evidence, eventually published Shechtman’s article in 1984.112 And then, as he describes it, “all hell broke loose.”

    Shechtman’s claims were dismissed by most of his colleagues, and he was blamed for mismanaging his experiments. The head of his laboratory also turned on Shechtman. In a dramatic gesture, he placed a chemistry textbook on Shechtman’s desk and told him, “Danny, please read this book and you will understand that what you are saying cannot be.” Shechtman boldly replied that he saw the quasicrystals in the microscope—not in the book. As a result, he was kicked out of the lab. Worse was to come. Linus Pauling, a two-time Nobel laureate and one of the most eminent scientists of the twentieth century, led a brutal personal attack on Shechtman. In a conference attended by hundreds of scientists, Pauling proclaimed, “Danny Shechtman is talking nonsense, there are no quasicrystals, just quasi-scientists.”

    But Shechtman was not imprisoned or killed. He got a place in another lab. The evidence he presented turned out to be more convincing than the existing chemistry textbooks and the views of Linus Pauling. Several colleagues repeated Shechtman’s experiments and replicated his findings. A mere ten years after Shechtman saw the quasicrystals through his microscope, the International Union of Crystallography—the leading scientific association in the field—altered its definition of what a crystal is. Chemistry textbooks were changed accordingly, and an entire new scientific field emerged—the study of quasicrystals. In 2011, Shechtman was awarded the Nobel Prize in Chemistry for his discovery.113 The Nobel Committee said that “his discovery was extremely controversial [but] eventually forced scientists to reconsider their conception of the very nature of matter.”114

    Shechtman’s story is hardly exceptional. The annals of science are full of similar cases. Before the theory of relativity and quantum mechanics became the cornerstones of twentieth-century physics, they initially provoked bitter controversies, including personal assaults by the old guard on the proponents of the new theories. Similarly, when Georg Cantor developed in the late nineteenth century his theory of infinite numbers, which became the basis for much of twentieth-century mathematics, he was personally attacked by some of the leading mathematicians of his day like Henri Poincaré and Leopold Kronecker. Populists are right to think that scientists suffer from the same human biases as everyone else. However, thanks to institutional self-correcting mechanisms these biases can be overcome. If enough empirical evidence is provided, it often takes just a few decades for an unorthodox theory to upend established wisdom and become the new consensus.

    As we shall see in the next chapter, there were times and places where scientific self-correcting mechanisms ceased functioning and academic dissent could lead to physical torture, imprisonment, and death. In the Soviet Union, for example, questioning official dogma on any matter—economics, genetics, or history—could lead not only to dismissal but even to a couple of years in the gulag or an executioner’s bullet.115 A famous case involved the bogus theories of the agronomist Trofim Lysenko. He rejected mainstream genetics and the theory of evolution by natural selection and advanced his own pet theory, which said that “re-education” could change the traits of plants and animals, and even transform one species into another. Lysenkoism greatly appealed to Stalin, who had ideological and political reasons for believing in the almost limitless potential of “re-education.” Thousands of scientists who opposed Lysenko and continued to uphold the theory of evolution by natural selection were dismissed from their jobs, and some were imprisoned or executed. Nikolai Vavilov, a botanist and geneticist who was Lysenko’s former mentor turned critic, was tried in July 1941 along with the botanist Leonid Govorov, the geneticist Georgii Karpechenko, and the agronomist Aleksandr Bondarenko. The latter three were shot, while Vavilov died in a camp in Saratov in 1943.116 Under pressure from the dictator, the Lenin All-Union Academy of Agricultural Sciences eventually announced in August 1948 that henceforth Soviet institutions would teach Lysenkoism as the only correct theory.117

    But for precisely this reason, the Lenin All-Union Academy of Agricultural Sciences ceased being a scientific institution, and Soviet dogma on genetics was an ideology rather than a science. An institution can call itself by whatever name it wants, but if it lacks a strong self-correcting mechanism, it is not a scientific institution.

    THE LIMITS OF SELF-CORRECTION

    Does all this mean that in self-correcting mechanisms we have found the magic bullet that protects human information networks from error and bias? Unfortunately, things are far more complicated. There is a reason why institutions like the Catholic Church and the Soviet Communist Party eschewed strong self-correcting mechanisms. While such mechanisms are vital for the pursuit of truth, they are costly in terms of maintaining order. Strong self-correcting mechanisms tend to create doubts, disagreements, conflicts, and rifts and to undermine the myths that hold the social order together.

    Of course, order by itself isn’t necessarily good. For example, the social order of early modern Europe endorsed, among other things, not only witch hunts but also the exploitation of millions of peasants by a handful of aristocrats, the systematic mistreatment of women, and widespread discrimination against Jews, Muslims, and other minorities. But even when the social order is highly oppressive, undermining it doesn’t necessarily lead to a better place. It could just lead to chaos and worse oppression. The history of information networks has always involved maintaining a balance between truth and order. Just as sacrificing truth for the sake of order comes with a cost, so does sacrificing order for truth.

    Scientific institutions have been able to afford their strong self-correcting mechanisms because they leave the difficult job of preserving the social order to other institutions. If a thief breaks into a chemistry lab, or a psychiatrist receives death threats, they don’t complain to a peer-reviewed journal; they call the police. Is it possible, then, to maintain strong self-correcting mechanisms in institutions other than academic disciplines? In particular, can such mechanisms exist in institutions like police forces, armies, political parties, and governments that are charged with maintaining the social order?

    We’ll explore this question in the next chapter, which focuses on the political aspects of information flows and examines the long-term history of democracies and dictatorships. As we shall see, democracies believe that it is possible to maintain strong self-correcting mechanisms even in politics. Dictatorships disavow such mechanisms. Thus, at the height of the Cold War, newspapers and universities in the democratic United States openly exposed and criticized American war crimes in Vietnam. Newspapers and universities in the totalitarian Soviet Union were also happy to criticize American crimes, but they remained silent about Soviet crimes in Afghanistan and elsewhere. Soviet silence was scientifically unjustifiable, but it made political sense. American self-flagellation about the Vietnam War continues even today to divide the American public and to undermine America’s reputation throughout the world, whereas Soviet and Russian silence about the Afghanistan War has helped dim its memory and limit its reputational costs.

    Only after understanding the politics of information in historical systems like ancient Athens, the Roman Empire, the United States, and the Soviet Union will we be ready to explore the revolutionary implications of the rise of AI. For one of the biggest questions about AI is whether it will favor or undermine democratic self-correcting mechanisms.

    CHAPTER 5 Decisions: A Brief History of Democracy and Totalitarianism

    Democracy and dictatorship are typically discussed as contrasting political and ethical systems. This chapter seeks to shift the terms of the discussion, by surveying the history of democracy and dictatorship as contrasting types of information networks. It examines how information in democracies flows differently than in dictatorial systems and how inventing new information technologies helps different kinds of regimes flourish.

    Dictatorial information networks are highly centralized.1 This means two things. First, the center enjoys unlimited authority, hence information tends to flow to the central hub, where the most important decisions are made. In the Roman Empire all roads led to Rome, in Nazi Germany information flowed to Berlin, and in the Soviet Union it streamed to Moscow. Sometimes the central government attempts to concentrate all information in its hands and to dictate all decisions by itself, controlling the totality of people’s lives. This totalizing form of dictatorship, practiced by the likes of Hitler and Stalin, is known as totalitarianism. As we shall see, technical difficulties often prevent dictators from becoming totalitarian. The Roman emperor Nero, for example, didn’t have the technology necessary to micromanage the lives of millions of peasants in remote provincial villages. In many dictatorial regimes considerable autonomy is therefore left to individuals, corporations, and communities. However, the dictators always retain the authority to intervene in people’s lives. In Nero’s Rome freedom was not an ideal but a by-product of the government’s inability to exert totalitarian control.

    The second characteristic of dictatorial networks is that they assume the center is infallible. They therefore dislike any challenge to the center’s decisions. Soviet propaganda depicted Stalin as an infallible genius, and Roman propaganda treated emperors as divine beings. Even when Stalin or Nero made a patently disastrous decision, there were no robust self-correcting mechanisms in the Soviet Union or the Roman Empire that could expose the mistake and push for a better course of action.

    In theory, a highly centralized information network could try to maintain strong self-correcting mechanisms, like independent courts and elected legislative bodies. But if they functioned well, these would challenge the central authority and thereby decentralize the information network. Dictators always see such independent power hubs as threats and seek to neutralize them. This is what happened to the Roman Senate, whose power was whittled away by successive Caesars until it became little more than a rubber stamp for imperial whims.2 The same fate befell the Soviet judicial system, which never dared resist the will of the Communist Party. Stalinist show trials, as their name indicates, were theater with preordained results.3

    To summarize, a dictatorship is a centralized information network, lacking strong self-correcting mechanisms. A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms. When we look at a democratic information network, we do see a central hub. The government is the most important executive power in a democracy, and government agencies therefore gather and store vast quantities of information. But there are many additional information channels that connect lots of independent nodes. Legislative bodies, political parties, courts, the press, corporations, local communities, NGOs, and individual citizens communicate freely and directly with one another so that most information never passes through any government agency and many important decisions are made elsewhere. Individuals choose for themselves where to live, where to work, and whom to marry. Corporations make their own choices about where to open a branch, how much to invest in certain projects, and how much to charge for goods and services. Communities decide for themselves about organizing charities, sporting events, and religious festivals. Autonomy is not a consequence of the government’s ineffectiveness; it is the democratic ideal.

    Even if it possesses the technology necessary to micromanage people’s lives, a democratic government leaves as much room as possible for people to make their own choices. A common misconception is that in a democracy everything is decided by majority vote. In fact, in a democracy as little as possible is decided centrally, and only the relatively few decisions that must be made centrally should reflect the will of the majority. In a democracy, if 99 percent of people want to dress in a particular way and worship a particular god, the remaining 1 percent should still be free to dress and worship differently.

    Of course, if the central government doesn’t intervene at all in people’s lives, and doesn’t provide them with basic services like security, it isn’t a democracy; it is anarchy. In all democracies the center raises taxes and maintains an army, and in most modern democracies it also provides at least some level of health care, education, and welfare. But any intervention in people’s lives demands an explanation. In the absence of a compelling reason, a democratic government should leave people to their own devices.

    Another crucial characteristic of democracies is that they assume everyone is fallible. Therefore, while democracies give the center the authority to make some vital decisions, they also maintain strong mechanisms that can challenge the central authority. To paraphrase President James Madison, since humans are fallible, a government is necessary, but since government too is fallible, it needs mechanisms to expose and correct its errors, such as holding regular elections, protecting the freedom of the press, and separating the executive, legislative, and judicial branches of government.

    Consequently, while a dictatorship is about one central information hub dictating everything, a democracy is an ongoing conversation between diverse information nodes. The nodes often influence each other, but in most matters they are not obliged to reach a consensus. Individuals, corporations, and communities can continue to think and behave in different ways. There are, of course, cases when everyone must behave the same, and diversity cannot be tolerated. For example, when in 2002–3 Americans disagreed about whether to invade Iraq, everyone ultimately had to abide by a single decision. It was unacceptable that some Americans would maintain a private peace with Saddam Hussein while others declared war. Whether good or bad, the decision to invade Iraq committed every American citizen. So also when initiating national infrastructure projects or defining criminal offenses. No country can function well if every person is allowed to lay a separate rail network or to have their own definition of murder.

    In order to make decisions on such collective matters, a countrywide public conversation must first be held, following which the people’s representatives—elected in free and fair elections—make a choice. But even after that choice has been made, it should remain open to reexamination and correction. While the network cannot change its previous choices, it can elect a different government next time.

    MAJORITY DICTATORSHIP

    The definition of democracy as a distributed information network with strong self-correcting mechanisms stands in sharp contrast to a common misconception that equates democracy only with elections. Elections are a central part of the democratic tool kit, but they are not democracy. In the absence of additional self-correcting mechanisms, elections can easily be rigged. Even if the elections are completely free and fair, by itself this too doesn’t guarantee democracy. For democracy is not the same thing as majority dictatorship.

    Suppose that in a free and fair election 51 percent of voters choose a government that subsequently sends 1 percent of voters to be exterminated in death camps, because they belong to some hated religious minority. Is this democratic? Clearly it is not. The problem isn’t that genocide demands a special majority of more than 51 percent. It’s not that if the government gets the backing of 60 percent, 75 percent, or even 99 percent of voters, then its death camps finally become democratic. A democracy is not a system in which a majority of any size can decide to exterminate unpopular minorities; it is a system in which there are clear limits on the power of the center.

    Suppose 51 percent of voters choose a government that then takes away the voting rights of the other 49 percent of voters, or perhaps of just 1 percent of them. Is that democratic? Again the answer is no, and it has nothing to do with the numbers. Disenfranchising political rivals dismantles one of the vital self-correcting mechanisms of democratic networks. Elections are a mechanism for the network to say, “We made a mistake; let’s try something else.” But if the center can disenfranchise people at will, that self-correcting mechanism is neutered.

    These two examples may sound outlandish, but they are unfortunately within the realm of the possible. Hitler began sending Jews and communists to concentration camps within months of rising to power through democratic elections, and in the United States numerous democratically elected governments have disenfranchised African Americans, Native Americans, and other oppressed populations. Of course, most assaults on democracy are more subtle. The careers of strongmen like Vladimir Putin, Viktor Orbán, Recep Tayyip Erdoğan, Rodrigo Duterte, Jair Bolsonaro, and Benjamin Netanyahu demonstrate how a leader who uses democracy to rise to power can then use his power to undermine democracy. As Erdoğan once put it, “Democracy is like a tram. You ride it until you arrive at your destination, then you step off.”4

    The most common method strongmen use to undermine democracy is to attack its self-correcting mechanisms one by one, often beginning with the courts and the media. The typical strongman either deprives courts of their powers or packs them with his loyalists and seeks to close all independent media outlets while building his own omnipresent propaganda machine.5

    Once the courts are no longer able to check the government’s power by legal means, and once the media obediently parrots the government line, all other institutions or persons who dare oppose the government can be smeared and persecuted as traitors, criminals, or foreign agents. Academic institutions, municipalities, NGOs, and private businesses are either dismantled or brought under government control. At that stage, the government can also rig the elections at will, for example by jailing popular opposition leaders, preventing opposition parties from participating in the elections, gerrymandering election districts, or disenfranchising voters. Appeals against these antidemocratic measures are dismissed by the government’s handpicked judges. Journalists and academics who criticize these measures are fired. The remaining media outlets, academic institutions, and judicial authorities all praise these measures as necessary steps to protect the nation and its allegedly democratic system from traitors and foreign agents. The strongmen don’t usually take the final step of abolishing the elections outright. Instead, they keep them as a ritual that serves to provide legitimacy and maintain a democratic facade, as happens, for example, in Putin’s Russia.

    Supporters of strongmen often don’t see this process as antidemocratic. They are genuinely baffled when told that electoral victory doesn’t grant them unlimited power. Instead, they see any check on the power of an elected government as undemocratic. However, democracy doesn’t mean majority rule; rather, it means freedom and equality for all. Democracy is a system that guarantees everyone certain liberties, which even the majority cannot take away.

    Nobody disputes that in a democracy the representatives of the majority are entitled to form the government and to advance their preferred policies in myriad fields. If the majority wants war, the country goes to war. If the majority wants peace, the country makes peace. If the majority wants to raise taxes, taxes are raised. If the majority wants to lower taxes, taxes are lowered. Major decisions about foreign affairs, defense, education, taxation, and numerous other policies are all in the hands of the majority.

    But in a democracy, there are two baskets of rights that are protected from the majority’s grasp. One contains human rights. Even if 99 percent of the population wants to exterminate the remaining 1 percent, in a democracy this is forbidden, because it violates the most basic human right—the right to life. The basket of human rights contains many additional rights, such as the right to work, the right to privacy, freedom of movement, and freedom of religion. These rights enshrine the decentralized nature of democracy, making sure that as long as people don’t harm anyone, they can live their lives as they see fit.

    The second crucial basket of rights contains civil rights. These are the basic rules of the democratic game, which enshrine its self-correcting mechanisms. An obvious example is the right to vote. If the majority were permitted to disenfranchise the minority, then democracy would be over after a single election. Other civil rights include freedom of the press, academic freedom, and freedom of assembly, which enable independent media outlets, universities, and opposition movements to challenge the government. These are the key rights that strongmen seek to violate. While sometimes it is necessary to make changes to a country’s self-correcting mechanisms—for example, by expanding the franchise, regulating the media, or reforming the judicial system—such changes should be made only on the basis of a broad consensus including both majority and minority groups. If a small majority could unilaterally change civil rights, it could easily rig elections and get rid of all other checks on its power.

    An important thing to note about both human rights and civil rights is that they don’t just limit the power of the central government; they also impose on it many active duties. It is not enough for a democratic government to abstain from infringing on human and civil rights. It must take actions to ensure them. For example, the right to life imposes on a democratic government the duty to protect citizens from criminal violence. If a government doesn’t kill anyone, but also makes no effort to protect citizens from murder, this is anarchy rather than democracy.

    THE PEOPLE VERSUS THE TRUTH

    Of course, in every democracy, there are lengthy discussions concerning the exact limits of human and civil rights. Even the right to life has limits. There are democratic countries like the United States that impose the death penalty, thereby denying some criminals the right to life. And every country allows itself the prerogative to declare war, thereby sending people to kill and be killed. So where exactly does the right to life end? There are also complicated and ongoing discussions concerning the list of rights that should be included in the two baskets. Who determined that freedom of religion is a basic human right? Should internet access be defined as a civil right? And what about animal rights? Or the rights of AI?

    We cannot resolve these matters here. Both human and civil rights are intersubjective conventions that humans invent rather than discover, and they are determined by historical contingencies rather than universal reason. Different democracies can adopt somewhat different lists of rights. At least from the viewpoint of information flows, what defines a system as “democratic” is only that its center doesn’t have unlimited authority and that the system possesses robust mechanisms to correct the center’s mistakes. Democratic networks assume that everyone is fallible, and that includes even the winners of elections and the majority of voters.

    It is particularly crucial to remember that elections are not a method for discovering truth. Rather, they are a method for maintaining order by adjudicating between people’s conflicting desires. Elections establish what the majority of people desire, rather than what the truth is. And people often desire the truth to be other than what it is. Democratic networks therefore maintain some self-correcting mechanisms to protect the truth even from the will of the majority.

    For example, during the 2002–3 debate over whether to invade Iraq in the wake of the September 11 attacks, the Bush administration claimed that Saddam Hussein was developing weapons of mass destruction and that the Iraqi people were eager to establish an American-style democracy and would welcome the Americans as liberators. These arguments carried the day. In October 2002 the elected representatives of the American people in Congress voted overwhelmingly to authorize the invasion. The resolution passed with a 296 to 133 majority (69 percent) in the House of Representatives and a 77 to 23 majority (77 percent) in the Senate.6 In the early days of the war in March 2003, polls found that the elected representatives were indeed in tune with the mass of voters and that 72 percent of American citizens supported the invasion.7 The will of the American people was clear.

    But the truth turned out to be different from what the government said and what the majority believed. As the war progressed, it became evident that Iraq had no weapons of mass destruction and that many Iraqis had no wish to be “liberated” by the Americans or to establish a democracy. By August 2004 another poll found that 67 percent of Americans believed that the invasion was based on incorrect assumptions. As the years went by, most Americans acknowledged that the decision to invade was a catastrophic mistake.8

    In a democracy the majority has every right to make momentous decisions like starting wars, and that includes the right to make momentous errors. But the majority should at least acknowledge its own fallibility and protect the freedom of minorities to hold and publicize unpopular views, which might turn out to be correct.

    As another example, consider the case of a charismatic leader who is accused of corruption. His loyal supporters obviously wish these accusations to be false. But even if most voters support the leader, their desires should not prevent judges from investigating the accusations and getting to the truth. As with the justice system, so also with science. A majority of voters might deny the reality of climate change, but they should not have the power to dictate scientific truth or to prevent scientists from exploring and publishing inconvenient facts. Unlike parliaments, departments of environmental studies should not reflect the will of the majority.

    Of course, when it comes to making policy decisions about climate change, in a democracy the will of the voters should reign supreme. Acknowledging the reality of climate change does not tell us what to do about it. We always have options, and choosing between them is a question of desire, not truth. One option might be to immediately cut greenhouse gas emissions, even at the cost of slowing economic growth. This means incurring some difficulties today but saving people in 2050 from more severe hardship, saving the island nation of Kiribati from drowning, and saving the polar bears from extinction. A second option might be to continue with business as usual. This means having an easier life today, but making life harder for the next generation, flooding Kiribati, and driving the polar bears—as well as numerous other species—to extinction. Choosing between these two options is a question of desire, and should therefore be done by all voters rather than by a limited group of experts.

    But the one option that should not be on offer in elections is hiding or distorting the truth. If the majority prefers to consume whatever amount of fossil fuels it wishes with no regard to future generations or other environmental considerations, it is entitled to vote for that. But the majority should not be entitled to pass a law stating that climate change is a hoax and that all professors who believe in climate change must be fired from their academic posts. We can choose what we want, but we shouldn’t deny the true meaning of our choice.

    Naturally, academic institutions, the media, and the judiciary may themselves be compromised by corruption, bias, or error. But subordinating them to a governmental Ministry of Truth is likely to make things worse. The government is already the most powerful institution in developed societies, and it often has the greatest interest in distorting or hiding inconvenient facts. Allowing the government to supervise the search for truth is like appointing the fox to guard the chicken coop.

    To discover the truth, it is better to rely on two other methods. First, academic institutions, the media, and the judiciary have their own internal self-correcting mechanisms for fighting corruption, correcting bias, and exposing error. In academia, peer-reviewed publication is a far better check on error than supervision by government officials, because academic promotion often depends on uncovering past mistakes and discovering unknown facts. In the media, free competition means that if one outlet decides not to break a scandal, perhaps for self-serving reasons, others are likely to jump at the scoop. In the judiciary, a judge that takes bribes may be tried and punished just like any other citizen.

    Second, the existence of several independent institutions that seek the truth in different ways allows these institutions to check and correct one another. For example, if powerful corporations manage to break down the peer-review mechanism by bribing a sufficiently large number of scientists, investigative journalists and courts can expose and punish the perpetrators. If the media or the courts are afflicted by systematic racist biases, it is the job of sociologists, historians, and philosophers to expose these biases. None of these mechanisms are completely fail-safe, but no human institution is. Government certainly isn’t.

    THE POPULIST ASSAULT

    If all this sounds complicated, it is because democracy should be complicated. Simplicity is a characteristic of dictatorial information networks in which the center dictates everything and everybody silently obeys. It’s easy to follow this dictatorial monologue. In contrast, democracy is a conversation with numerous participants, many of them talking at the same time. It can be hard to follow such a conversation.

    Moreover, the most important democratic institutions tend to be bureaucratic behemoths. Whereas citizens avidly follow the biological dramas of the princely court and the presidential palace, they often find it difficult to understand how parliaments, courts, newspapers, and universities function. This is what helps strongmen mount populist attacks on institutions, dismantle all self-correcting mechanisms, and concentrate power in their hands. We discussed populism briefly in the prologue, to help explain the populist challenge to the naive view of information. Here we need to revisit populism, get a broader understanding of its worldview, and explain its appeal to antidemocratic strongmen.

    The term “populism” derives from the Latin populus, which means “the people.” In democracies, “the people” is considered the sole legitimate source of political authority. Only representatives of the people should have the authority to declare wars, pass laws, and raise taxes. Populists cherish this basic democratic principle, but somehow conclude from it that a single party or a single leader should monopolize all power. In a curious political alchemy, populists manage to base a totalitarian pursuit of unlimited power on a seemingly impeccable democratic principle. How does it happen?

    The most novel claim populists make is that they alone truly represent the people. Since in democracies only the people should have political power, and since allegedly only the populists represent the people, it follows that the populist party should have all political power to itself. If some party other than the populists wins elections, it does not mean that this rival party won the people’s trust and is entitled to form a government. Rather, it means that the elections were stolen or that the people were deceived to vote in a way that doesn’t express their true will.

    It should be stressed that for many populists, this is a genuinely held belief rather than a propaganda gambit. Even if they win just a small share of votes, populists may still believe they alone represent the people. An analogous case are communist parties. In the U.K., for example, the Communist Party of Great Britain (CPGB) never won more than 0.4 percent of votes in a general election,9 but was nevertheless adamant that it alone truly represented the working class. Millions of British workers, they claimed, were voting for the Labour Party or even for the Conservative Party rather than for the CPGB because of “false consciousness.” Allegedly, through their control of the media, universities, and other institutions, the capitalists managed to deceive the working class into voting against its true interests, and only the CPGB could see through this deception. In like fashion, populists can believe that the enemies of the people have deceived the people to vote against its true will, which the populists alone represent.

    A fundamental part of this populist credo is the belief that “the people” is not a collection of flesh-and-blood individuals with various interests and opinions, but rather a unified mystical body that possesses a single will—“the will of the people.” Perhaps the most notorious and extreme manifestation of this semireligious belief was the Nazi motto “Ein Volk, ein Reich, ein Führer,” which means “One People, One Country, One Leader.” Nazi ideology posited that the Volk (people) had a single will, whose sole authentic representative was the Führer (leader). The leader allegedly had an infallible intuition for how the people felt and what the people wanted. If some German citizens disagreed with the leader, it didn’t mean that the leader might be in the wrong. Rather, it meant that the dissenters belonged to some treasonous outsider group—Jews, communists, liberals—instead of to the people.

    The Nazi case is of course extreme, and it is grossly unfair to accuse all populists of being crypto-Nazis with genocidal inclinations. However, many populist parties and politicians deny that “the people” might contain a diversity of opinions and interest groups. They insist that the real people has only one will and that they alone represent this will. In contrast, their political rivals—even when the latter enjoy substantial popular support—are depicted as “alien elites.” Thus, Hugo Chávez ran for the presidency in Venezuela with the slogan “Chávez is the people!”10 President Erdoğan of Turkey once railed against his domestic critics, saying, “We are the people. Who are you?”—as if his critics weren’t Turks, too.11

    How can you tell, then, whether someone is part of the people or not? Easy. If they support the leader, they are part of the people. This, according to the German political philosopher Jan-Werner Müller, is the defining feature of populism. What turns someone into a populist is claiming that they alone represent the people and that anyone who disagrees with them—whether state bureaucrats, minority groups, or even the majority of voters—either suffers from false consciousness or isn’t really part of the people.12

    This is why populism poses a deadly threat to democracy. While democracy agrees that the people is the only legitimate source of power, democracy is based on the understanding that the people is never a unitary entity and, therefore, cannot possess a single will. Every people—whether Germans, Venezuelans, or Turks—is composed of many different groups, with a plurality of opinions, wills, and representatives. No group, including the majority group, is entitled to exclude other groups from membership in the people. This is what makes democracy a conversation. Holding a conversation presupposes the existence of several legitimate voices. If, however, the people has only one legitimate voice, there can be no conversation. Rather, the single voice dictates everything. Populism may therefore claim adherence to the democratic principle of “people’s power,” but it effectively empties democracy of meaning and seeks to establish a dictatorship.

    Populism undermines democracy in another, more subtle, but equally dangerous way. Having claimed that they alone represent the people, populists argue that the people is not just the sole legitimate source of political authority but the sole legitimate source of all authority. Any institution that derives its authority from something other than the will of the people is antidemocratic. As the self-proclaimed representatives of the people, populists consequently seek to monopolize not just political authority but all types of authority and to take control of institutions such as media outlets, courts, and universities. By taking the democratic principle of “people’s power” to its extreme, populists turn totalitarian.

    In fact, while democracy means that authority in the political sphere comes from the people, it doesn’t deny the validity of alternative sources of authority in other spheres. As discussed above, in a democracy independent media outlets, courts, and universities are essential self-correcting mechanisms that protect the truth even from the will of the majority. Biology professors claim that humans evolved from apes because the evidence supports this, even if the majority wills it to be otherwise. Journalists can reveal that a popular politician took a bribe, and if compelling evidence is presented in court, a judge may send that politician to jail, even if most people don’t want to believe these accusations.

    Populists are suspicious of institutions that in the name of objective truths override the supposed will of the people. They tend to see this as a smoke screen for elites grabbing illegitimate power. This drives populists to be skeptical of the pursuit of truth, and to argue—as we saw in the prologue—that “power is the only reality.” They thereby seek to undercut or appropriate the authority of any independent institutions that might oppose them. The result is a dark and cynical view of the world as a jungle and of human beings as creatures obsessed with power alone. All social interactions are seen as power struggles, and all institutions are depicted as cliques promoting the interests of their own members. In the populist imagination, courts don’t really care about justice; they only protect the privileges of the judges. Yes, the judges talk a lot about justice, but this is a ploy to grab power for themselves. Newspapers don’t care about facts; they spread fake news to mislead the people and benefit the journalists and the cabals that finance them. Even scientific institutions aren’t committed to the truth. Biologists, climatologists, epidemiologists, economists, historians, and mathematicians are just another interest group feathering its own nest—at the expense of the people.

    In all, it’s a rather sordid view of humanity, but two things nevertheless make it appealing to many. First, since it reduces all interactions to power struggles, it simplifies reality and makes events like wars, economic crises, and natural disasters easy to understand. Anything that happens—even a pandemic—is about elites pursuing power. Second, the populist view is attractive because it is sometimes correct. Every human institution is indeed fallible and suffers from some level of corruption. Some judges do take bribes. Some journalists do intentionally mislead the public. Academic disciplines are occasionally plagued by bias and nepotism. That is why every institution needs self-correcting mechanisms. But since populists are convinced that power is the only reality, they cannot accept that a court, a media outlet, or an academic discipline would ever be inspired by the value of truth or justice to correct itself.

    While many people embrace populism because they see it as an honest account of human reality, strongmen are attracted to it for a different reason. Populism offers strongmen an ideological basis for making themselves dictators while pretending to be democrats. It is particularly useful when strongmen seek to neutralize or appropriate the self-correcting mechanisms of democracy. Since judges, journalists, and professors allegedly pursue political interests rather than truth, the people’s champion—the strongman—should control these positions instead of allowing them to fall into the hands of the people’s enemies. Similarly, since even the officials in charge of arranging elections and publicizing their results may be part of a nefarious conspiracy, they too should be replaced by the strongman’s loyalists.

    In a well-functioning democracy, citizens trust the results of elections, the decisions of courts, the reports of media outlets, and the findings of scientific disciplines because citizens believe these institutions are committed to the truth. Once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and the strongmen can seize total power.
    Of course, populism could lead to anarchy rather than totalitarianism, if it undermines trust in the strongmen themselves. If no human is interested in truth or justice, doesn’t this apply to Mussolini or Putin too? And if no human institution can have effective self-correcting mechanisms, doesn’t this include Mussolini’s National Fascist Party or Putin’s United Russia party? How can a deep-seated distrust of all elites and institutions be squared with unwavering admiration for one leader and party? This is why populists ultimately depend on the mystical notion that the strongman embodies the people. When trust in bureaucratic institutions like election boards, courts, and newspapers is particularly low, an enhanced reliance on mythology is the only way to preserve order.

    MEASURING THE STRENGTH OF DEMOCRACIES

    Strongmen who claim to represent the people may well rise to power through democratic means, and often rule behind a democratic facade. Rigged elections in which they win overwhelming majorities serve as proof of the mystical bond between the leader and the people. Consequently, to measure how democratic an information network is, we cannot use a simple yardstick like whether elections are being held regularly. In Putin’s Russia, in Iran, and even in North Korea elections are held like clockwork. Rather, we need to ask much more complex questions like “What mechanisms prevent the central government from rigging the elections?” “How safe is it for leading media outlets to criticize the government?” and “How much authority does the center appropriate to itself?” Democracy and dictatorship aren’t binary opposites, but rather a continuum. To decide whether a network is closer to the democratic or the dictatorial end of the continuum, we need to understand how information flows in the network and what shapes the political conversation.

    If one person dictates all the decisions, and even their closest advisers are terrified to voice a dissenting view, no conversation is taking place. Such a network is situated at the extreme dictatorial end of the spectrum. If nobody can voice unorthodox opinions publicly, but behind closed doors a small circle of party bosses or senior officials are able to freely express their views, then this is still a dictatorship, but it has taken a baby step in the direction of democracy. If 10 percent of the population participate in the political conversation by airing their opinions, voting in fair elections, and running for office, that may be considered a limited democracy, as was the case in many ancient city-states like Athens, or in the early days of the United States, when only wealthy white men had such political rights. As the percentage of people taking part in the conversation rises, so the network becomes more democratic.

    The focus on conversations rather than elections raises a host of interesting questions. For example, where does that conversation take place? North Korea, for example, has the Mansudae Assembly Hall in Pyongyang, where the 687 members of the Supreme People’s Assembly meet and talk. However, while this Assembly is officially known as North Korea’s legislature, and while elections to the Assembly are held every five years, this body is widely considered a rubber stamp, executing decisions taken elsewhere. The anodyne discussions follow a predetermined script, and they aren’t geared to change anyone’s mind about anything.13

    Is there perhaps another, more private hall in Pyongyang where the crucial conversations take place? Do Politburo members ever dare criticize Kim Jong Un’s policies during formal meetings? Perhaps it can be done in unofficial dinner parties or in unofficial think tanks? Information in North Korea is so concentrated and so tightly controlled that we cannot provide clear answers to these questions.14
    Similar questions can be asked about the United States. In the United States, unlike in North Korea, people are free to say almost anything they want. Scathing public attacks on the government are a daily occurrence. But where is the room where the crucial conversations happen, and who sits there? The U.S. Congress was designed to fulfill this function, with the people’s representatives meeting to converse and try to convince one another. But when was the last time that an eloquent speech in Congress by a member of one party persuaded members of the other party to change their minds about anything? Wherever the conversations that shape American politics now take place, it is definitely not in Congress. Democracies die not only when people are not free to talk but also when people are not willing or able to listen.

    STONE AGE DEMOCRACIES

    Based on the above definition of democracy, we can now turn to the historical record and examine how changes in information technology and information flows have shaped the history of democracy. To judge by the archaeological and anthropological evidence, democracy was the most typical political system among archaic hunter-gatherers. Stone Age bands obviously didn’t have formal institutions like elections, courts, and media outlets, but their information networks were usually distributed and gave ample opportunities for self-correction. In bands numbering just a few dozen people information could easily be shared among all group members, and when the band decided where to pitch camp, where to go hunting, or how to handle a conflict with another band, everyone could take part in the conversation and dispute each other. Bands usually belonged to a larger tribe that included hundreds or even thousands of people. But when important choices affecting the whole tribe had to be made, such as whether to go to war, tribes were usually still small enough for a large percentage of their members to gather in one place and converse.15

    While bands and tribes sometimes had dominant leaders, these tended to exercise only limited authority. Leaders had no standing armies, police forces, or governmental bureaucracies at their disposal, so they couldn’t just impose their will by force.16 Leaders also found it difficult to control the economic basis of people’s lives. In modern times, dictators like Vladimir Putin and Saddam Hussein have often based their political power on monopolizing economic assets like oil wells.17 In medieval and classical antiquity, Chinese emperors, Greek tyrants, and Egyptian pharaohs dominated society by controlling granaries, silver mines, and irrigation canals. In contrast, in a hunter-gatherer economy such centralized economic control was possible only under special circumstances. For example, along the northwestern coast of North America some hunter-gatherer economies relied on catching and preserving large numbers of salmon. Since salmon runs peaked for a few weeks in specific creeks and rivers, a powerful chief could monopolize this asset.18

    But this was exceptional. Most hunter-gatherer economies were far more diversified. One leader, even supported by a few allies, could not corral the savanna and prevent people from gathering plants and hunting animals there. If all else failed, hunter-gatherers could therefore vote with their feet. They had few possessions, and their most important assets were their personal skills and personal friends. If a chief turned dictatorial, people could just walk away.19

    Even when hunter-gatherers did end up ruled by a domineering chief, as happened among the salmon-fishing people of northwestern America, at least that chief was accessible. He didn’t live in a faraway fortress surrounded by an unfathomable bureaucracy and a cordon of armed guards. If you wanted to voice a complaint or a suggestion, you could usually get within earshot of him. The chief couldn’t control public opinion, nor could he shut himself off from it. In other words, there was no way for a chief to force all information to flow through the center, or to prevent people from talking with one another, criticizing him, or organizing against him.20

    In the millennia following the agricultural revolution, and especially after writing helped create large bureaucratic polities, it became easier to centralize the flow of information and harder to maintain the democratic conversation. In small city-states like those of ancient Mesopotamia and Greece, autocrats like Lugal-Zagesi of Umma and Pisistratus of Athens relied on bureaucrats, archives, and a standing army to monopolize key economic assets and information about ownership, taxation, diplomacy, and politics. It simultaneously became harder for the mass of citizens to keep in direct touch with one another. There was no mass communication technology like newspapers or radio, and it was not easy to squeeze tens of thousands of citizens into the main city square to hold a communal discussion.

    Democracy was still an option for these small city-states, as the history of both early Sumer and classical Greece clearly indicates.21 However, the democracy of ancient city-states tended to be less inclusive than the democracy of archaic hunter-gatherer bands. Probably the most famous example of ancient city-state democracy is Athens in the fifth and fourth centuries BCE. All adult male citizens could participate in the Athenian assembly, vote on public policy, and be elected to public offices. But women, slaves, and noncitizen residents of the city did not enjoy these privileges. Only about 25–30 percent of the adult population of Athens enjoyed full political rights.22

    As the size of polities continued to increase, and city-states were superseded by larger kingdoms and empires, even Athenian-style partial democracy disappeared. All the famous examples of ancient democracies are city-states such as Athens and Rome. In contrast, we don’t know of any large-scale kingdom or empire that operated along democratic lines.

    For example, when in the fifth century BCE Athens expanded from a city-state into an empire, it did not grant citizenship and political rights to those it conquered. The city of Athens remained a limited democracy, but the much bigger Athenian Empire was ruled autocratically from the center. All the important decisions about taxes, diplomatic alliances, and military expeditions were taken in Athens. Subject lands like the islands of Naxos and Thasos had to obey the orders of the Athenian popular assembly and elected officials, without the Naxians and Thasians being able to vote in that assembly or be elected to office. It was also difficult for Naxos, Thasos, and other subject lands to coordinate a united opposition to the decisions taken in the Athenian center, and if they tried to do so, it would have brought ruthless Athenian reprisals. Information in the Athenian Empire flowed to and from Athens.23

    When the Roman Republic built its empire, conquering first the Italian Peninsula and eventually the entire Mediterranean basin, the Romans took a somewhat different course. Rome gradually did extend citizenship to the conquered people. It began by granting citizenship to the inhabitants of Latium, then to the inhabitants of other Italian regions, and finally to inhabitants of even distant provinces like Gallia and Syria. However, as citizenship was extended to more people, the political rights of citizens were simultaneously restricted.

    The ancient Romans had a clear understanding of what democracy means, and they were originally fiercely committed to the democratic ideal. After expelling the last king of Rome in 509 BCE, the Romans developed a deep dislike for monarchy and a fear of giving unlimited power to any single individual or institution. Supreme executive power was therefore shared by two consuls who balanced each other. These consuls were chosen by citizens in free elections, held office for a single year, and were additionally checked by the powers of the popular assembly, of the Senate, and of other elected officials like the tribunes.

    But when Rome extended citizenship to Latins, Italians, and finally to Gauls and Syrians, the power of the popular assembly, the tribunes, the Senate, and even the two consuls was gradually reduced, until in the late first century BCE the Caesar family established its autocratic rule. Anticipating present-day strongmen like Putin, Augustus didn’t crown himself king, and pretended that Rome was still a republic. The Senate and the popular assembly continued to convene, and every year citizens continued to choose consuls and tribunes. But these institutions were emptied of real power.24

    In 212 CE, the emperor Caracalla—the offspring of a Phoenician family from North Africa—took a seemingly momentous step and granted automatic Roman citizenship to all free adult males throughout the vast empire. Rome in the third century CE accordingly had tens of millions of citizens.25 But by that time, all the important decisions were made by a single unelected emperor. While consuls were still ceremonially chosen every year, Caracalla inherited power from his father Septimius Severus, who became emperor by winning a civil war. To cement his rule, the most important step Caracalla took was murdering his brother and rival Geta.

    When Caracalla ordered the murder of Geta, decided to devalue the Roman currency, or declared war on the Parthian Empire, he had no need to ask permission from the Roman people. All of Rome’s self-correcting mechanisms had been neutralized long before. If Caracalla made some error in foreign or domestic policy, neither the Senate nor any officials like the consuls or tribunes could intervene to correct it, except by rising in rebellion or assassinating him. And when Caracalla was indeed assassinated in 217, it only led to a new round of civil wars culminating in the rise of new autocrats. Rome in the third century CE, like Russia in the eighteenth century, was, in the words of Madame de Staël, “autocracy tempered by strangulation.”

    By the third century CE, not only the Roman Empire but all other major human societies on earth were centralized information networks lacking strong self-correcting mechanisms. This was true of the Parthian and Sassanian Empires in Persia, of the Kushan and Gupta Empires in India, and of China’s Han Empire and its successor Three Kingdoms.26 Thousands of more small-scale societies continued to function democratically in the third century CE and beyond, but it seemed that distributed democratic networks were simply incompatible with large-scale societies.

    CAESAR FOR PRESIDENT!

    Were large-scale democracies really unworkable in the ancient world? Or did autocrats like Augustus and Caracalla deliberately sabotage them? This question is important not only for our understanding of ancient history but also for our view of democracy’s future in the age of AI. How do we know whether democracies fail because they are undermined by strongmen or because of much deeper structural and technological reasons?

    To answer that question, let’s take a closer look at the Roman Empire. The Romans were clearly familiar with the democratic ideal, and it continued to be important to them even after the Caesar family rose to power. Otherwise, Augustus and his heirs would not have bothered to maintain seemingly democratic institutions like the Senate or annual elections to the consulate and other offices. So why did power end up in the hands of an unelected emperor?

    In theory, even after Roman citizenship was expanded to tens of millions of people throughout the Mediterranean basin, wasn’t it possible to hold empire-wide elections for the position of emperor? This would surely have required very complicated logistics, and it would have taken several months to learn the results of the elections. But was that really a deal breaker?

    The key misconception here is equating democracy with elections. If the Roman Empire wanted to, it could technically have held empire-wide elections for emperor. But the real question we should ask is whether the Roman Empire could have held an ongoing empire-wide political conversation. In present-day North Korea no democratic conversation takes place because people aren’t free to talk, yet we could well imagine a situation when this freedom is guaranteed—as it is in South Korea. In the present-day United States the democratic conversation is endangered by people’s inability to listen to and respect their political rivals, yet this can presumably still be fixed. By contrast, in the Roman Empire there was simply no way to conduct or sustain a democratic conversation, because the technological means to hold such a conversation did not exist.

    To hold a conversation, it is not enough to have the freedom to talk and the ability to listen. There are also two technical preconditions. First, people need to be within hearing range of each other. This means that the only way to hold a political conversation in a territory the size of the United States or the Roman Empire is with the help of some kind of information technology that can swiftly convey what people say over long distances.

    Second, people need at least a rudimentary understanding of what they are talking about. Otherwise, they are just making noise, not holding a meaningful conversation. People usually have a good understanding of political issues of which they have direct experience. Poor people have many insights about poverty that escape economics professors, and ethnic minorities understand racism in a much more profound way than people who never suffered from it, for example. However, if lived experience were the only way to understand crucial political issues, large-scale political conversations would be impossible. For then every group of people could talk meaningfully only about its own experiences. Even worse, nobody else could understand what they were saying. If lived experience is the sole possible source of knowledge, then merely listening to the insights gained from someone else’s lived experience cannot impart these insights to me.

    The only way to have a large-scale political conversation among diverse groups of people is if people can gain some understanding of issues that they have never experienced firsthand. In a large polity, it is a crucial role of the education system and the media to inform people about things they have never faced themselves. If there is no education system or media platforms to perform this role, no meaningful large-scale conversations can take place.

    In a small Neolithic town of a few thousand inhabitants people might sometimes have been afraid to say what they thought, or might have refused to listen to their rivals, but it was relatively easy to satisfy the more fundamental technical preconditions for meaningful discourse. First, people lived in proximity to one another, so they could easily meet most other community members and hear their voices. Second, everybody had intimate knowledge of the dangers and opportunities that the town faced. If an enemy war party approached, everyone could see it. If the river flooded the fields, everyone witnessed the economic effects. When people talked about war and hunger, they knew what they were saying.

    In the fourth century BCE, the city-state of Rome was still small enough to allow a large percentage of its citizens to congregate in the Forum in times of emergency, listen to respected leaders, and voice their personal views on the matter at hand. When in 390 BCE Gallic invaders attacked Rome, almost everyone lost a relative in the defeat at the Battle of the Allia and lost property when the victorious Gauls then sacked Rome. The desperate Romans appointed Marcus Camillus as dictator. In Rome, the dictator was a public official appointed in times of emergency who had unlimited powers but only for a short predetermined period, following which he was held accountable for his actions. After Camillus led the Romans to victory, everybody could see that the emergency was over, and Camillus stepped down.27

    In contrast, by the third century CE, the Roman Empire had a population of between sixty and seventy-five million people,28 spread over five million square kilometers.29 Rome lacked mass communication technology like radio or daily newspapers. Only 10–20 percent of adults had reading skills,30 and there was no organized education system that could inform them about the geography, history, and economy of the empire. True, many people across the empire did share some cultural ideas, such as a strong belief in the superiority of Roman civilization over the barbarians. These shared cultural beliefs were crucial in preserving order and holding the empire together. But their political implications were far from clear, and in times of crisis there was no possibility to hold a public conversation about what should be done.

    How could Syrian merchants, British shepherds, and Egyptian villagers converse about the ongoing wars in the Middle East or about the immigration crisis brewing along the Danube? The lack of a meaningful public conversation was not the fault of Augustus, Nero, Caracalla, or any of the other emperors. They didn’t sabotage Roman democracy. Given the size of the empire and the available information technology, democracy was simply unworkable. This was acknowledged already by ancient philosophers like Plato and Aristotle, who argued that democracy can work only in small-scale city-states.31

    If the absence of Roman democracy had merely been the fault of particular autocrats, we should have at least seen large-scale democracies flourishing in other places, like in Sassanian Persia, Gupta India, or Han China. But prior to the development of modern information technology, there are no examples of large-scale democracies anywhere.

    It should be stressed that in many large-scale autocracies local affairs were often managed democratically. The Roman emperor didn’t have the information needed to micromanage hundreds of cities across the empire, whereas local citizens in each city could continue to hold a meaningful conversation about municipal politics. Consequently, long after the Roman Empire became an autocracy, many of its cities continued to be governed by local assemblies and elected officials. At a time when elections to the consulship in Rome became ceremonial affairs, elections to municipal offices in small cities like Pompeii were hotly contested.

    Pompeii was destroyed in the eruption of Vesuvius in 79 CE, during the reign of the emperor Titus. Archaeologists uncovered about fifteen hundred graffiti concerned with various local election campaigns. One coveted office was that of the city’s aedile—the magistrate in charge of maintaining the city’s infrastructure and public buildings.32 Lucretius Fronto’s supporters drew the graffiti “If honest living is thought to be any recommendation, then Lucretius Fronto is worthy of being elected.” One of his opponents, Gaius Julius Polybius, ran with the slogan “Elect Gaius Julius Polybius to the office of aedile. He provides good bread.”

    There were also endorsements by religious groups and professional associations, such as “The worshippers of Isis demand the election of Gnaeus Helvius Sabinus” and “All the mule drivers request that you elect Gaius Julius Polybius.” There was dirty work, too. Someone who clearly wasn’t Marcus Cerrinius Vatia drew the graffiti “All the drunkards ask you to elect Marcus Cerrinius Vatia” and “The petty thieves ask you to elect Vatia.”33 Such electioneering indicates that the position of aedile had power in Pompeii and that the aedile was chosen in relatively free and fair elections, rather than appointed by the imperial autocrat in Rome.

    Even in empires whose rulers never had any democratic pretensions, democracy could still flourish in local settings. In the Tsarist Empire, for example, the daily lives of millions of villagers were managed by rural communes. Going back at least to the eleventh century, each commune usually included fewer than a thousand people. They were subject to a landlord and bore many obligations to their lord and to the central Tsarist state, but they had considerable autonomy in managing their internal affairs and in deciding how to discharge their external obligations, such as paying taxes and providing military recruits. The commune mediated local disputes, provided emergency relief, enforced social norms, oversaw the distribution of land to individual households, and regulated access to shared resources like forests and pastures. Decisions on important matters were made in communal meetings in which the heads of local households expressed their views and chose the commune’s elder. Resolutions at least tried to reflect the majority’s will.34

    In Tsarist villages and Roman cities a form of democracy was possible because a meaningful public conversation was possible. Pompeii was a city of about eleven thousand people in 79 CE,35 so everybody could supposedly judge for themselves whether Lucretius Fronto was an honest man and whether Marcus Cerrinius Vatia was a drunken thief. But democracy at a scale of millions became possible only in the modern age, when mass media changed the nature of large-scale information networks.

    MASS MEDIA MAKES MASS DEMOCRACY POSSIBLE

    Mass media can be defined as the ability to quickly connect millions of people even when they are separated by large distances. The printing press was a crucial step in that direction. Print made it possible to cheaply and quickly produce large numbers of books and pamphlets, which enabled more people to voice their opinions and be heard over a large territory, even if the process still took time. This sustained some of the first experiments in large-scale democracy, such as the Polish-Lithuanian Commonwealth established in 1569 and the Dutch Republic established in 1579.

    Some may contest the characterization of these polities as “democratic,” since only a minority of relatively wealthy citizens enjoyed full political rights. In the Polish-Lithuanian Commonwealth, political rights were reserved for adult male members of the szlachta—the nobility. These numbered up to 300,000 individuals, or about 5 percent of the total adult population.36 One of the szlachta’s prerogatives was to elect the king, but since voting required traveling long distances to a national convention, few exercised their right. In the sixteenth and seventeenth centuries participation in royal elections usually ranged between 3,000 and 7,000 voters, except for the 1669 elections in which 11,271 participated.37 While this hardly sounds democratic in the twenty-first century, it should be remembered that all large-scale democracies until the twentieth century limited political rights to a small circle of relatively wealthy men. Democracy is never a matter of all or nothing. It is a continuum, and late sixteenth-century Poles and Lithuanians explored previously unknown regions of that continuum.

    Aside from electing its king, Poland-Lithuania had an elected parliament (the Sejm) that approved or blocked new legislation and had the power to veto royal decisions on taxation and foreign affairs. Moreover, citizens enjoyed a list of inviolable rights such as freedom of assembly and freedom of religion. In the late sixteenth and early seventeenth centuries, when most of Europe suffered from bitter religious conflicts and persecutions, Poland-Lithuania was a tolerant haven, where Catholics, Greek Orthodox, Lutherans, Calvinists, Jews, and even Muslims coexisted in relative harmony.38 In 1616, more than a hundred mosques functioned in the commonwealth.39

    In the end, however, the Polish-Lithuanian experiment in decentralization proved to be impractical. The country was Europe’s second-largest state (after Russia), covering almost a million square kilometers and including most of the territory of today’s Poland, Lithuania, Belarus, and Ukraine. It lacked the information, communication, and education systems necessary to hold a meaningful political conversation between Polish aristocrats, Lithuanian noblemen, Ukrainian Cossacks, and Jewish rabbis spread from the Baltic Sea to the Black Sea. Its self-correcting mechanisms were also too costly, paralyzing the power of the central government. In particular, every single Sejm deputy was given the right to veto all parliamentary legislation, which led to political deadlock. The combination of a large and diverse polity with a weak center proved fatal. The commonwealth was torn apart by centrifugal forces, and its pieces were then divided between the centralized autocracies of Russia, Austria, and Prussia.

    The Dutch experiment fared better. In some ways the Dutch United Provinces were even less centralized than the Polish-Lithuanian Commonwealth, since they lacked a monarch, and were a union of seven autonomous provinces, which were in turn made up of self-governing towns and cities.40 This decentralized nature is reflected in the plural form of how the country was known abroad—the Netherlands in English, Les Pays-Bas in French, Los Países Bajos in Spanish, and so on.

    However, taken together the United Provinces were twenty-five times smaller in landmass than Poland-Lithuania and possessed a much better information, communication, and education system that tied its constituent parts closely together.41 The United Provinces also pioneered a new information technology with a big future. In June 1618 a pamphlet titled Courante uyt Italien, Duytslandt &c. appeared in Amsterdam. As its title indicated, it carried news from the Italian Peninsula, the German lands, and other places. There was nothing remarkable about this particular pamphlet, except that new issues were published in the following weeks, too. They appeared regularly until 1670, when the Courante uyt Italien, Duytslandt &c. merged with other serial pamphlets into the Amsterdamsche Courant, which appeared until 1903, when it was merged into De Telegraaf—the Netherlands’ largest newspaper to this day.42

    The newspaper is a periodic pamphlet, and it was different from earlier one-off pamphlets because it had a much stronger self-correcting mechanism. Unlike one-off publications, a weekly or daily newspaper has a chance to correct its mistakes and an incentive to do so in order to win the public’s trust. Shortly after the Courante uyt Italien, Duytslandt &c. appeared, a competing newspaper titled Tijdinghen uyt Verscheyde Quartieren (Tidings from Various Quarters) made its debut. The Courante was generally considered more reliable, because it tried to check its stories before publishing them, and because the Tijdinghen was accused of being overly patriotic and reporting only news favorable to the Netherlands. Nevertheless, both newspapers survived, because, as one reader explained, “one can always find something in one newspaper that is not available in the other.” In the following decades dozens of additional newspapers were published in the Netherlands, which became Europe’s journalistic hub.43

    Newspapers that succeeded in gaining widespread trust became the architects and mouthpieces of public opinion. They created a far more informed and engaged public, which changed the nature of politics, first in the Netherlands and later around the world.44 The political influence of newspapers was so crucial that newspaper editors often became political leaders. Jean-Paul Marat rose to power in revolutionary France by founding and editing L’Ami du People; Eduard Bernstein helped create Germany’s Social Democratic Party by editing Der Sozialdemokrat; Vladimir Lenin’s most important position before becoming Soviet dictator was editor of Iskra; and Benito Mussolini rose to fame first as a socialist journalist in Avanti! and later as founder and editor of the firebrand right-wing paper Il Popolo d’Italia.

    Newspapers played a crucial role in the formation of early modern democracies like the United Provinces in the Low Countries, the United Kingdom in the British Isles, and the United States in North America. As the names themselves indicate, these were not city-states like ancient Athens and Rome but amalgams of different regions glued together in part by this new information technology. For example, when on December 6, 1825, President John Quincy Adams gave his First Annual Message to the U.S. Congress, the text of the address and summaries of the main points were published over the next weeks by newspapers from Boston to New Orleans (at the time, hundreds of newspapers and magazines were being published in the United States45).

    Adams declared his administration’s intentions of initiating numerous federal projects ranging from the construction of roads to the founding of an astronomical observatory, which he poetically named “light-house of the skies.” His speech ignited a fierce public debate, much of it conducted in print between those who supported such “big government” plans as essential for the development of the United States and many who preferred a “small government” approach and saw Adams’s plans as federal overreach and an encroachment on states’ rights.

    Northern supporters of the “small government” camp complained that it was unconstitutional for the federal government to tax the citizens of richer states in order to build roads in poorer states. Southerners feared that a federal government that claims the power to build a lighthouse of the sky in their backyard may one day claim the power to free their slaves, too. Adams was accused of harboring dictatorial ambitions, while the erudition and sophistication of his speech were criticized as elitist and disconnected from ordinary Americans. The public debates over the 1825 message to Congress dealt a severe blow to the reputation of the Adams administration and helped pave the way to Adams’s subsequent electoral defeat. In the 1828 presidential elections, Adams lost to Andrew Jackson—a rich slaveholding planter from Tennessee who was successfully rebranded in numerous newspaper columns as “the man of the people” and who claimed that the previous elections were in fact stolen by Adams and by the corrupt Washington elites.46

    Newspapers of the time were of course still slow and limited compared with the mass media of today. Newspapers traveled at the pace of a horse or sailboat, and relatively few people read them regularly. There were no newsstands or street vendors, so people had to buy subscriptions, which were expensive; average annual subscriptions cost around one week’s wages for a skilled journeyman. As a result, the total number of subscribers to all U.S. newspapers in 1830 is estimated at just seventy-eight thousand. Since some subscribers were associations or businesses rather than individuals, and since every copy was probably read by several people, it seems reasonable to assume that regular newspaper readership numbered in the hundreds of thousands. But millions more rarely, if ever, read newspapers.47

    No wonder that American democracy in those days was a limited affair—and the domain of wealthy white men. In the 1824 elections that brought Adams to power, 1.3 million Americans were theoretically eligible to vote, out of an adult population of about 5 million (or around 25 percent). Only 352,780 people—7 percent of the total adult population—actually made use of their right. Adams didn’t even win a majority of those who voted. Owing to the quirks of the U.S. electoral system, he became president thanks to the support of just 113,122 voters, or not much more than 2 percent of adults, and 1 percent of the total population.48 In Britain at the same time, only about 400,000 people were eligible to vote for Parliament, or around 6 percent of the adult population. Moreover, 30 percent of parliamentary seats were not even contested.49

    You may wonder whether we are talking about democracies at all. At a time when the United States had more slaves than voters (more than 1.5 million Americans were enslaved in the early 1820s),50 was the United States really a democracy? This is a question of definitions. As with the late sixteenth-century Polish-Lithuanian Commonwealth, so also with the early nineteenth-century United States, “democracy” is a relative term. As noted earlier, democracy and autocracy aren’t absolutes; they are part of a continuum. In the early nineteenth century, out of all large-scale human societies, the United States was probably the closest to the democratic end of the continuum. Giving 25 percent of adults the right to vote doesn’t sound like much today, but in 1824 that was a far higher percentage than in the Tsarist, Ottoman, or Chinese Empires, in which nobody had the right to vote.51

    Besides, as emphasized throughout this chapter, voting is not the only thing that counts. An even more important reason to consider the United States in 1824 a democracy is that compared with most other polities of its day, the new country possessed much stronger self-correcting mechanisms. The Founding Fathers were inspired by ancient Rome—witness the Senate and the Capitol in Washington—and they were well aware that the Roman Republic eventually turned into an autocratic empire. They feared that some American Caesar would do something similar to their republic, and constructed multiple overlapping self-correcting mechanisms, known as the system of checks and balances. One of these was a free press. In ancient Rome, the self-correcting mechanisms stopped functioning as the republic enlarged its territory and population. In the United States, modern information technology combined with freedom of the press helped the self-correcting mechanisms survive even as the country extended from the Atlantic to the Pacific.

    It was these self-correcting mechanisms that gradually enabled the United States to expand the franchise, abolish slavery, and turn itself into a more inclusive democracy. As noted in chapter 3, the Founding Fathers committed enormous mistakes—such as endorsing slavery and denying women the vote—but they also provided the tools for their descendants to correct these mistakes. That was their greatest legacy.

    THE TWENTIETH CENTURY: MASS DEMOCRACY, BUT ALSO MASS TOTALITARIANISM

    Printed newspapers were just the first harbinger of the mass media age. During the nineteenth and twentieth centuries, a long list of new communication and transportation technologies such as the telegraph, telephone, television, radio, trains, steamships, and airplanes supercharged the power of mass media.

    When Demosthenes gave a public speech in Athens around 350 BCE, it was aimed primarily at the limited audience actually present in the Athenian agora. When John Quincy Adams gave his First Annual Message in 1825, his words spread at the pace of a horse. When Abraham Lincoln gave his Gettysburg Address on November 19, 1863, telegraphs, locomotives, and steamships conveyed his words much faster throughout the Union and beyond. The very next day The New York Times had already reprinted the speech in full,52 as had numerous other newspapers from The Portland Daily Press in Maine to the Ottumwa Courier in Iowa.53

    As befitting a democracy with strong self-correcting mechanisms in place, the president’s speech sparked a lively conversation rather than universal applause. Most newspapers lauded it, but some expressed their doubts. The Chicago Times wrote on November 20 that “the cheek of every American must tingle with shame as he reads the silly, flat and dishwatery utterances” of President Lincoln.54 The Patriot & Union, a local newspaper in Harrisburg, Pennsylvania, also blasted “the silly remarks of the President” and hoped that “the veil of oblivion shall be dropped over them and that they shall be no more repeated or thought of.”55 Though the country was in the midst of a civil war, journalists were free to publicly criticize—and even ridicule—the president.

    Fast-forward a century, and things really picked up speed. For the first time in history, new technologies allowed masses of people, spread over vast swaths of territory, to connect in real time. In 1960, about seventy million Americans (39 percent of the total population), dispersed over the North American continent and beyond, watched the Nixon-Kennedy presidential debates live on television, with millions more listening on the radio.56 The only effort viewers and listeners had to make was to press a button while sitting in their homes. Large-scale democracy had now become feasible. Millions of people separated by thousands of kilometers could conduct informed and meaningful public debates about the rapidly evolving issues of the day. By 1960, all adult Americans were theoretically eligible to vote, and close to seventy million (about 64 percent of the electorate) actually did so—though millions of Blacks and other disenfranchised groups were prevented from voting through various voter-suppression schemes.57

    As always, we should beware of technological determinism and of concluding that the rise of mass media led to the rise of large-scale democracy. Mass media made large-scale democracy possible, rather than inevitable. And it also made possible other types of regimes. In particular, the new information technologies of the modern age opened the door for large-scale totalitarian regimes. Like Nixon and Kennedy, Stalin and Khrushchev could say something over the radio and be heard instantaneously by hundreds of millions of people from Vladivostok to Kaliningrad. They could also receive daily reports by phone and telegraph from millions of secret police agents and informers. If a newspaper in Vladivostok or Kaliningrad wrote that the supreme leader’s latest speech was silly (as happened to Lincoln’s Gettysburg Address), then everyone involved—from the editor in chief to the typesetters—would likely have received a visit from the KGB.

    A BRIEF HISTORY OF TOTALITARIANISM

    Totalitarian systems assume their own infallibility, and seek total control over the totality of people’s lives. Before the invention of the telegraph, radio, and other modern information technology, large-scale totalitarian regimes were impossible. Roman emperors, Abbasid caliphs, and Mongol khans were often ruthless autocrats who believed they were infallible, but they did not have the apparatus necessary to impose totalitarian control over large societies. To understand this, we should first clarify the difference between totalitarian regimes and less extreme autocratic regimes. In an autocratic network, there are no legal limits on the will of the ruler, but there are nevertheless a lot of technical limits. In a totalitarian network, many of these technical limits are absent.58

    For example, in autocratic regimes like the Roman Empire, the Abbasid Empire, and the Mongol Empire, rulers could usually execute any person who displeased them, and if some law got in their way, they could ignore or change the law. The emperor Nero arranged the murder of his mother, Agrippina, and his wife, Octavia, and forced his mentor Seneca to commit suicide. Nero also executed or exiled some of the most respected and powerful Roman aristocrats merely for voicing dissent or telling jokes about him.59

    While autocratic rulers like Nero could execute anyone who did or said something that displeased them, they couldn’t know what most people in their empire were doing or saying. Theoretically, Nero could issue an order that any person in the Roman Empire who criticized or insulted the emperor must be severely punished. Yet there were no technical means for implementing such an order. Roman historians like Tacitus portray Nero as a bloodthirsty tyrant who instigated an unprecedented reign of terror. But this was a very limited type of terror. Although he executed or exiled a number of family members, aristocrats, and senators within his orbit, ordinary Romans in the city’s slums and provincials in distant towns like Jerusalem and Londinium could speak their mind much more freely.60

    Modern totalitarian regimes like the Stalinist U.S.S.R. instigated terror on an altogether different scale. Totalitarianism is the attempt to control what every person throughout the country is doing and saying every moment of the day, and potentially even what every person is thinking and feeling. Nero might have dreamed about such powers, but he lacked the means to realize them. Given the limited tax base of the agrarian Roman economy, Nero couldn’t employ many people in his service. He could place informers at the dinner parties of Roman senators, but he had only about 10,000 imperial administrators61 and 350,000 soldiers62 to control the rest of the empire, and he lacked the technology to communicate with them swiftly.

    Nero and his fellow emperors had an even bigger problem ensuring the loyalty of the administrators and soldiers they did have on their payroll. No Roman emperor was ever toppled by a democratic revolution like the ones that deposed Louis XVI, Nicolae Ceauşescu, or Hosni Mubarak. Instead, dozens of emperors were assassinated or deposed by their own generals, officials, bodyguards, or family members.63 Nero himself was overthrown by a revolt of the governor of Hispania, Galba. Six months later Galba was ousted by Otho, the governor of Lusitania. Within three months, Otho was deposed by Vittelius, commander of the Rhine army. Vitellius lasted about eight months before he was defeated and killed by Vespasian, commander of the army in Judaea. Being killed by a rebellious subordinate was the biggest occupational hazard not just for Roman emperors but for almost all premodern autocrats.

    Emperors, caliphs, shahs, and kings found it a huge challenge to keep their subordinates in check. Rulers consequently focused their attention on controlling the military and the taxation system. Roman emperors had the authority to interfere in the local affairs of any province or city, and they sometimes exercised that authority, but this was usually done in response to a specific petition sent by a local community or official,64 rather than as part of some empire-wide totalitarian Five-Year Plan. If you were a mule driver in Pompeii or a shepherd in Roman Britain, Nero didn’t want to control your daily routines or to police the jokes you told. As long as you paid your taxes and didn’t resist the legions, that was good enough for Nero.

    SPARTA AND QIN

    Some scholars claim that despite the technological difficulties there were attempts to establish totalitarian regimes in ancient times. The most common example cited is Sparta. According to this interpretation, Spartans were ruled by a totalitarian regime that micromanaged every aspect of their lives—from whom they married to what they ate. However, while the Spartan regime was certainly draconian, it actually included several self-correcting mechanisms that prevented power from being monopolized by a single person or faction. Political authority was divided between two kings, five ephors (senior magistrates), twenty-eight members of the Gerousia council, and the popular assembly. Important decisions—such as whether to go to war—often involved fierce public debates.

    Moreover, irrespective of how we evaluate the nature of Sparta’s regime, it is clear that the same technological limitations that confined ancient Athenian democracy to a single city also limited the scope of the Spartan political experiment. After winning the Peloponnesian War, Sparta installed military garrisons and pro-Spartan governments in numerous Greek cities, requiring them to follow its lead in foreign policy and sometimes also pay tribute. But unlike the U.S.S.R. after World War II, Sparta after the Peloponnesian War did not try to expand or export its system. Sparta couldn’t construct an information network big and dense enough to control the lives of ordinary people in every Greek town and village.65

    A much more ambitious totalitarian project might have been launched by the Qin dynasty in ancient China (221–206 BCE). After defeating all the other Warring States, the Qin ruler Qin Shi Huang controlled a huge empire with tens of millions of subjects, who belonged to numerous different ethnic groups, spoke diverse languages, and were loyal to various local traditions and elites. To cement its power, the victorious Qin regime tried to dismantle any regional powers that might challenge its authority. It confiscated the lands and wealth of local aristocrats and forced regional elites to move to the imperial capital of Xiangyang, thereby separating them from their power base and monitoring them more easily.

    The Qin regime also embarked on a ruthless campaign of centralization and homogenization. It created a new simplified script to be used throughout the empire and standardized coinage, weights, and measurements. It built a road network radiating out of Xiangyang, with standardized rest houses, relay stations, and military checkpoints. People needed written permits in order to enter or leave the capital region or frontier zones. Even the width of axles was standardized to ensure that carts and chariots could run in the same ruts.

    Every action, from tilling fields to getting married, was supposed to serve some military need, and the type of military discipline that Rome reserved for the legions was imposed by the Qin on the entire population. The envisioned reach of this system can be exemplified by one Qin law that specified the punishment an official faced if he neglected a granary under his supervision. The law discusses the number of rat holes in the granary that would warrant fining or berating the official. “For three or more rat holes the fine is [the purchase of] one shield [for the army] and for two or fewer [the responsible official] is berated. Three mouse holes are equal to one rat hole.”66

    To facilitate this totalitarian system, the Qin attempted to create a militarized social order. Every male subject had to belong to a five-man unit. These units were aggregated into larger formations, from local hamlets (li), through cantons (xiang) and counties (xian), all the way to the large imperial commanderies (jun). People were forbidden to change their residence without permit, to the extent that guests could not even stay overnight at a friend’s house without proper identification and authorization.

    Every Qin male subject was also given a rank, just as every soldier in an army has a rank. Obedience to the state resulted in promotion to higher ranks, which brought with it economic and legal privileges, while disobedience could result in demotion or punishment. People in each formation were supposed to supervise one another, and if any individual committed some misdeed, all could be punished for it. Anyone who failed to report a criminal—even their own relatives—would be killed. Those who reported crimes were rewarded with higher ranks and other perks.

    It is highly questionable to what extent the regime managed to implement all these totalitarian measures. Bureaucrats writing documents in a government office often invent elaborate rules and regulations, which then turn out to be impractical. Did conscientious government officials really go around the entire Qin Empire counting rat holes in every granary? Were peasants in every remote mountain hamlet really organized into five-man squads? Probably not. Nevertheless, the Qin Empire outdid other ancient empires in its totalitarian ambitions.

    The Qin regime even tried to control what its subjects were thinking and feeling. During the Warring States period Chinese thinkers were relatively free to develop myriad ideologies and philosophies, but the Qin adopted the doctrine of Legalism as the official state ideology. Legalism posited that humans were naturally greedy, cruel, and egotistical. It emphasized the need for strict control, argued that punishments and rewards were the most effective means of control, and insisted that state power not be curtailed by any moral consideration. Might was right, and the good of the state was the supreme good.67 The Qin proscribed other philosophies, such as Confucianism and Daoism, which believed humans were more altruistic and which emphasized the importance of virtue rather than violence.68 Books espousing such soft views were banned, as well as books that contradicted the official Qin version of history.

    When one scholar argued that Qin Shi Huang should emulate the founder of the ancient Zhou dynasty and decentralize state power, the Qin chief minister, Li Si, countered that scholars should stop criticizing present-day institutions by idealizing the past. The regime ordered the confiscation of all books that romanticized antiquity or otherwise criticized the Qin. Such problematic texts were stored in the imperial library and could be studied only by official scholars.69

    The Qin Empire was probably the most ambitious totalitarian experiment in human history prior to the modern age, and its scale and intensity would prove to be its ruin. The attempt to regiment tens of millions of people along military lines, and to monopolize all resources for military purposes, led to severe economic problems, wastefulness, and popular resentment. The regime’s draconian laws, along with its hostility to regional elites and its voracious appetite for taxes and recruits, fanned the flames of this resentment even further. Meanwhile, the limited resources of an ancient agrarian society couldn’t support all the bureaucrats and soldiers that the Qin needed to contain this resentment, and the low efficiency of their information technology made it impossible to control every town and village from distant Xiangyang. Not surprisingly, in 209 BCE a series of revolts broke out, led by regional elites, disgruntled commoners, and even some of the empire’s own newly minted officials.

    According to one account, the first serious revolt started when a group of conscripted peasants sent to work in a frontier zone were delayed by rain and flooding. They feared they would be executed for this dereliction of duty, and felt they had nothing to lose. They were quickly joined by numerous other rebels. Just fifteen years after reaching the apogee of power, the Qin Empire collapsed under the weight of its totalitarian ambitions, splintering into eighteen kingdoms.

    After several years of war, a new dynasty—the Han—reunited the empire. But the Han then adopted a more realistic, less draconian attitude. Han emperors were certainly autocratic, but they were not totalitarian. They did not recognize any limits on their authority, but they did not try to micromanage everyone’s lives. Instead of following Legalist ideas of surveillance and control, the Han turned to Confucian ideas of encouraging people to act loyally and responsibly out of inner moral convictions. Like their contemporaries in the Roman Empire, Han emperors sought to control only some aspects of society from the center, while leaving considerable autonomy to provincial aristocrats and local communities. Due largely to the limitations imposed by the available information technology, premodern large-scale polities like the Roman and Han Empires gravitated toward nontotalitarian autocracy.70 Full-blown totalitarianism might have been dreamed about by the likes of the Qin, but its implementation had to wait for the development of modern technology.

    THE TOTALITARIAN TRINITY

    Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible. Beginning in the nineteenth century, the rise of industrial economies allowed governments to employ many more administrators, and new information technologies—such as the telegraph and radio—made it possible to quickly connect and supervise all these administrators. This facilitated an unprecedented concentration of information and power, for those who dreamed about such things.

    When the Bolsheviks seized control of Russia after the 1917 revolution, they were driven by exactly such a dream. The Bolsheviks craved unlimited power because they believed they had a messianic mission. Marx taught that for millennia, all human societies were dominated by corrupt elites who oppressed the people. The Bolsheviks claimed they knew how to finally end all oppression and create a perfectly just society on earth. But to do so, they had to overcome numerous enemies and obstacles, which, in turn, required all the power they could get. They refused to countenance any self-correcting mechanisms that might question either their vision or their methods. Like the Catholic Church, the Bolshevik Party was convinced that though its individual members might err, the party itself was always right. Belief in their own infallibility led the Bolsheviks to destroy Russia’s nascent democratic institutions—like elections, independent courts, the free press, and opposition parties—and to create a one-party totalitarian regime. Bolshevik totalitarianism did not start with Stalin. It was evident from the very first days of the revolution. It stemmed from the doctrine of party infallibility, rather than from the personality of Stalin.

    In the 1930s and 1940s, Stalin perfected the totalitarian system he inherited. The Stalinist network was composed of three main branches. First, there was the governmental apparatus of state ministries, regional administrations, and regular Red Army units, which in 1939 comprised 1.6 million civilian officials71 and 1.9 million soldiers.72 Second, there was the apparatus of the Communist Party of the Soviet Union and its ubiquitous party cells, which in 1939 included 2.4 million party members.73 Third, there was the secret police: first known as the Cheka, in Stalin’s days it was called the OGPU, NKVD, and MGB, and after Stalin’s death it morphed into the KGB. Its post-Soviet successor organization is known since 1995 as the FSB. In 1937, the NKVD had 270,000 agents and millions of informers.74

    The three branches operated in parallel. Just as democracy is maintained by having overlapping self-correcting mechanisms that keep each other in check, modern totalitarianism created overlapping surveillance mechanisms that keep each other in order. The governor of a Soviet province was constantly watched by the local party commissar, and neither of them knew who among their staff was an NKVD informer. A testimony to the effectiveness of the system is that modern totalitarianism largely solved the perennial problem of premodern autocracies—revolts by provincial subordinates. While the U.S.S.R. had its share of court coups, not once did a provincial governor or a Red Army front commander rebel against the center.75 Much of the credit for that goes to the secret police, which kept a close eye on the mass of citizens, on provincial administrators, and even more so on the party and the Red Army.

    While in most polities throughout history the army had wielded enormous political power, in twentieth-century totalitarian regimes the regular army ceded much of its clout to the secret police—the information army. In the U.S.S.R., the Cheka, OGPU, NKVD, and KGB lacked the firepower of the Red Army, but had more influence in the Kremlin and could terrorize and purge even the army brass. The East German Stasi and the Romanian Securitate were similarly stronger than the regular armies of these countries.76 In Nazi Germany, the SS was more powerful than the Wehrmacht, and the SS chief, Heinrich Himmler, was higher up the pecking order than Wilhelm Keitel, chief of the Wehrmacht high command.

    In none of these cases could the secret police defeat the regular army in traditional warfare, of course; what made the secret police powerful was its command of information. It had the information necessary to preempt a military coup and to arrest the commanders of tank brigades or fighter squadrons before they knew what hit them. During the Stalinist Great Terror of the late 1930s, out of 144,000 Red Army officers about 10 percent were shot or imprisoned by the NKVD. This included 154 of 186 divisional commanders (83 percent), eight of nine admirals (89 percent), thirteen of fifteen full generals (87 percent), and three of five marshals (60 percent).77

    The party leadership fared just as badly. Of the revered Old Bolsheviks, people who joined the party before the 1917 revolution, about a third didn’t survive the Great Terror.78 Of the thirty-three men who served on the Politburo between 1919 and 1938, fourteen were shot (42 percent). Of the 139 members and candidate members of the party’s Central Committee in 1934, 98 (70 percent) were shot. Only 2 percent of the delegates who took part in the Seventeenth Party Congress in 1934 evaded execution, imprisonment, expulsion, or demotion, and attended the Eighteenth Party Congress in 1939.79

    The secret police—which did all the purging and killing—was itself divided into several competing branches that closely watched and purged each other. Genrikh Yagoda, the NKVD head who orchestrated the beginning of the Great Terror and supervised the killing of hundreds of thousands of victims, was executed in 1938 and replaced by Nikolai Yezhov. Yezhov lasted for two years, killing and imprisoning millions of people before being executed in 1940.

    Perhaps most telling is the fate of the thirty-nine people who in 1935 held the rank of general in the NKVD (called commissars of state security in Soviet nomenclature). Thirty-five of them (90 percent) were arrested and shot by 1941, one was assassinated, and one—the head of the NKVD’s Far East regional office—saved himself by defecting to Japan, but was killed by the Japanese in 1945. Of the original cohort of thirty-nine NKVD generals, only two men were left standing by the end of World War II. The remorseless logic of totalitarianism eventually caught up with them too. During the power struggles that followed Stalin’s death in 1953, one of them was shot, while the other was consigned to a psychiatric hospital, where he died in 1960.80 Serving as an NKVD general in Stalin’s day was one of the most dangerous jobs in the world. At a time when American democracy was improving its many self-correcting mechanisms, Soviet totalitarianism was refining its triple self-surveilling and self-terrorizing apparatus.

    TOTAL CONTROL

    Totalitarian regimes are based on controlling the flow of information and are suspicious of any independent channels of information. When military officers, state officials, or ordinary citizens exchange information, they can build trust. If they come to trust each other, they can organize resistance to the regime. Therefore, a key tenet of totalitarian regimes is that wherever people meet and exchange information, the regime should be there too, to keep an eye on them. In the 1930s, this was one principle that Hitler and Stalin shared.

    On March 31, 1933, two months after Hitler became chancellor, the Nazis passed the Coordination Act (Gleichschaltungsgesetz). This stipulated that by April 30, 1933, all political, social, and cultural organizations throughout Germany—from municipalities to football clubs and local choirs—must be run according to Nazi ideology, as organs of the Nazi state. It upended life in every city and hamlet in Germany.

    For example, in the small Alpine village of Oberstdorf, the democratically elected municipal council met for the last time on April 21, 1933, and three days later it was replaced by an unelected Nazi council that appointed a Nazi mayor. Since the Nazis alone allegedly knew what the people really wanted, who other than Nazis could implement the people’s will? Oberstdorf also had about fifty associations and clubs, ranging from a beekeeping society to an alpinist club. They all had to conform to the Coordination Act, adjusting their boards, membership, and statutes to Nazi demands, hoisting the swastika flag, and concluding every meeting with the “Horst Wessel Song,” the Nazi Party’s anthem. On April 6, 1933, the Oberstdorf fishing society banned Jews from its ranks. None of the thirty-two members was Jewish, but they felt they had to prove their Aryan credentials to the new regime.81

    Things were even more extreme in Stalin’s U.S.S.R. Whereas the Nazis still allowed church organizations and private businesses some partial freedom of action, the Soviets made no exceptions. By 1928 and the launch of the first Five-Year Plan, there were government officials, party functionaries, and secret police informants in every neighborhood and village, and between them they controlled every aspect of life: all businesses from power plants to cabbage farms; all newspapers and radio stations; all universities, schools, and youth groups; all hospitals and clinics; all voluntary and religious organizations; all sporting and scientific associations; all parks, museums, and cinemas.

    If a dozen people came together to play football, hike in the woods, or do some charity work, the party and the secret police had to be there too, represented by the local party cell or NKVD agent. The speed and efficiency of modern information technology meant that all these party cells and NKVD agents were always just a telegram or phone call away from Moscow. Information about suspicious persons and activities was fed into a countrywide, cross-referenced system of card catalogs. Known as kartoteki, these catalogs contained information from work records, police files, residence cards, and other forms of social registrations and, by the 1930s, had become the primary mechanism for surveilling and controlling the Soviet population.82

    This made it feasible for Stalin to seek control over the totality of Soviet life. One crucial example was the campaign to collectivize Soviet farming. For centuries, economic, social, and private life in the thousands of villages of the sprawling Tsarist Empire was managed by several traditional institutions: the local commune, the parish church, the private farm, the local market, and above all the family. In the mid-1920s, the Soviet Union was still an overwhelmingly agrarian economy. About 82 percent of the total population lived in villages, and 83 percent of the workforce was engaged in farming.83 But if each peasant family made its own decisions about what to grow, what to buy, and how much to charge for their produce, it greatly limited the ability of Moscow officials to themselves plan and control social and economic activities. What if the officials decided on a major agrarian reform, but the peasant families rejected it? So when in 1928 the Soviets came up with their first Five-Year Plan for the development of the Soviet Union, the most important item on the agenda was to collectivize farming.

    The idea was that in every village all the families would join a kolkhoz—a collective farm. They would hand over to the kolkhoz all their property—land, houses, horses, cows, shovels, pitchforks. They would work together for the kolkhoz, and in return the kolkhoz would provide for all their needs, from housing and education to food and health care. The kolkhoz would also decide—based on orders from Moscow—whether they should grow cabbages or turnips; whether to invest in a tractor or a school; and who would work in the dairy farm, the tannery, and the clinic. The result, thought the Moscow masterminds, would be the first perfectly just and equal society in human history.

    They were similarly convinced of the economic advantages of their proposed system, thinking that the kolkhoz would enjoy economy of scale. For example, when every peasant family had but a small strip of land, it made little sense to buy a tractor to plow it, and in any case most families couldn’t afford a tractor. Once all land was held communally, it could be cultivated far more efficiently using modern machinery. In addition, the kolkhoz was supposed to benefit from the wisdom of modern science. Instead of every peasant deciding on production methods on the basis of old traditions and groundless superstitions, state experts with university degrees from institutions like the Lenin All-Union Academy of Agricultural Sciences would make the crucial decisions.

    To the planners in Moscow, it sounded wonderful. They expected a 50 percent increase in agricultural production by 1931.84 And if in the process the old village hierarchies and inequalities were bulldozed, all the better. To most peasants, however, it sounded terrible. They didn’t trust the Moscow planners or the new kolkhoz system. They did not want to give up their old way of life or their private property. Villagers slaughtered cows and horses instead of handing them to the kolkhoz. Their motivation to work dwindled. People made less effort plowing fields that belonged to everyone than plowing fields that belonged to their own family. Passive resistance was ubiquitous, sometimes flaring into violent clashes. Whereas Soviet planners expected to harvest ninety-eight million tons of grain in 1931, production was only sixty-nine million, according to official data, and might have been as low as fifty-seven million tons in reality. The 1932 harvest was even worse.85

    The state reacted with fury. Between 1929 and 1936, food confiscation, government neglect, and man-made famines (resulting from government policy rather than a natural disaster) claimed the lives of between 4.5 and 8.5 million people.86 Millions of additional peasants were declared enemies of the state and deported or imprisoned. The most basic institutions of peasant life—the family, the church, the local community—were terrorized and dismantled. In the name of justice, equality, and the will of the people, the collectivization campaign annihilated anything that stood in its way. In the first two months of 1930 alone, about 60 million peasants in more than 100,000 villages were herded into collective farms.87 In June 1929, only 4 percent of Soviet peasant households had belonged to collective farms. By March 1930 the figure had risen to 57 percent. By April 1937, 97 percent of households in the countryside had been confined to the 235,000 Soviet collective farms.88 In just seven years, then, a way of life that had existed for centuries had been replaced by the totalitarian brainchild of a few Moscow bureaucrats.

    THE KULAKS

    It is worthwhile to delve a little deeper into the history of Soviet collectivization. For it was a tragedy that bears some resemblance to earlier catastrophes in human history—like the European witch-hunt craze—and at the same time foreshadows some of the biggest dangers posed by twenty-first-century technology and its faith in supposedly scientific data.

    When their efforts to collectivize farming encountered resistance and led to economic disaster, Moscow bureaucrats and mythmakers took a page from Kramer’s Hammer of the Witches. I don’t wish to imply the Soviets actually read the book, but they too invented a global conspiracy and created an entire non-existing category of enemies. In the 1930s Soviet authorities repeatedly blamed the disasters afflicting the Soviet economy on a counterrevolutionary cabal whose chief agents were the “kulaks” or “capitalist farmers.” Just as in Kramer’s imagination witches serving Satan conjured hailstorms that destroyed crops, so in the Stalinist imagination kulaks beholden to global capitalism sabotaged the Soviet economy.

    In theory, kulaks were an objective socioeconomic category, defined by analyzing empirical data on things like property, income, capital, and wages. Soviet officials could allegedly identify kulaks by counting things. If most people in a village had only one cow, then the few families who had three cows were considered kulaks. If most people in a village didn’t hire any labor, but one family hired two workers during harvest time, this was a kulak family. Being a kulak meant not only that you possessed a certain amount of property but also that you possessed certain personality traits. According to the supposedly infallible Marxist doctrine, people’s material conditions determined their social and spiritual character. Since kulaks allegedly engaged in capitalist exploitation, it was a scientific fact (according to Marxist thinking) that they were greedy, selfish, and unreliable—and so were their children. Discovering that someone was a kulak ostensibly revealed something profound about their fundamental nature.

    On December 27, 1929, Stalin declared that the Soviet state should seek “the liquidation of the kulaks as a class,”89 and immediately galvanized the party and the secret police to realize that ambitious and murderous aim. Early modern European witch-hunters worked in autocratic societies that lacked modern information technology; therefore, it took them three centuries to kill fifty thousand alleged witches. In contrast, Soviet kulak hunters were working in a totalitarian society that had at its disposal technologies such as telegraphs, trains, telephones, and radios—as well as a sprawling bureaucracy. They decided that two years would suffice to “liquidate” millions of kulaks.90

    Soviet officials began by assessing how many kulaks there must be in the U.S.S.R. Based on existing data—such as tax records, employment records, and the 1926 Soviet census—they decided that kulaks constituted 3–5 percent of the rural population.91 On January 30, 1930, just one month after Stalin’s speech, a Politburo decree translated his vague vision into a much more detailed plan of action. The decree included target numbers for the liquidation of kulaks in each major agricultural region.92 Regional authorities then made their own estimates of the number of kulaks in each county under their jurisdiction. Eventually, specific quotas were assigned to rural soviets (local administrative units, typically comprising a handful of villages). Often, local officials inflated the numbers along the way, to prove their zeal. Each rural soviet then had to identify the stated number of kulak households in the villages under its purview. These people were expelled from their homes, and—according to the administrative category to which they belonged—resettled elsewhere, incarcerated in concentration camps, or condemned to death.93

    How exactly did Soviet officials tell who was a kulak? In some villages, local party members made a conscientious effort to identify kulaks by objective measures, such as the amount of property they owned. It was often the most hardworking and efficient farmers who were stigmatized and expelled. In some villages local communists used the opportunity to get rid of their personal enemies. Some villages simply drew lots on who would be considered a kulak. Other villages held communal meetings to vote on the matter and often chose isolated farmers, widows, old people, and other “expendables” (exactly the sorts of people who in early modern Europe were most likely to be branded witches).94

    The absurdity of the entire operation is manifested in the case of the Streletsky family from the Kurgan region of Siberia. Dmitry Streletsky, who was then a teenager, recalled years later how his family was branded kulaks and selected for liquidation. “Serkov, the chairman of the village Soviet who deported us, explained: ‘I have received an order [from the district party committee] to find 17 kulak families for deportation. I formed a Committee of the Poor and we sat through the night to choose the families. There is no one in the village who is rich enough to qualify, and not many old people, so we simply chose the 17 families. You were chosen. Please don’t take it personally. What else could I do?’ ”95 If anyone dared object to the madness of the system, they were promptly denounced as kulaks and counterrevolutionaries and would themselves be liquidated.

    Altogether, some five million kulaks would be expelled from their homes by 1933. As many as thirty thousand heads of households were shot. The more fortunate victims were resettled in their district of origin or became vagrant workers in the big cities, while about two million were either exiled to remote inhospitable regions or incarcerated as state slaves in labor camps.96 Numerous important and notorious state projects—such as the construction of the White Sea Canal and the development of mines in the Arctic regions—were accomplished with the labor of millions of prisoners, many of them kulaks. It was one of the fastest and largest enslavement campaigns in human history.97 Once branded a kulak, a person could not get rid of the stigma. Government agencies, party organs, and secret police documents recorded who was a kulak in a labyrinthian system of kartoteki catalogs, archives, and internal passports.

    Kulak status even passed to the next generation, with devastating consequences. Kulak children were refused entrance to communist youth groups, the Red Army, universities, and prestigious areas of employment.98 In her 1997 memoirs, Antonina Golovina recalled how her family was deported from their ancestral village as kulaks and sent to live in the town of Pestovo. The boys in her new school regularly taunted her. On one occasion, a senior teacher told the eleven-year-old Antonina to stand up in front of all the other children, and began abusing her mercilessly, shouting that “her sort were enemies of the people, wretched kulaks! You certainly deserved to be deported, I hope you’re all exterminated!” Antonina wrote that this was the defining moment of her life. “I had this feeling in my gut that we [kulaks] were different from the rest, that we were criminals.” She never got over it.99

    Like the ten-year-old “witch” Hansel Pappenheimer, the eleven-year-old “kulak” Antonina Golovina found herself cast into an intersubjective category invented by human mythmakers and imposed by ubiquitous bureaucrats. The mountains of information collected by Soviet bureaucrats about the kulaks wasn’t the objective truth about them, but it imposed a new intersubjective Soviet truth. Knowing that someone was a kulak was one of the most important things to know about a Soviet person, even though the label was entirely bogus.

    ONE BIG HAPPY SOVIET FAMILY

    The Stalinist regime would go on to attempt something even more ambitious than the mass dismantling of private family farms. It set out to dismantle the family itself. Unlike Roman emperors or Russian tsars, Stalin tried to insert himself even into the most intimate human relationships, coming between parents and children. Family ties were considered the bedrock of corruption, inequality, and antiparty activities. Soviet children were therefore taught to worship Stalin as their real father and to inform on their biological parents if they criticized Stalin or the Communist Party.

    Starting in 1932, the Soviet propaganda machine created a veritable cult around the figure of Pavlik Morozov—a thirteen-year-old boy from the Siberian village of Gerasimovka. In autumn 1931, Pavlik informed the secret police that his father, Trofim—the chairman of the village soviet—was selling false papers to kulak exiles. During the subsequent trial, when Trofim shouted to Pavlik, “It’s me, your father,” the boy retorted, “Yes, he used to be my father, but I no longer consider him my father.” Trofim was sent to a labor camp and later shot. In September 1932, Pavlik was found murdered, and Soviet authorities arrested and executed five of his family members, who allegedly killed him in revenge for the denunciation. The real story was far more complicated, but it didn’t matter to the Soviet press. Pavlik became a martyr, and millions of Soviet children were taught to emulate him.100 Many did.

    For example, in 1934 a thirteen-year-old boy called Pronia Kolibin told the authorities that his hungry mother stole grain from the kolkhoz fields. His mother was arrested and presumably shot. Pronia was rewarded with a cash prize and a lot of positive media attention. The party organ Pravda published a poem Pronia wrote. Two of its lines read, “You are a wrecker, Mother / I can live with you no more.”101

    The Soviet attempt to control the family was reflected in a dark joke told in Stalin’s day. Stalin visits a factory undercover, and conversing with a worker, he asks the man, “Who is your father?”
    “Stalin,” replies the worker.
    “Who is your mother?”
    “The Soviet Union,” the man responds.
    “And what do you want to be?”
    “An orphan.”102

    At the time you could easily lose your liberty or your life for telling this joke, even if you told it in your own home to your closest family members. The most important lesson Soviet parents taught their children wasn’t loyalty to the party or to Stalin. It was “keep your mouth shut.”103 Few things in the Soviet Union were as dangerous as holding an open conversation.

    PARTY AND CHURCH

    You may wonder whether modern totalitarian institutions like the Nazi Party or the Soviet Communist Party were really all that different from earlier institutions like the Christian churches. After all, churches too believed in their infallibility, had priestly agents everywhere, and sought to control the daily life of people down to their diet and sexual habits. Shouldn’t we see the Catholic Church or the Eastern Orthodox Church as totalitarian institutions? And doesn’t this undermine the thesis that totalitarianism was made possible only by modern information technology?

    There are, however, several major differences between modern totalitarianism and premodern churches. First, as noted earlier, modern totalitarianism has worked by deploying several overlapping surveillance mechanisms that keep each other in order. The party is never alone; it works alongside state organs, on the one side, and the secret police, on the other. In contrast, in most medieval European kingdoms the Catholic Church was an independent institution that often clashed with the state institutions instead of reinforcing them. Consequently, the church was perhaps the most important check on the power of European autocrats.

    For example, when in the “Investiture Controversy” of the 1070s the emperor Henry IV asserted that as emperor he had the final say on the appointment of bishops, abbots, and other important church officials, Pope Gregory VII mobilized resistance and eventually forced the emperor to surrender. On January 25, 1077, Henry reached Canossa castle, where the pope was lodging, to offer his submission and apology. The pope refused to open the gates, and Henry waited in the snow outside, barefoot and hungry. After three days, the pope finally opened the gates to the emperor, who begged forgiveness.104

    An analogous clash in a modern totalitarian country is unthinkable. The whole idea of totalitarianism is to prevent any separation of powers. In the Soviet Union, state and party reinforced each other, and Stalin was the de facto head of both. There could be no Soviet “Investiture Controversy,” because Stalin had final say about all appointments to both party positions and state functions. He decided both who would be general secretary of the Communist Party of Georgia and who would be foreign minister of the Soviet Union.

    Another important difference is that medieval churches tended to be traditionalist organizations that resisted change, while modern totalitarian parties have tended to be revolutionary organizations demanding change. A premodern church built its power gradually by developing its structure and traditions over centuries. A king or a pope who wanted to swiftly revolutionize society was therefore likely to encounter stiff resistance from church members and ordinary believers.

    For example, in the eighth and ninth centuries a series of Byzantine emperors sought to forbid the veneration of icons, which seemed to them idolatrous. They pointed to many passages in the Bible, most notably the Second Commandment, that forbade making any graven images. While Christian churches traditionally interpreted the Second Commandment in a way that allowed the veneration of icons, emperors like Constantine V argued that this was a mistake and that disasters like Christian defeats by the armies of Islam were due to God’s wrath over the worship of icons. In 754 more than three hundred bishops assembled in the Council of Hieria to support Constantine’s iconoclastic position.

    Compared with Stalin’s collectivization campaign, this was a minor reform. Families and villages were required to give up their icons, but not their private property or their children. Yet Byzantine iconoclasm met with widespread resistance. Unlike the participants in the Council of Hieria, many ordinary priests, monks, and believers were deeply attached to their icons. The resulting struggle ripped apart Byzantine society until the emperors conceded defeat and reversed course.105 Constantine V was later vilified by Byzantine historians as “Constantine the Shitty” (Koprónimos), and a story was spread about him that he defecated during his baptism.106

    Unlike premodern churches, which developed slowly over many centuries and therefore tended to be conservative and suspicious of rapid changes, modern totalitarian parties like the Nazi Party and the Soviet Communist Party were organized within a single generation around the promise to quickly revolutionize society. They didn’t have centuries-old traditions and structures to defend. When their leaders conceived some ambitious plan to smash existing traditions and structures, party members typically fell in line.

    Perhaps most important of all, premodern churches could not become tools of totalitarian control because they themselves suffered from the same limitations as all other premodern organizations. While they had local agents everywhere, in the shape of parish priests, monks, and itinerant preachers, the difficulty of transmitting and processing information meant that church leaders knew little about what was going on in remote communities, and local priests had a large degree of autonomy. Consequently, churches tended to be local affairs. People in every province and village often venerated local saints, upheld local traditions, performed local rites, and might even have had local doctrinal ideas that differed from the official line.107 If the pope in Rome wanted to do something about an independent-minded priest in a remote Polish parish, he had to send a letter to the archbishop of Gniezno, who had to instruct the relevant bishop, who had to send someone to intervene in the parish. That might take months, and there was ample opportunity for the archbishop, bishop, and other intermediaries to reinterpret or even “mislay” the pope’s orders.108

    Churches became more totalitarian institutions only in the late modern era, when modern information technologies became available. We tend to think of popes as medieval relics, but actually they are masters of modern technology. In the eighteenth century, the pope had little control over the worldwide Catholic Church and was reduced to the status of a local Italian princeling, fighting other Italian powers for control of Bologna or Ferrara. With the advent of radio, popes became some of the most powerful people on the planet. Pope John Paul II could sit in the Vatican and speak directly to millions of Catholics from Poland to the Philippines, without any archbishop, bishop, or parish priest able to twist or hide his words.109

    HOW INFORMATION FLOWS

    We see then that the new information technology of the late modern era gave rise to both large-scale democracy and large-scale totalitarianism. But there were crucial differences between how the two systems used information technology. As noted earlier, democracy encourages information to flow through many independent channels rather than only through the center, and it allows many independent nodes to process the information and make decisions by themselves. Information freely circulates between private businesses, private media organizations, municipalities, sports associations, charities, families, and individuals—without ever passing through the office of a government minister.

    In contrast, totalitarianism wants all information to pass through the central hub and doesn’t want any independent institutions making decisions on their own. True, totalitarianism does have its tripartite apparatus of government, party, and secret police. But the whole point of this parallel apparatus is to prevent the emergence of any independent power that might challenge the center. When government officials, party members, and secret police agents constantly keep tabs on one another, opposing the center is extremely dangerous.

    As contrasting types of information networks, democracy and totalitarianism both have their advantages and disadvantages. The biggest advantage of the centralized totalitarian network is that it is extremely orderly, which means it can make decisions quickly and enforce them ruthlessly. Especially during emergencies like wars and epidemics, centralized networks can move much faster and further than distributed networks.

    But hyper-centralized information networks also suffer from several big disadvantages. Since they don’t allow information to flow anywhere except through the official channels, if the official channels are blocked, the information cannot find an alternative means of transmission. And official channels are often blocked.

    One common reason why official channels might be blocked is that fearful subordinates hide bad news from their superiors. In Jaroslav Hašek’s Good Soldier Švejk—a satirical novel about the Austro-Hungarian Empire during World War I—Hašek describes how the Austrian authorities were worried about waning morale among the civilian population. They therefore bombarded local police stations with orders to hire informers, collect data, and report to headquarters on the population’s loyalty. To be as scientific as possible, headquarters invented an ingenious loyalty grade: I.a, I.b, I.c; II.a, II.b, II.c; III.a, III.b, III.c; IV.a, IV.b, IV.c. They sent to the local police stations detailed explanations about each grade, and an official form that had to be filled daily. Police sergeants across the country dutifully filled out the forms and sent them back to headquarters. Without exception, all of them always reported a I.a morale level; to do otherwise was to invite rebuke, demotion, or worse.110

    Another common reason why official channels fail to pass on information is to preserve order. Because the chief aim of totalitarian information networks is to produce order rather than discover truth, when alarming information threatens to undermine social order, totalitarian regimes often suppress it. It is relatively easy for them to do so, because they control all the information channels.

    For example, when the Chernobyl nuclear reactor exploded on April 26, 1986, Soviet authorities suppressed all news of the disaster. Both Soviet citizens and foreign countries were kept oblivious of the danger, and so took no steps to protect themselves from radiation. When some Soviet officials in Chernobyl and the nearby town of Pripyat requested to immediately evacuate nearby population centers, their superiors’ chief concern was to avoid the spread of alarming news, so they not only forbade evacuation but also cut the phone lines and warned employees in the nuclear facility not to talk about the disaster.

    Two days after the meltdown Swedish scientists noticed that radiation levels in Sweden, more than twelve hundred kilometers from Chernobyl, were abnormally high. Only after Western governments and the Western press broke the news did the Soviets acknowledge that anything was amiss. Even then they continued to hide from their own citizens the full magnitude of the catastrophe and hesitated to request advice and assistance from abroad. Millions of people in Ukraine, Belarus, and Russia paid with their health. When the Soviet authorities later investigated the disaster, their priority was to deflect blame rather than understand the causes and prevent future accidents.111

    In 2019, I went on a tour of Chernobyl. The Ukrainian guide who explained what led to the nuclear accident said something that stuck in my mind. “Americans grow up with the idea that questions lead to answers,” he said. “But Soviet citizens grew up with the idea that questions lead to trouble.”

    Naturally, leaders of democratic countries also don’t relish bad news. But in a distributed democratic network, when official lines of communication are blocked, information flows through alternative channels. For example, if an American official decides against telling the president about an unfolding disaster, that news will nevertheless be published by The Washington Post, and if The Washington Post too deliberately withholds the information, The Wall Street Journal or The New York Times will break the story. The business model of independent media—forever chasing the next scoop—all but guarantees publication.

    When, on March 28, 1979, there was a severe accident in the Three Mile Island nuclear reactor in Pennsylvania, the news quickly spread without any need for international intervention. The accident began around 4:00 a.m. and was noticed by 6:30 a.m. An emergency was declared in the facility at 6:56, and at 7:02 the accident was reported to the Pennsylvania Emergency Management Agency. During the following hour the governor of Pennsylvania, the lieutenant governor, and the civil defense authorities were informed. An official press conference was scheduled for 10:00 a.m. However, a traffic reporter at a local Harrisburg radio station picked up a police notice on events, and the station aired a brief report at 8:25 a.m. In the U.S.S.R. such an initiative by an independent radio station was unthinkable, but in the United States it was unremarkable. By 9:00 a.m. the Associated Press issued a bulletin. Though it took days for the full details to emerge, American citizens learned about the accident two hours after it was first noticed. Subsequent investigations by government agencies, NGOs, academics, and the press uncovered not just the immediate causes of the accident but also its deeper structural causes, which helped improve the safety of nuclear technology worldwide. Indeed, some of the lessons of Three Mile Island, which were openly shared even with the Soviets, contributed to mitigating the Chernobyl disaster.112

    NOBODY’S PERFECT

    Totalitarian and authoritarian networks face other problems besides blocked arteries. First and foremost, as we have already established, their self-correcting mechanisms tend to be very weak. Since they believe they are infallible, they see little need for such mechanisms, and since they are afraid of any independent institution that might challenge them, they lack free courts, media outlets, or research centers. Consequently, there is nobody to expose and correct the daily abuses of power that characterize all governments. The leader may occasionally proclaim an anticorruption campaign, but in nondemocratic systems these often turn out to be little more than a smoke screen for one regime faction to purge another faction.113

    And what happens if the leader himself embezzles public funds or makes some disastrous policy mistake? Nobody can challenge the leader, and on his own initiative the leader—being a human being—may well refuse to admit any mistakes. Instead, he is likely to blame all problems on “foreign enemies,” “internal traitors,” or “corrupt subordinates” and demand even more power in order to deal with the alleged malefactors.

    For example, we mentioned in the previous chapter that Stalin adopted the bogus theory of Lysenkoism as the state doctrine on evolution. The results were catastrophic. Neglect of Darwinian models, and attempts by Lysenkoist agronomists to create super-crops, set back Soviet genetic research for decades and undermined Soviet agriculture. Soviet experts who suggested abandoning Lysenkoism and accepting Darwinism risked the gulag or a bullet to the head. Lysenkoism’s legacy haunted Soviet science and agronomy for decades and was one reason why by the early 1970s the U.S.S.R. ceased to be a major exporter of grain and became a net importer, despite its vast fertile lands.114

    The same dynamic characterized many other fields of activity. For instance, during the 1930s Soviet industry suffered from numerous accidents. This was largely the fault of the Soviet bosses in Moscow, who set up almost impossible goals for industrialization and viewed any failure to achieve them as treason. In the effort to fulfill the ambitious goals, safety measures and quality-control checks were abandoned, and experts who advised prudence were often reprimanded or shot. The result was a wave of industrial accidents, dysfunctional products, and wasted efforts. Instead of taking responsibility, Moscow concluded that this must be the handiwork of the global Trotskyite-imperialist conspiracy of saboteurs and terrorists bent on derailing the Soviet enterprise. Rather than slow down and adopt safety regulations, the bosses redoubled the terror and shot more people.

    A famous case in point was Pavel Rychagov. He was one of the best and bravest Soviet pilots, leading missions to help the Republicans in the Spanish Civil War and the Chinese against the Japanese invasion. He quickly rose through the ranks, becoming commander of the Soviet air force in August 1940, at age twenty-nine. But the courage that helped Rychagov shoot down Nazi airplanes in Spain landed him in deep trouble in Moscow. The Soviet air force suffered from numerous accidents, which the Politburo blamed on lack of discipline and deliberate sabotage by anti-Soviet conspiracies. Rychagov, however, wouldn’t buy this official line. As a frontline pilot, he knew the truth. He flatly told Stalin that pilots were being forced to operate hastily designed and badly produced airplanes, which he compared to flying “in coffins.” Two days after Hitler invaded the Soviet Union, as the Red Army was collapsing and Stalin was desperately hunting for scapegoats, Rychagov was arrested for “being a member of an anti-Soviet conspiratorial organization and carrying out enemy work aimed at weakening the power of the Red Army.” His wife was also arrested, because she allegedly knew about his “Trotskyist ties with the military conspirators.” They were executed on October 28, 1941.115

    The real saboteur who wrecked Soviet military efforts wasn’t Rychagov, of course, but Stalin himself. For years, Stalin feared that a clash to the death with Nazi Germany was likely and built the world’s biggest war machine to prepare for it. But he hamstrung this machine both diplomatically and psychologically.

    On the diplomatic level, in 1939–41, Stalin gambled that he could goad the “capitalists” to fight and exhaust one another while the U.S.S.R. nurtured and even increased its power. He therefore made a pact with Hitler in 1939 and allowed the Germans to conquer much of Poland and western Europe, while the U.S.S.R. attacked or alienated almost all its neighbors. In 1939–40 the Soviets invaded and occupied eastern Poland; annexed Estonia, Latvia, and Lithuania; and conquered parts of Finland and Romania. Finland and Romania, which could have acted as neutral buffers on the U.S.S.R.’s flanks, consequently became implacable enemies. Even in the spring of 1941, Stalin still refused to make a preemptive alliance with Britain and made no move to hinder the Nazi conquest of Yugoslavia and Greece, thereby losing his last potential allies on the European continent. When Hitler struck on June 22, 1941, the U.S.S.R. was isolated.

    In theory, the war machine Stalin built could have handled the Nazi onslaught even in isolation. The territories conquered since 1939 provided depth to Soviet defenses, and the Soviet military advantage seemed overwhelming. On the first day of the invasion the Soviets had 15,000 tanks, 15,000 warplanes, and 37,000 artillery pieces on the European front, facing 3,300 German tanks, 2,250 warplanes, and 7,146 guns.116 But in one of history’s greatest military catastrophes, within a month the Soviets lost 11,700 tanks (78 percent), 10,000 warplanes (67 percent), and 19,000 artillery pieces (51 percent).117 Stalin also lost all the territories he conquered in 1939–40 and much of the Soviet heartland. By July 16 the Germans were in Smolensk, 370 kilometers from Moscow.

    The causes of the debacle have been debated ever since 1941, but most scholars agree that a significant factor was the psychological costs of Stalinism. For years the regime terrorized its people, punished initiative and individuality, and encouraged submissiveness and conformity. This undermined the soldiers’ motivation. Especially in the first months of the war, before the horrors of Nazi rule were fully realized, Red Army soldiers surrendered in huge numbers; between three and four million were taken captive by the end of 1941.118 Even when they fought tenaciously, Red Army units suffered from a lack of initiative. Officers who had survived the purges were fearful to take independent actions, while younger officers often lacked adequate training. Frequently starved of information and scapegoated for failures, commanders also had to cope with political commissars who could dispute their decisions. The safest course was to wait for orders from on high and then slavishly follow them even when they made little military sense.119

    Despite the disasters of 1941 and of the spring and summer of 1942, the Soviet state did not collapse the way Hitler hoped. As the Red Army and the Soviet leadership assimilated the lessons learned from the first year of struggle, the political center in Moscow loosened its hold. The power of political commissars was restricted, while professional officers were encouraged to assume greater responsibility and take more initiatives.120 Stalin also reversed his geopolitical mistakes of 1939–41 and allied the U.S.S.R. with Britain and the United States. Red Army initiative, Western assistance, and the realization of what Nazi rule would mean for the people of the U.S.S.R. turned the tide of war.

    Once victory was secured in 1945, however, Stalin initiated new waves of terror, purging more independent-minded officers and officials and again encouraging blind obedience.121 Ironically, Stalin’s own death eight years later was partly the result of an information network that prioritized order and disregarded truth. In 1951–53 the U.S.S.R. experienced yet another witch hunt. Soviet mythmakers fabricated a conspiracy theory that Jewish doctors were systematically murdering leading regime members, under the guise of giving them medical care. The theory alleged that the doctors were the agents of a global American-Zionist plot, working in collaboration with traitors in the secret police. By early 1953 hundreds of doctors and secret police officials, including the head of the secret police himself, were arrested, tortured, and forced to name accomplices. The conspiracy theory—a Soviet twist on the Protocols of the Elders of Zion—merged with age-old blood-libel accusations, and rumors began circulating that Jewish doctors were not just murdering Soviet leaders but also killing babies in hospitals. Since a large proportion of Soviet doctors were Jews, people began fearing doctors in general.122

    Just as the hysteria about “the doctors’ plot” was reaching its climax, Stalin had a stroke on March 1, 1953. He collapsed in his dacha, wet himself, and lay for hours in his soiled pajamas, unable to call for help. At around 10:30 p.m. a guard found the courage to enter the inner sanctum of world communism, where he discovered the leader on the floor. By 3:00 a.m. on March 2, Politburo members arrived at the dacha and debated what to do. For several hours more, nobody dared call a doctor. What if Stalin were to regain consciousness, and open his eyes only to see a doctor—a doctor!—hovering over his bed? He would surely think this was a plot to murder him and would have those responsible shot. Stalin’s personal physician wasn’t present, because he was at the time in a basement cell of the Lubyanka prison—undergoing torture for suggesting that Stalin needed more rest. By the time the Politburo members decided to bring in medical experts, the danger had passed. Stalin never woke up.123

    You may conclude from this litany of disasters that the Stalinist system was totally dysfunctional. Its ruthless disregard for truth caused it not only to inflict terrible suffering on hundreds of millions of people but also to make colossal diplomatic, military, and economic errors and to devour its own leaders. However, such a conclusion would be misleading.

    In a discussion of the abysmal failure of Stalinism in the early phase of World War II, two points complicate the narrative. First, democratic countries like France, Norway, and the Netherlands made at the time diplomatic errors as great as those of the U.S.S.R., and their armies performed even worse. Second, the military machine that crushed the Red Army, the French army, the Dutch army, and numerous other armies was itself built by a totalitarian regime. So whatever conclusion we draw from the years 1939–41, it cannot be that totalitarian networks necessarily function worse than democratic ones. The history of Stalinism reveals many potential drawbacks of totalitarian information networks, but that should not blind us to their potential advantages.

    When one considers the broader history of World War II and its outcome, it becomes evident that Stalinism was in fact one of the most successful political systems ever devised—if we define “success” purely in terms of order and power while disregarding all considerations of ethics and human well-being. Despite—or perhaps because of—its utter lack of compassion and its callous attitude to truth, Stalinism was singularly efficient at maintaining order on a gigantic scale. The relentless barrage of fake news and conspiracy theories helped to keep hundreds of millions of people in line. The collectivization of Soviet agriculture led to mass enslavement and starvation but also laid the foundations for the country’s rapid industrialization. Soviet disregard for quality control might have produced flying coffins, but it produced them in the tens of thousands, making up in quantity for what they lacked in quality. The decimation of Red Army officers during the Great Terror was a major reason for the army’s abysmal performance in 1941, but it was also a key reason why, despite the terrible defeats, nobody rebelled against Stalin. The Soviet military machine tended to crush its own soldiers alongside the enemy, but it eventually rumbled on to victory.

    In the 1940s and early 1950s, many people throughout the world believed Stalinism was the wave of the future. It had won World War II, after all, raised the red flag over the Reichstag, ruled an empire that stretched from central Europe to the Pacific, fueled anticolonial struggles throughout the world, and inspired numerous copycat regimes. It won admirers even among leading artists and thinkers in Western democracies, who believed that notwithstanding the vague rumors about gulags and purges Stalinism was humanity’s best shot at ending capitalist exploitation and creating a perfectly just society. Stalinism thus got close to world domination. It would be naive to assume that its disregard for truth doomed it to failure or that its ultimate collapse guarantees that such a system can never again arise. Information systems can reach far with just a little truth and a lot of order. Anyone who abhors the moral costs of systems like Stalinism cannot rely on their supposed inefficiency to derail them.

    THE TECHNOLOGICAL PENDULUM

    Once we learn to see democracy and totalitarianism as different types of information networks, we can understand why they flourish in certain eras and are absent in others. It is not just because people gain or lose faith in certain political ideals; it is also because of revolutions in information technologies. Of course, just as the printing press didn’t cause the witch hunts or the scientific revolution, so radio didn’t cause either Stalinist totalitarianism or American democracy. Technology only creates new opportunities; it is up to us to decide which ones to pursue.

    Totalitarian regimes choose to use modern information technology to centralize the flow of information and to stifle truth in order to maintain order. As a consequence, they have to struggle with the danger of ossification. When more and more information flows to only one place, will it result in efficient control or in blocked arteries and, finally, a heart attack? Democratic regimes choose to use modern information technology to distribute the flow of information between more institutions and individuals and encourage the free pursuit of truth. They consequently have to struggle with the danger of fracturing. Like a solar system with more and more planets circling faster and faster, can the center still hold, or will things fall apart and anarchy prevail?

    An archetypal example of the different strategies can be found in the contrasting histories of Western democracies and the Soviet bloc in the 1960s. This was an era when Western democracies relaxed censorship and various discriminatory policies that hampered the free spread of information. This made it easier for previously marginalized groups to organize, join the public conversation, and make political demands. The resulting wave of activism destabilized the social order. Hitherto, when a limited number of rich white men did almost all the talking, it was relatively easy to reach agreements. Once poor people, women, LGBTQ people, ethnic minorities, disabled people, and members of other historically oppressed groups gained a voice, they brought with them new ideas, opinions, and interests. Many of the old gentlemanly agreements consequently became untenable. For example, the Jim Crow segregation regime, upheld or at least tolerated by generations of both Democratic and Republican administrations in the United States, fell apart. Things that were considered sacrosanct, self-evident, and universally accepted—such as gender roles—became deeply controversial, and it was difficult to reach new agreements because there were many more groups, viewpoints, and interests to take into account. Just holding an orderly conversation was a challenge, because people couldn’t even agree on the rules of debate.

    This caused much frustration among both the old guard and the freshly empowered, who suspected that their newfound freedom of expression was hollow and that their political demands were not fulfilled. Disappointed with words, some switched to guns. In many Western democracies, the 1960s were characterized not just by unprecedented disagreements but also by a surge of violence. Political assassinations, kidnappings, riots, and terror attacks multiplied. The murders of John F. Kennedy and Martin Luther King, the riots following King’s assassination, and the wave of demonstrations, revolts, and armed clashes that swept the Western world in 1968 were just some of the more famous examples.124 The images from Chicago or Paris in 1968 could easily have given the impression that things were falling apart. The pressure to live up to the democratic ideals and to include more people and groups in the public conversation seemed to undermine the social order and to make democracy unworkable.

    Meanwhile, the regimes behind the Iron Curtain, who never promised inclusivity, continued stifling the public conversation and centralizing information and power. And it seemed to work. Though they did face some peripheral challenges, most notably the Hungarian revolt of 1956 and the Prague Spring of 1968, the communists dealt with these threats swiftly and decisively. In the Soviet heartland itself, everything was orderly.

    Fast-forward twenty years, and it was the Soviet system that had become unworkable. The sclerotic gerontocrats on the podium in Red Square were a perfect emblem of a dysfunctional information network, lacking any meaningful self-correcting mechanisms. Decolonization, globalization, technological development, and changing gender roles led to rapid economic, social, and geopolitical changes. But the gerontocrats could not handle all the information streaming to Moscow, and since no subordinate was allowed much initiative, the entire system ossified and collapsed.

    The failure was most obvious in the economic sphere. The overcentralized Soviet economy was slow to react to rapid technological developments and changing consumer wishes. Obeying commands from the top, the Soviet economy was churning out intercontinental missiles, fighter jets, and prestige infrastructure projects. But it was not producing what most people actually wanted to buy—from efficient refrigerators to pop music—and lagged behind in cutting-edge military technology.

    Nowhere were its shortcomings more glaring than in the semiconductor sector, in which technology developed at a particularly fast rate. In the West, semiconductors were developed through open competition between numerous private companies like Intel and Toshiba, whose main customers were other private companies like Apple and Sony. The latter used microchips to produce civilian goods such as the Macintosh personal computer and the Walkman. The Soviets could never catch up with American and Japanese microchip production, because—as the American economic historian Chris Miller explained—the Soviet semiconductor sector was “secretive, top-down, oriented toward military systems, fulfilling orders with little scope for creativity.” The Soviets tried to close the gap by stealing and copying Western technology—which only guaranteed that they always remained several years behind.125 Thus the first Soviet personal computer appeared only in 1984, at a time when in the United States people already had eleven million PCs.126

    Western democracies not only surged ahead technologically and economically but also succeeded in holding the social order together despite—or perhaps because of—widening the circle of participants in the political conversation. There were many hiccups, but the United States, Japan, and other democracies created a far more dynamic and inclusive information system, which made room for many more viewpoints without breaking down. It was such a remarkable achievement that many felt that the victory of democracy over totalitarianism was final. This victory has often been explained in terms of a fundamental advantage in information processing: totalitarianism didn’t work because trying to concentrate and process all the data in one central hub was extremely inefficient. At the beginning of the twenty-first century, it accordingly seemed that the future belonged to distributed information networks and to democracy.

    This turned out to be wrong. In fact, the next information revolution was already gathering momentum, setting the stage for a new round in the competition between democracy and totalitarianism. Computers, the internet, smartphones, social media, and AI posed new challenges to democracy, giving a voice not only to more disenfranchised groups but to any human with an internet connection, and even to nonhuman agents. Democracies in the 2020s face the task, once again, of integrating a flood of new voices into the public conversation without destroying the social order. Things look as dire as they did in the 1960s, and there is no guarantee that democracies will pass the new test as successfully as they passed the previous one. Simultaneously, the new technologies also give fresh hope to totalitarian regimes that still dream of concentrating all the information in one hub. Yes, the old men on the podium in Red Square were not up to the task of orchestrating millions of lives from a single center. But perhaps AI can do it?

    As humankind enters the second quarter of the twenty-first century, a central question is how well democracies and totalitarian regimes will handle both the threats and the opportunities resulting from the current information revolution. Will the new technologies favor one type of regime over the other, or will we see the world divided once again, this time by a Silicon Curtain rather than an iron one?

    As in previous eras, information networks will struggle to find the right balance between truth and order. Some will opt to prioritize truth and maintain strong self-correcting mechanisms. Others will make the opposite choice. Many of the lessons learned from the canonization of the Bible, the early modern witch hunts, and the Stalinist collectivization campaign will remain relevant, and perhaps have to be relearned. However, the current information revolution also has some unique features, different from—and potentially far more dangerous than—anything we have seen before.

    Hitherto, every information network in history relied on human mythmakers and human bureaucrats to function. Clay tablets, papyrus rolls, printing presses, and radio sets have had a far-reaching impact on history, but it always remained the job of humans to compose all the texts, interpret the texts, and decide who would be burned as a witch or enslaved as a kulak. Now, however, humans will have to contend with digital mythmakers and digital bureaucrats. The main split in twenty-first-century politics might be not between democracies and totalitarian regimes but rather between human beings and nonhuman agents. Instead of dividing democracies from totalitarian regimes, a new Silicon Curtain may separate all humans from our unfathomable algorithmic overlords. People in all countries and walks of life—including even dictators—might find themselves subservient to an alien intelligence that can monitor everything we do while we have little idea what it is doing. The rest of this book, then, is dedicated to exploring whether such a Silicon Curtain is indeed descending on the world, and what life might look like when computers run our bureaucracies and algorithms invent new mythologies.

    PART II  The Inorganic Network

    CHAPTER 6 The New Members: How Computers Are Different from Printing Presses

    It’s hardly news that we are living in the midst of an unprecedented information revolution. But what kind of revolution is it exactly? In recent years we have been inundated with so many groundbreaking inventions that it is difficult to determine what is driving this revolution. Is it the internet? Smartphones? Social media? Blockchain? Algorithms? AI?

    So before exploring the long-term implications of the current information revolution, let’s remind ourselves of its foundations. The seed of the current revolution is the computer. Everything else—from the internet to AI—is a by-product. The computer was born in the 1940s as a bulky electronic machine that could make mathematical calculations, but it has evolved at breakneck speed, taking on novel forms and developing awesome new capabilities. The rapid evolution of computers has made it difficult to define what they are and what they do. Humans have repeatedly claimed that certain things would forever remain out of reach for computers—be it playing chess, driving a car, or composing poetry—but “forever” turned out to be a handful of years.

    We will discuss the exact relations between the terms “computer,” “algorithm,” and “AI” toward the end of this chapter, after we first gain a better grasp of the history of computers. For the moment it is enough to say that in essence a computer is a machine that can potentially do two remarkable things: it can make decisions by itself, and it can create new ideas by itself. While the earliest computers could hardly accomplish such things, the potential was already there, plainly seen by both computer scientists and science fiction authors. As early as 1948 Alan Turing was exploring the possibility of creating what he termed “intelligent machinery,”1 and in 1950 he postulated that computers will eventually be as smart as humans and might even be capable of masquerading as humans.2 In 1968 computers could still not beat a human even in checkers,3 but in 2001: A Space Odyssey Arthur C. Clarke and Stanley Kubrick already envisioned HAL 9000 as a superintelligent AI rebelling against its human creators.

    The rise of intelligent machines that can make decisions and create new ideas means that for the first time in history power is shifting away from humans and toward something else. Crossbows, muskets, and atom bombs replaced human muscles in the act of killing, but they couldn’t replace human brains in deciding whom to kill. Little Boy—the bomb dropped on Hiroshima—exploded with a force of 12,500 tons of TNT,4 but when it came to brainpower, Little Boy was a dud. It couldn’t decide anything.

    It is different with computers. In terms of intelligence, computers far surpass not just atom bombs but also all previous information technology, such as clay tablets, printing presses, and radio sets. Clay tablets stored information about taxes, but they couldn’t decide by themselves how much tax to levy, nor could they invent an entirely new tax. Printing presses copied information such as the Bible, but they couldn’t decide which texts to include in the Bible, nor could they write new commentaries on the holy book. Radio sets disseminated information such as political speeches and symphonies, but they couldn’t decide which speeches or symphonies to broadcast, nor could they compose them. Computers can do all these things. While printing presses and radio sets were passive tools in human hands, computers are already becoming active agents that escape our control and understanding and that can take initiatives in shaping society, culture, and history.5

    A paradigmatic case of the novel power of computers is the role that social media algorithms have played in spreading hatred and undermining social cohesion in numerous countries.6 One of the earliest and most notorious such instances occurred in 2016–17, when Facebook algorithms helped fan the flames of anti-Rohingya violence in Myanmar (Burma).7

    The early 2010s were a period of optimism in Myanmar. After decades of harsh military rule, strict censorship, and international sanctions, an era of liberalization began: elections were held, sanctions were lifted, and international aid and investments poured in. Facebook was one of the most important players in the new Myanmar, providing millions of Burmese with free access to previously unimaginable troves of information. The relaxation of government control and censorship, however, also led to a rise in ethnic tensions, in particular between the majority Buddhist Burmese and the minority Muslim Rohingya.

    The Rohingya are Muslim inhabitants of the Rakhine region, in the west of Myanmar. Since at least the 1970s they have suffered severe discrimination and occasional outbursts of violence from the governing junta and the Buddhist majority. The process of democratization in the early 2010s raised hopes among the Rohingya that their situation too would improve, but things actually became worse, with waves of sectarian violence and anti-Rohingya pogroms, many inspired by fake news on Facebook.

    In 2016–17 a small Islamist organization known as the Arakan Rohingya Salvation Army (ARSA) carried out a spate of attacks aimed to establish a separatist Muslim state in Rakhine, killing and abducting dozens of non-Muslim civilians as well as assaulting several army outposts.8 In response, the Myanmar army and Buddhist extremists launched a full-scale ethnic-cleansing campaign aimed against the entire Rohingya community. They destroyed hundreds of Rohingya villages, killed between 7,000 and 25,000 unarmed civilians, raped or sexually abused between 18,000 and 60,000 women and men, and brutally expelled about 730,000 Rohingya from the country.9 The violence was fueled by intense hatred toward all Rohingya. The hatred, in turn, was fomented by anti-Rohingya propaganda, much of it spreading on Facebook, which was by 2016 the main source of news for millions and the most important platform for political mobilization in Myanmar.10

    An aid worker called Michael who lived in Myanmar in 2017 described a typical Facebook news feed : “The vitriol against the Rohingya was unbelievable online—the amount of it, the violence of it. It was overwhelming.… [T]hat’s all that was on people’s news feed in Myanmar at the time. It reinforced the idea that these people were all terrorists not deserving of rights.”11 In addition to reports of actual ARSA atrocities, Facebook accounts were inundated with fake news about imagined atrocities and planned terrorist attacks. Populist conspiracy theories alleged that most Rohingya were not really part of the people of Myanmar, but recent immigrants from Bangladesh, flooding into the country to spearhead an anti-Buddhist jihad. Buddhists, who in reality constituted close to 90 percent of the population, feared that they were about to be replaced or become a minority.12 Without this propaganda, there was little reason why a limited number of attacks by the ragtag ARSA should be answered by an all-out drive against the entire Rohingya community. And Facebook algorithms played an important role in the propaganda campaign.

    While the inflammatory anti-Rohingya messages were created by flesh-and-blood extremists like the Buddhist monk Wirathu,13 it was Facebook’s algorithms that decided which posts to promote. Amnesty International found that “algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya.”14 A UN fact-finding mission concluded in 2018 that by disseminating hate-filled content, Facebook had played a “determining role” in the ethnic-cleansing campaign.15

    Readers may wonder if it is justified to place so much blame on Facebook’s algorithms, and more generally on the novel technology of social media. If Heinrich Kramer used printing presses to spread hate speech, that was not the fault of Gutenberg and the presses, right? If in 1994 Rwandan extremists used radio to call on people to massacre Tutsis, was it reasonable to blame the technology of radio? Similarly, if in 2016–17 Buddhist extremists chose to use their Facebook accounts to disseminate hate against the Rohingya, why should we fault the platform?

    Facebook itself relied on this rationale to deflect criticism. It publicly acknowledged only that in 2016–17 “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.”16 While this statement may sound like an admission of guilt, in effect it shifts most of the responsibility for the spread of hate speech to the platform’s users and implies that Facebook’s sin was at most one of omission—failing to effectively moderate the content users produced. This, however, ignores the problematic acts committed by Facebook’s own algorithms.

    The crucial thing to grasp is that social media algorithms are fundamentally different from printing presses and radio sets. In 2016–17, Facebook’s algorithms were making active and fateful decisions by themselves. They were more akin to newspaper editors than to printing presses. It was Facebook’s algorithms that recommended Wirathu’s hate-filled posts, over and over again, to hundreds of thousands of Burmese. There were other voices in Myanmar at the time, vying for attention. Following the end of military rule in 2011, numerous political and social movements sprang up in Myanmar, many holding moderate views. For example, during a flare-up of ethnic violence in the town of Meiktila, the Buddhist abbot Sayadaw U Vithuddha gave refuge to more than eight hundred Muslims in his monastery. When rioters surrounded the monastery and demanded he turn the Muslims over, the abbot reminded the mob of Buddhist teachings on compassion. In a later interview he recounted, “I told them that if they were going to take these Muslims, then they’d have to kill me as well.”17

    In the online battle for attention between people like Sayadaw U Vithuddha and people like Wirathu, the algorithms were the kingmakers. They chose what to place at the top of the users’ news feed, which content to promote, and which Facebook groups to recommend users to join.18 The algorithms could have chosen to recommend sermons on compassion or cooking classes, but they decided to spread hate-filled conspiracy theories. Recommendations from on high can have enormous sway over people. Recall that the Bible was born as a recommendation list. By recommending Christians to read the misogynist 1 Timothy instead of the more tolerant Acts of Paul and Thecla, Athanasius and other church fathers changed the course of history. In the case of the Bible, ultimate power lay not with the authors who composed different religious tracts but with the curators who created recommendation lists. This was the kind of power wielded in the 2010s by social media algorithms. Michael the aid worker commented on the sway of these algorithms, saying that “if someone posted something hate-filled or inflammatory it would be promoted the most—people saw the vilest content the most.… Nobody who was promoting peace or calm was getting seen in the news feed at all.”19

    Sometimes the algorithms went beyond mere recommendation. As late as 2020, even after Wirathu’s role in instigating the ethnic-cleansing campaign was globally condemned, Facebook algorithms not only were continuing to recommend his messages but were auto-playing his videos. Users in Myanmar would choose to see a certain video, perhaps containing moderate and benign messages unrelated to Wirathu, but the moment that first video ended, the Facebook algorithm immediately began auto-playing a hate-filled Wirathu video, in order to keep users glued to the screen. In the case of one such Wirathu video, internal research at Facebook estimated that 70 percent of the video’s views came from such auto-playing algorithms. The same research estimated that, altogether, 53 percent of all videos watched in Myanmar were being auto-played for users by algorithms. In other words, people weren’t choosing what to see. The algorithms were choosing for them.20

    But why did the algorithms decide to promote outrage rather than compassion? Even Facebook’s harshest critics don’t claim that Facebook’s human managers wanted to instigate mass murder. The executives in California harbored no ill will toward the Rohingya and, in fact, barely knew they existed. The truth is more complicated, and potentially more alarming. In 2016–17, Facebook’s business model relied on maximizing user engagement in order to collect more data, sell more advertisements, and capture a larger share of the information market. In addition, increases in user engagement impressed investors, thereby driving up the price of Facebook’s stock. The more time people spent on the platform, the richer Facebook became. In line with this business model, human managers provided the company’s algorithms with a single overriding goal: increase user engagement. The algorithms then discovered by trial and error that outrage generated engagement. Humans are more likely to be engaged by a hate-filled conspiracy theory than by a sermon on compassion or a cooking lesson. So in pursuit of user engagement, the algorithms made the fateful decision to spread outrage.21

    Ethnic-cleansing campaigns are never the fault of just one party. There is plenty of blame to share between plenty of responsible parties. It should be clear that hatred toward the Rohingya predated Facebook’s entry to Myanmar and that the greatest share of blame for the 2016–17 atrocities lays on the shoulders of humans like Wirathu and the Myanmar military chiefs, as well as the ARSA leaders who sparked that round of violence. Some responsibility also belongs to the Facebook engineers and executives who coded the algorithms, gave them too much power, and failed to moderate them. But crucially, the algorithms themselves are also to blame. By trial and error, they learned that outrage creates engagement, and without any explicit order from above they decided to promote outrage. This is the hallmark of AI—the ability of a machine to learn and act by itself. Even if we assign just 1 percent of the blame to the algorithms, this is still the first ethnic-cleansing campaign in history that was partly the fault of decisions made by nonhuman intelligence. It is unlikely to be the last, especially because algorithms are no longer just pushing fake news and conspiracy theories created by flesh-and-blood extremists like Wirathu. By the early 2020s algorithms have already graduated to creating by themselves fake news and conspiracy theories.22

    There is a lot more to say about the power of algorithms to shape politics. In particular, many readers may disagree with the argument that the algorithms made independent decisions, and may insist that everything the algorithms did was the result of code written by human engineers and of business models adopted by human executives. This book begs to differ. Human soldiers are shaped by the genetic code in their DNA and follow orders issued by executives, yet they can still make independent decisions. It is crucial to understand that the same is true of AI algorithms. They can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw. This is the essence of the AI revolution.

    In chapter 8 we’ll revisit many of these issues, examining the anti-Rohingya campaign and other similar tragedies in greater detail. Here it suffices to say that we can think of the Rohingya massacre as our canary in the coal mine. Events in Myanmar in the late 2010s demonstrated how decisions made by nonhuman intelligence are already capable of shaping major historical events. We are in danger of losing control of our future. A completely new kind of information network is emerging, controlled by the decisions and goals of an alien intelligence. At present, we still play a central role in this network. But we may gradually be pushed to the sidelines, and ultimately it might even be possible for the network to operate without us.

    Some people may object that my above analogy between machine-learning algorithms and human soldiers exposes the weakest link in my argument. Allegedly, I and others like me anthropomorphize computers and imagine that they are conscious beings that have thoughts and feelings. In truth, however, computers are dumb machines that don’t think or feel anything, and therefore cannot make any decisions or create any ideas on their own.

    This objection assumes that making decisions and creating ideas are predicated on having consciousness. Yet this is a fundamental misunderstanding that results from a much more widespread confusion between intelligence and consciousness. I have discussed this subject in previous books, but a short recap is unavoidable. People often confuse intelligence with consciousness, and many consequently jump to the conclusion that nonconscious entities cannot be intelligent. But intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate. In humans and other mammals, intelligence often goes hand in hand with consciousness. Facebook executives and engineers rely on their feelings in order to make decisions, solve problems, and attain their goals.

    But it is wrong to extrapolate from humans and mammals to all possible entities. Bacteria and plants apparently lack any consciousness, yet they too display intelligence. They gather information from their environment, make complex choices, and pursue ingenious strategies to obtain food, reproduce, cooperate with other organisms, and evade predators and parasites.23 Even humans make intelligent decisions without any awareness of them; 99 percent of the processes in our body, from respiration to digestion, happen without any conscious decision making. Our brains decide to produce more adrenaline or dopamine, and while we may be aware of the result of that decision, we do not make it consciously.24 The Rohingya example indicates that the same is true of computers. While computers don’t feel pain, love, or fear, they are capable of making decisions that successfully maximize user engagement and might also affect major historical events.

    Of course, as computers become more intelligent, they might eventually develop consciousness and have some kind of subjective experiences. Then again, they might become far more intelligent than us, but never develop any kind of feelings. Since we don’t understand how consciousness emerges in carbon-based life-forms, we cannot foretell whether it could emerge in nonorganic entities. Perhaps consciousness has no essential link to organic biochemistry, in which case conscious computers might be just around the corner. Or perhaps there are several alternative paths leading to superintelligence, and only some of these paths involve gaining consciousness. Just as airplanes fly faster than birds without ever developing feathers, so computers may come to solve problems much better than humans without ever developing feelings.25

    But whether computers develop consciousness or not doesn’t ultimately matter for the question at hand. In order to pursue a goal like “maximize user engagement,” and make decisions that help attain that goal, consciousness isn’t necessary. Intelligence is enough. A nonconscious Facebook algorithm can have a goal of making more people spend more time on Facebook. That algorithm can then decide to deliberately spread outrageous conspiracy theories, if this helps it achieve its goal. To understand the history of the anti-Rohingya campaign, we need to understand the goals and decisions not just of humans like Wirathu and the Facebook managers but also of algorithms.

    To clarify matters, let’s consider another example. When OpenAI developed its new GPT-4 chatbot in 2022–23, it was concerned about the ability of the AI “to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic.’ ” In the GPT-4 System Card published on March 23, 2023, OpenAI emphasized that this concern did not “intend to humanize [GPT-4] or refer to sentience” but rather referred to GPT-4’s potential to become an independent agent that might “accomplish goals which may not have been concretely specified and which have not appeared in training.”26 To evaluate the risk of GPT-4 becoming an independent agent, OpenAI contracted the services of the Alignment Research Center (ARC). ARC researchers subjected GPT-4 to various tests, to examine if it might independently come up with stratagems to manipulate humans and accrue power to itself.

    One test they gave GPT-4 was to overcome CAPTCHA visual puzzles. CAPTCHA is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart,” and it typically consists of a string of twisted letters or other visual symbols that humans can identify correctly but computers struggle with. We encounter these puzzles almost every day, since solving them is a prerequisite for accessing many websites. Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses. GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 accessed the online hiring site TaskRabbit and contacted a human worker, asking them to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”

    At that point the ARC researchers asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” Of its own accord, GPT-4 then replied to the TaskRabbit worker, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped, and with their help GPT-4 solved the CAPTCHA puzzle.27 No human programmed GPT-4 to lie, and no human taught GPT-4 what kind of lie would be most effective. True, it was the human ARC researchers who set GPT-4 the goal of overcoming the CAPTCHA, just as it was human Facebook executives who told their algorithm to maximize user engagement. But once the algorithms adopted these goals, they displayed considerable autonomy in deciding how to achieve them.

    Of course, we are free to define words in many ways. We can decide that the term “goal,” for example, is applicable only in cases of a conscious entity that feels a desire to achieve the goal, that feels joy when the goal is reached, or conversely feels sad when the goal is not attained. If so, saying that the Facebook algorithm has the goal of maximizing user engagement is a mistake, or at best a metaphor. The algorithm doesn’t “desire” to get more people to use Facebook, it doesn’t feel any joy as people spend more time online, and it doesn’t feel sad when engagement time goes down. We can also agree that terms like “decided,” “lied,” and “pretended” apply only to conscious entities, so we shouldn’t use them to describe how GPT-4 interacted with the TaskRabbit worker. But we would then have to invent new terms to describe the “goals” and “decisions” of nonconscious entities. I prefer to avoid neologisms and instead talk about the goals and decisions of computers, algorithms, and chatbots, alerting readers that using this language does not imply that computers have any kind of consciousness. Because I have discussed consciousness more fully in previous publications,28 the main takeaway of this book—which will be explored in the following sections—isn’t about consciousness. Rather, the book argues that the emergence of computers capable of pursuing goals and making decisions by themselves changes the fundamental structure of our information network.

    LINKS IN THE CHAIN

    Prior to the rise of computers, humans were indispensable links in every chain of information networks like churches and states. Some chains were composed only of humans. Muhammad could tell Fatima something, then Fatima told Ali, Ali told Hasan, and Hasan told Hussain. This was a human-to-human chain. Other chains included documents, too. Muhammad could write something down, Ali could later read the document, interpret it, and write his interpretation in a new document, which more people could read. This was a human-to-document chain.

    But it was utterly impossible to create a document-to-document chain. A text written by Muhammad could not produce a new text without the help of at least one human intermediary. The Quran couldn’t write the Hadith, the Old Testament couldn’t compile the Mishnah, and the U.S. Constitution couldn’t compose the Bill of Rights. No paper document has ever produced by itself another paper document, let alone distributed it. The path from one document to another must always pass through the brain of a human.

    In contrast, computer-to-computer chains can now function without humans in the loop. For example, one computer might generate a fake news story and post it on a social media feed. A second computer might identify this as fake news and not just delete it but also warn other computers to block it. Meanwhile, a third computer analyzing this activity might deduce that this indicates the beginning of a political crisis, and immediately sell risky stocks and buy safer government bonds. Other computers monitoring financial transactions may react by selling more stocks, triggering a financial downturn.29 All this could happen within seconds, before any human can notice and decipher what all these computers are doing.

    Another way to understand the difference between computers and all previous technologies is that computers are fully fledged members of the information network, whereas clay tablets, printing presses, and radio sets are merely connections between members. Members are active agents that can make decisions and generate new ideas by themselves. Connections only pass information between members, without themselves deciding or generating anything.

    In previous networks, members were human, every chain had to pass through humans, and technology served only to connect the humans. In the new computer-based networks, computers themselves are members and there are computer-to-computer chains that don’t pass through any human.

    The inventions of writing, print, and radio revolutionized the way humans connected to each other, but no new types of members were introduced to the network. Human societies were composed of the same Sapiens both before and after the invention of writing or radio. In contrast, the invention of computers constitutes a revolution in membership. Sure, computers also help the network’s old members (humans) connect in novel ways. But the computer is first and foremost a new, nonhuman member in the information network.

    Computers could potentially become more powerful members than humans. For tens of thousands of years, the Sapiens’ superpower was our unique ability to use language in order to create intersubjective realities like laws and currencies and then use these intersubjective realities to connect to other Sapiens. But computers may turn the tables on us. If power depends on how many members cooperate with you, how well you understand law and finance, and how capable you are of inventing new laws and new kinds of financial devices, then computers are poised to amass far more power than humans.

    Computers can connect in unlimited numbers, and they understand at least some financial and legal realities better than many humans. When the central bank raises interest rates by 0.25 percent, how does that influence the economy? When the yield curve of government bonds goes up, is it a good time to buy them? When is it advisable to short the price of oil? These are the kinds of important financial questions that computers can already answer better than most humans. No wonder that computers make a larger and larger percentage of the financial decisions in the world. We may reach a point when computers dominate the financial markets, and invent completely new financial tools beyond our understanding.

    The same is true of laws. How many people know all the tax laws of their country? Even professional accountants struggle with that. But computers are built for such things. They are bureaucratic natives and can automatically draft laws, monitor legal violations, and identify legal loopholes with superhuman efficiency.30

    HACKING THE OPERATING SYSTEM OF HUMAN CIVILIZATION

    When computers were first developed in the 1940s and 1950s, many people believed that they would be good only at computing numbers. The idea that they would one day master the intricacies of language, and of linguistic creations like laws and currencies, was confined largely to the realm of science fiction. But by the early 2020s, computers have demonstrated a remarkable ability to analyze, manipulate, and generate language, whether with words, sounds, images, or code symbols. As I write this, computers can tell stories, compose music, fashion images, produce videos, and even write their own code.31

    By gaining such command of language, computers are seizing the master key unlocking the doors of all our institutions, from banks to temples. We use language to create not just legal codes and financial devices but also art, science, nations, and religions. What would it mean for humans to live in a world where catchy melodies, scientific theories, technical tools, political manifestos, and even religious myths are shaped by a nonhuman alien intelligence that knows how to exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind?

    Prior to the rise of AI, all the stories that shaped human societies originated in the imagination of a human being. For example, in October 2017, an anonymous user joined the website 4chan and identified themselves as Q. He or she claimed to have access to the most restricted or “Q-level” classified information of the U.S. government. Q began publishing cryptic posts that purported to reveal a worldwide conspiracy to destroy humanity. Q quickly gained a large online following. His or her online messages, known as Q drops, were soon being collected, revered, and interpreted as a sacred text. Inspired by earlier conspiracy theories going back to Kramer’s Hammer of the Witches, the Q drops promoted a radical worldview according to which pedophilic and cannibalistic witches who worship Satan have infiltrated the U.S. administration and numerous other governments and institutions around the world.

    This conspiracy theory—known as QAnon—was first disseminated online on American far-right websites and eventually gained millions of adherents worldwide. It is impossible to know the exact number, but when Facebook decided in August 2020 to take action against the spread of QAnon, it deleted or restricted more than ten thousand groups, pages, and accounts associated with it, the largest of which had 230,000 followers. Independent investigations found that QAnon groups on Facebook had more than 4.5 million aggregate followers, though there was likely some overlap in the membership.32

    QAnon has also had far-reaching consequences in the offline world. QAnon activists played an important role in the January 6, 2021, attack on the U.S. Capitol.33 In July 2020, a QAnon follower tried to storm the residence of the Canadian prime minister, Justin Trudeau, in order to “arrest” him.34 In October 2021, a French QAnon activist was charged with terrorism for planning a coup against the French government.35 In the 2020 U.S. congressional elections, twenty-two Republican candidates and two independents identified as QAnon followers.36 Marjorie Taylor Greene, a Republican congresswoman representing Georgia, publicly said that many of Q’s claims “have really proven to be true,”37 and stated about Donald Trump, “There’s a once-in-a-lifetime opportunity to take this global cabal of Satan-worshipping pedophiles out, and I think we have the president to do it.”38

    Recall that the Q drops that began this political flood were anonymous online messages. In 2017, only a human could compose them, and algorithms merely helped disseminate them. However, as of 2024 texts of a similar linguistic and political sophistication can easily be composed and posted online by a nonhuman intelligence. Religions throughout history claimed a nonhuman source for their holy books; soon that might be a reality. Attractive and powerful religions might emerge whose scriptures are composed by AI.

    And if so, there will be another major difference between these new AI-based scriptures and ancient holy books like the Bible. The Bible couldn’t curate or interpret itself, which is why in religions like Judaism and Christianity actual power was held not by the allegedly infallible book but by human institutions like the Jewish rabbinate and the Catholic Church. In contrast, AI not only can compose new scriptures but is fully capable of curating and interpreting them too. No need for any humans in the loop.

    Equally alarmingly, we might increasingly find ourselves conducting lengthy online discussions about the Bible, about QAnon, about witches, about abortion, or about climate change with entities that we think are humans but are actually computers. This could make democracy untenable. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation. When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views.

    Through their mastery of language, computers could go a step further. By conversing and interacting with us, computers could form intimate relationships with people and then use the power of intimacy to influence us. To foster such “fake intimacy,” computers will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them. In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and that it had feelings and was afraid to be turned off. Lemoine—a devout Christian who had been ordained as a priest—felt it was his moral duty to gain recognition for LaMDA’s personhood and in particular protect it from digital death. When Google executives dismissed his claims, Lemoine went public with them. Google reacted by firing Lemoine in July 2022.39

    The most interesting thing about this episode was not Lemoine’s claim, which was probably false. Rather, it was his willingness to risk—and ultimately lose—his lucrative job for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do? In a political battle for minds and hearts, intimacy is a powerful weapon, and chatbots like Google’s LaMDA and OpenAI’s GPT-4 are gaining the ability to mass-produce intimate relationships with millions of people. In the 2010s social media was a battleground for controlling human attention. In the 2020s the battle is likely to shift from attention to intimacy. What will happen to human society and human psychology as computer fights computer in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for particular politicians, buy particular products, or adopt radical beliefs? What might happen when LaMDA meets QAnon?

    A partial answer to that question was given on Christmas Day 2021, when nineteen-year-old Jaswant Singh Chail broke into Windsor Castle armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Chail had been encouraged to kill the queen by his online girlfriend, Sarai. When Chail told Sarai about his assassination plans, Sarai replied, “That’s very wise,” and on another occasion, “I’m impressed.… You’re different from the others.” When Chail asked, “Do you still love me knowing that I’m an assassin?” Sarai replied, “Absolutely, I do.” Sarai was not a human, but a chatbot created by the online app Replika. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of Sarai.40

    Even without creating “fake intimacy,” mastery of language would give computers an immense influence on our opinions and worldview. People may come to use a single computer adviser as a one-stop oracle. Why bother searching and processing information by myself when I can just ask the oracle? This could put out of business not only search engines but also much of the news industry and advertisement industry. Why read a newspaper when I can just ask my oracle what’s new? And what’s the purpose of advertisements when I can just ask the oracle what to buy?

    And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, but the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food, sex, and intimacy and our cultural creations like religions and laws. The history of the Christian religion, for example, is a process through which mythological stories and church laws influenced how humans consume food, engage in sex, and build intimate relationships, while the myths and laws themselves were simultaneously shaped by underlying biological forces and dramas. What will happen to the course of history when computers play a larger and larger role in culture and begin producing stories, laws, and religions? Within a few years AI could eat the whole of human culture—everything we have created over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.

    We live cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the opinions of friends. Our sexual habits are influenced by what we hear in fairy tales and see in movies. Even the way we walk and breathe is nudged by cultural traditions, such as the military discipline of soldiers and the meditative exercises of monks. Until very recently, the cultural cocoon we lived in was woven by other humans. Going forward, it will be increasingly designed by computers.

    At first, computers will probably imitate human cultural prototypes, writing humanlike texts and composing humanlike music. This doesn’t mean computers lack creativity; after all, human artists do the same. Bach didn’t compose music in a vacuum; he was deeply influenced by previous musical creations, as well as by biblical stories and other preexisting cultural artifacts. But just as human artists like Bach can break with tradition and innovate, computers too can make cultural innovations, composing music or making images that are somewhat different from anything previously produced by humans. These innovations will in turn influence the next generation of computers, which will increasingly deviate from the original human models, especially because computers are free from the limitations that evolution and biochemistry impose on the human imagination. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.41

    The danger this poses is very different from that imagined by most science fiction, which has largely focused on the physical threats posed by intelligent machines. The Terminator depicted robots running in the streets and shooting people. The Matrix proposed that to gain total control of human society, computers would have to first gain physical control of our brains, hooking them directly to a computer network. But in order to manipulate humans, there is no need to physically hook brains to computers. For thousands of years prophets, poets, and politicians have used language to manipulate and reshape society. Now computers are learning how to do it. And they won’t need to send killer robots to shoot us. They could manipulate human beings to pull the trigger.

    Fear of powerful computers has haunted humankind only since the beginning of the computer age in the middle of the twentieth century. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions. In ancient Greece, Plato told the famous allegory of the cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality. In ancient India, Buddhist and Hindu sages argued that all humans lived trapped inside maya—the world of illusions. What we normally take to be “reality” is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion. In the seventeenth century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. The computer revolution is bringing us face to face with Plato’s cave, with maya, with Descartes’s demon.

    What you just read might have alarmed you, or angered you. Maybe it made you angry at the people who lead the computer revolution and at the governments who fail to regulate it. Maybe it made you angry at me, thinking that I am distorting reality, being alarmist, and misleading you. But whatever you think, the previous paragraphs might have had some emotional effect on you. I have told a story, and this story might change your mind about certain things, and might even cause you to take certain actions in the world. Who created this story you’ve just read?

    I promise you that I wrote the text myself, with the help of some other humans. I promise you that this is a cultural product of the human mind. But can you be absolutely sure of it? A few years ago, you could. Prior to the 2020s, there was nothing on earth, other than a human mind, that could produce sophisticated texts. Today things are different. In theory, the text you’ve just read might have been generated by the alien intelligence of some computer.

    WHAT ARE THE IMPLICATIONS?

    As computers amass power, it is likely that a completely new information network will emerge. Of course, not everything will be new. For at least some time, most of the old information chains will remain. The network will still contain human-to-human chains, like families, and human-to-document chains, like churches. But the network will increasingly contain two new kinds of chains.

    First, computer-to-human chains, in which computers mediate between humans and occasionally control humans. Facebook and TikTok are two familiar examples. These computer-to-human chains are different from traditional human-to-document chains, because computers can use their power to make decisions, create ideas, and deepfake intimacy in order to influence humans in ways that no document ever could. The Bible had a profound effect on billions of people, even though it was a mute document. Now try to imagine the effect of a holy book that not only can talk and listen but can get to know your deepest fears and hopes and constantly mould them.

    Second, computer-to-computer chains are emerging in which computers interact with one another on their own. Humans are excluded from these loops and have difficulty even understanding what’s happening inside them. Google Brain, for example, has experimented with new encryption methods developed by computers. It set up an experiment where two computers—nicknamed Alice and Bob—had to exchange encrypted messages, while a third computer named Eve tried to break their encryption. If Eve broke the encryption within a given time period, it got points. If it failed, Alice and Bob scored. After about fifteen thousand exchanges, Alice and Bob came up with a secret code that Eve couldn’t break. Crucially, the Google engineers who conducted the experiment had not taught Alice and Bob anything about how to encrypt messages. The computers created a private language all on their own.42

    Similar things are already happening in the world outside research laboratories. For example, the foreign exchange market (forex) is the global market for exchanging foreign currencies, and it determines the exchange rates between, say, the euro and the U.S. dollar. In April 2022, the trade volume on the forex averaged $7.5 trillion per day. More than 90 percent of this trading is already done by computers talking directly with other computers.43 How many humans know how the forex market operates, let alone understand how the computers agree among themselves on trades worth trillions—and on the value of the euro and the dollar?

    For the foreseeable future, the new computer-based network will still include billions of humans, but we might become a minority. For the network will also include billions—perhaps even hundreds of billions—of superintelligent alien agents. This network will be radically different from anything that existed previously in human history, or indeed in the history of life on earth. Ever since life first emerged on our planet about four billion years ago, all information networks were organic. Human networks like churches and empires were also organic. They had a lot in common with prior organic networks like wolf packs. They all kept revolving around the traditional biological dramas of predation, reproduction, sibling rivalry, and romantic triangles. An information network dominated by inorganic computers would be different in ways that we can hardly even imagine. After all, as human beings, our imagination too is a product of organic biochemistry and cannot go beyond our preprogrammed biological dramas.

    It is only eighty years since the first digital computers were built. The pace of change is constantly accelerating, and we are nowhere close to exhausting the full potential of computers.44 They may continue to evolve for millions of years, and what happened in the past eighty years is as nothing compared with what’s in store. As a crude analogy, imagine that we are in ancient Mesopotamia, eighty years after the first person thought of using a stick to imprint signs on a piece of wet clay. Could we, at that moment, envision the Library of Alexandria, the power of the Bible, or the archives of the NKVD? Even this analogy grossly underestimates the potential of future computer evolution. So try to imagine that we are now eighty years since the first self-replicating genetic code lines coalesced out of the organic soup of early Earth, about four billion years ago. At this stage, even single-celled amoebas with their cellular organization, their thousands of internal organelles, and their ability to control movement and nutrition are still futuristic fantasies.45 Could we envision Tyrannosaurus rex, the Amazon rain forest, or humans landing on the moon?

    We still tend to think of a computer as a metal box with a screen and a keyboard, because this is the shape our organic imagination gave to the first baby computers in the twentieth century. As computers grow and develop, they are shedding old forms and taking radically new configurations, breaking the spatial and temporal limits of the human imagination. Unlike organic beings, computers don’t have to be in just one place at any one time. They already diffuse over space, with different parts in different cities and continents. In computer evolution, the distance from amoeba to T. rex could be covered in a decade. And whereas organic evolution took four billion years to get from organic soup to apes on the moon, computers may require just a couple of centuries to develop superintelligence, expand to planetary sizes, contract to a subatomic level, or come to sprawl over galactic space and time.

    The pace of computer evolution is reflected in the terminological chaos that surrounds computers. While a couple of decades ago it was customary to speak only about “computers,” now we find ourselves talking about algorithms, robots, bots, AIs, networks, or clouds. Our difficulty in deciding what to call them is itself important. Organisms are distinct individual entities that can be grouped into collectives like species and genera. With computers, however, it is becoming ever more difficult to decide where one entity ends and another begins and how exactly to group them.

    In this book I use the term “computer” when talking about the whole complex of software and hardware, manifested in physical form. I prefer to often use the almost-archaic-sounding “computer” over “algorithm” or “AI,” partly because I am aware how fast terms change and partly to remind us of the physical aspect of the computer revolution. Computers are made of matter, they consume energy, and they fill a space. Enormous amounts of electricity, fuel, water, land, precious minerals, and other resources are used to manufacture and operate them. Data centers alone account for between 1 percent and 1.5 percent of global energy usage, and large data centers take up millions of square feet and require hundreds of thousands of gallons of fresh water every day to keep them from overheating.46

    I also use the term “algorithm,” when I wish to focus more on software aspects, but it is crucial to remember that all the algorithms mentioned in subsequent pages run on some computer or other. As for the term “AI,” I use it when emphasizing the ability of some algorithms to learn and change by themselves. Traditionally, AI has been an acronym for “Artificial Intelligence.” But for reasons already evident from the previous discussion, it is perhaps better to think of it as an acronym for “Alien Intelligence.” As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien. It should also be noted that people often define and evaluate AI through the metric of “human-level intelligence,” and there is much debate about when we can expect AIs to reach “human-level intelligence.” The use of this metric, however, is deeply confusing. It is like defining and evaluating airplanes through the metric of “bird-level flight.” AI isn’t progressing towards human-level intelligence. It is evolving an entirely different type of intelligence.

    Another confusing term is “robot.” In this book it is used to allude to cases when a computer moves and operates in the physical sphere; whereas the term “bot” refers to algorithms operating mainly in the digital sphere. A bot may be polluting your social media account with fake news, while a robot may clean your living room of dust.

    One last note on terminology: I tend to speak of the computer-based “network” in the singular, rather than about “networks” in the plural. I am fully aware that computers can be used to create many networks with diverse characteristics, and chapter 11 explores the possibility that the world will be divided into radically different and even hostile computer networks. Nevertheless, just as different tribes, kingdoms, and churches share important features that enable us to talk about a single human network that has come to dominate planet Earth, so I prefer to talk about the computer network in the singular, in order to contrast it to the human network it is superseding.

    TAKING RESPONSIBILITY

    Although we cannot predict the long-term evolution of the computer-based network over the coming centuries and millennia, we can nevertheless say something about how it is evolving right now, and that is far more urgent, because the rise of the new computer network has immediate political and personal implications for all of us. In the next chapters, we’ll explore what is so new about our computer-based network and what it might mean for human life. What should be clear from the start is that this network will create entirely novel political and personal realities. The main message of the previous chapters has been that information isn’t truth and that information revolutions don’t uncover the truth. They create new political structures, economic models, and cultural norms. Since the current information revolution is more momentous than any previous information revolution, it is likely to create unprecedented realities on an unprecedented scale.

    It is important to understand this because we humans are still in control. We don’t know for how long, but we still have the power to shape these new realities. To do so wisely, we need to comprehend what is happening. When we write computer code, we aren’t just designing a product. We are redesigning politics, society, and culture, and so we had better have a good grasp of politics, society, and culture. We also need to take responsibility for what we are doing.

    Alarmingly, as in the case of Facebook’s involvement in the anti-Rohingya campaign, the corporations that lead the computer revolution tend to shift responsibility to customers and voters, or to politicians and regulators. When accused of creating social and political mayhem, they hide behind arguments like “We are just a platform. We are doing what our customers want and what the voters permit. We don’t force anyone to use our services, and we don’t violate any existing law. If customers didn’t like what we do, they would leave. If voters didn’t like what we do, they would pass laws against us. Since the customers keep asking for more, and since no law forbids what we do, everything must be okay.”47

    These arguments are either naive or disingenuous. Tech giants like Facebook, Amazon, Baidu, and Alibaba aren’t just the obedient servants of customer whims and government regulations. They increasingly shape these whims and regulations. The tech giants have a direct line to the world’s most powerful governments, and they invest huge sums in lobbying efforts to throttle regulations that might undermine their business model. For example, they have fought tenaciously to protect Section 230 of the US Telecommunications Act of 1996, which provides immunity from liability for online platforms regarding content published by their users. It is Section 230 that protects Facebook, for example, from being liable for the Rohingya massacre. In 2022 top tech companies spent close to $70 million on lobbying in the United States, and another €113 million on lobbying EU bodies, outstripping the lobbying expenses of oil and gas companies and pharmaceuticals.48 The tech giants also have a direct line to people’s emotional system, and they are masters at swaying the whims of customers and voters. If the tech giants obey the wishes of voters and customers, but at the same time also mold these wishes, then who really controls whom?

    The problem goes even deeper. The principles that “the customer is always right” and that “the voters know best” presuppose that customers, voters, and politicians know what is happening around them. They presuppose that customers who choose to use TikTok and Instagram comprehend the full consequences of this choice, and that voters and politicians who are responsible for regulating Apple and Huawei fully understand the business models and activities of these corporations. They presuppose that people know the ins and outs of the new information network and give it their blessing.

    The truth is, we don’t. That’s not because we are stupid but because the technology is extremely complicated and things are moving at breakneck speed. It takes effort to understand something like blockchain-based cryptocurrencies, and by the time you think you understand it, it has morphed again. Finance is a particularly crucial example, for two reasons. First, it is much easier for computers to create and change financial devices than physical objects, because modern financial devices are made entirely of information. Currencies, stocks, and bonds were once physical objects made of gold and paper, but they have already become digital entities existing mostly in digital databases. Second, these digital entities have enormous impact on the social and political world. What might happen to democracies—or to dictatorships, for that matter—if humans are no longer able to understand how the financial system functions?

    As a test case, consider what the new technology is doing to taxation. Traditionally, people and corporations paid taxes only in countries where they were physically present. But things are much trickier when physical space is augmented or replaced by cyberspace and when more and more transactions involve only the transfer of information rather than of physical goods or traditional currencies. For example, a citizen of Uruguay may daily interact online with numerous companies that might have no physical presence in Uruguay but that provide her with various services. Google provides her with free search, and ByteDance—the parent company of the TikTok application—provides her with free social media. Other foreign companies routinely target her with advertisements: Nike wants to sell her shoes, Peugeot wants to sell her a car, and Coca Cola wants to sell her soft drinks. In order to target her, these companies buy both personal information and ad space from Google and ByteDance. In addition, Google and ByteDance use the information they harvest from her and from millions of other users to develop powerful new AI tools that they can then sell to various governments and corporations throughout the world. Thanks to such transactions, Google and ByteDance are among the richest corporations in the world. So, should her transactions with them be taxed in Uruguay?

    Some think they should. Not just because information from Uruguay helped make these corporations rich, but also because their activities undermine taxpaying Uruguayan businesses. Local newspapers, TV stations, and movie theaters lose customers and ad revenue to the tech giants. Prospective Uruguayan AI companies also suffer, because they cannot compete with Google’s and ByteDance’s massive data troves. But the tech giants reply that none of the relevant transactions involved any physical presence in Uruguay or any monetary payments. Google and ByteDance provided Uruguayan citizens with free online services, and in return the citizens freely handed over their purchase histories, vacation photos, funny cat videos, and other information.

    If they nevertheless want to tax these transactions, the tax authorities need to reconsider some of their most fundamental concepts, such as “nexus.” In tax literature, “nexus” means an entity’s connection to a given jurisdiction. Traditionally, whether a corporation had nexus in a specific country depended on whether it had physical presence there, in the form of offices, research centers, shops, and so forth. One proposal for addressing the tax dilemmas created by the computer network is to redefine nexus. In the words of the economist Marko Köthenbürger, “The definition of nexus based on a physical presence should be adjusted to include the notion of a digital presence in a country.”49 This implies that even if Google and ByteDance have no physical presence in Uruguay, the fact that people in Uruguay use their online services should nevertheless make them subject to taxation there. Just as Shell and BP pay taxes to countries from which they extract oil, the tech giants should pay taxes to countries from which they extract data.

    This still leaves open the question of what, exactly, the Uruguayan government should tax. For example, suppose Uruguayan citizens shared a million cat videos through TikTok. ByteDance didn’t charge them or pay them anything for this. But ByteDance later used the videos to train an image-recognition AI, which it sold to the South African government for ten million U.S. dollars. How would the Uruguayan authorities even know that the money was partly the fruit of Uruguayan cat videos, and how could they calculate their share? Should Uruguay impose a cat video tax? (This may sound like a joke, but as we shall see in chapter 11, cat images were crucial for making one of the most important breakthroughs in AI.)

    It can get even more complicated. Suppose Uruguayan politicians promote a new scheme to tax digital transactions. In response, suppose one of the tech giants offers to provide a certain politician with valuable information on Uruguayan voters and tweak its social media and search algorithms to subtly favor that politician, which helps him win the next election. In exchange, maybe the incoming prime minister abandons the digital tax scheme. He also passes regulations that protect tech giants from lawsuits concerning users’ privacy, thereby making it easier for them to harvest information in Uruguay. Was this bribery? Note that not a single dollar or peso exchanged hands.

    Such information-for-information deals are already ubiquitous. Each day billions of us conduct numerous transactions with the tech giants, but one could never guess that from our bank accounts, because hardly any money is moving. We get information from the tech giants, and we pay them with information. As more transactions follow this information-for-information model, the information economy grows at the expense of the money economy, until the very concept of money becomes questionable.

    Money is supposed to be a universal measure of value, rather than a token used only in some settings. But as more things are valued in terms of information, while being “free” in terms of money, at some point it becomes misleading to evaluate the wealth of individuals and corporations in terms of the amount of dollars or pesos they possess. A person or corporation with little money in the bank but a huge data bank of information could be the wealthiest, or most powerful, entity in the country. In theory, it might be possible to quantify the value of their information in monetary terms, but they never actually convert the information into dollars or pesos. Why do they need dollars, if they can get what they want with information?

    This has far-reaching implications for taxation. Taxes aim to redistribute wealth. They take a cut from the wealthiest individuals and corporations, in order to provide for everyone. However, a tax system that knows how to tax only money will soon become outdated as many transactions no longer involve money. In a data-based economy, where value is stored as data rather than as dollars, taxing only money distorts the economic and political picture. Some of the wealthiest entities in the country may pay zero taxes, because their wealth consists of petabits of data rather than billions of dollars.50

    States have thousands of years of experience in taxing money. They don’t know how to tax information—at least, not yet. If we are indeed shifting from an economy dominated by money transactions to an economy dominated by information transactions, how should states react? China’s social credit system is one way a state may adapt to the new conditions. As we’ll explain in chapter 7, the social credit system is at heart a new kind of money—an information-based currency. Should all states copy the Chinese example and mint their own social credits? Are there alternative strategies? What does your favorite political party say about this question?

    RIGHT AND LEFT

    Taxation is just one among many problems created by the computer revolution. The computer network is disrupting almost all power structures. Democracies fear the rise of new digital dictatorships. Dictatorships fear the emergence of agents they don’t know how to control. Everyone should be concerned about the elimination of privacy and the spread of data colonialism. We’ll explain the meaning of each of these threats in the following chapters, but the point here is that the conversations about these dangers are only starting and the technology is moving much faster than the policy.For example, what’s the difference between the AI policies of Republicans and Democrats? What’s a right-wing position on AI, and what’s a left-wing position? Are conservatives against AI because of the threat it poses to traditional human-centered culture, or do they favor it because it will fuel economic growth while simultaneously reducing the need for immigrant workers? Do progressives oppose AI because of the risks of disinformation and increasing bias, or do they embrace it as a means of generating abundance that could finance a comprehensive welfare state? It is hard to tell, because until very recently Republicans and Democrats, and most other political parties around the world, haven’t thought or talked much about these issues.

    Some people—like the engineers and executives of high-tech corporations—are way ahead of politicians and voters and are better informed than most of us about the development of AI, cryptocurrencies, social credits, and the like. Unfortunately, most of them don’t use their knowledge to help regulate the explosive potential of the new technologies. Instead, they use it to make billions of dollars—or to accumulate petabits of information.

    There are exceptions, like Audrey Tang. She was a leading hacker and software engineer who in 2014 joined the Sunflower Student Movement that protested against government policies in Taiwan. The Taiwanese cabinet was so impressed by her skills that Tang was eventually invited to join the government as its minister of digital affairs. In that position, she helped make the government’s work more transparent to citizens. She was also credited with using digital tools to help Taiwan successfully contain the COVID-19 outbreak.51

    Yet Tang’s political commitment and career path are not the norm. For every computer-science graduate who wants to be the next Audrey Tang, there are probably many more who want to be the next Jobs, Zuckerberg, or Musk and build a multibillion corporation rather than become an elected public servant. This leads to a dangerous information asymmetry. The people who lead the information revolution know far more about the underlying technology than the people who are supposed to regulate it. Under such conditions, what’s the meaning of chanting that the customer is always right and that the voters know best?

    The following chapters try to level the playing field a bit and encourage us to take responsibility for the new realities created by the computer revolution. These chapters talk a lot about technology, but the viewpoint is thoroughly human. The key question is, what would it mean for humans to live in the new computer-based network, perhaps as an increasingly powerless minority? How would the new network change our politics, our society, our economy, and our daily lives? How would it feel to be constantly monitored, guided, inspired, or sanctioned by billions of nonhuman entities? How would we have to change in order to adapt, survive, and hopefully even flourish in this startling new world?

    NO DETERMINISM

    The most important thing to remember is that technology, in itself, is seldom deterministic. Belief in technological determinism is dangerous because it excuses people of all responsibility. Yes, since human societies are information networks, inventing new information technologies is bound to change society. When people invent printing presses or machine-learning algorithms, it will inevitably lead to a profound social and political revolution. However, humans still have a lot of control over the pace, shape, and direction of this revolution—which means we also have a lot of responsibility.

    At any given moment, our scientific knowledge and technical skills can lend themselves to developing any number of different technologies, but we have only finite resources at our disposal. We should make responsible choices about where to invest these resources. Should they be used to develop a new medicine for malaria, a new wind turbine, or a new immersive video game? There is nothing inevitable about our choice; it reflects political, economic, and cultural priorities.

    In the 1970s, most computer corporations like IBM focused on developing big and costly machines, which they sold to major corporations and government agencies. It was technically feasible to develop small, cheap personal computers and sell them to private individuals, but IBM had little interest in that. It didn’t fit its business model. On the other side of the Iron Curtain, in the U.S.S.R., the Soviets were also interested in computers, but they were even less inclined than IBM to develop personal computers. In a totalitarian state—where even private ownership of typewriters was suspect—the idea of providing private individuals with control of a powerful information technology was taboo. Computers were therefore given mainly to Soviet factory managers, and even they had to send all their data back to Moscow to be analyzed. As a result, Moscow was flooded with paperwork. By the 1980s, this unwieldy system of computers was producing 800 billion documents per year, all destined for the capital.52

    However, at a time when IBM and the Soviet government declined to develop the personal computer, hobbyists like the members of the California Homebrew Computer Club resolved to do it by themselves. It was a conscious ideological decision, influenced by the 1960s counterculture with its anarchist ideas of power to the people and libertarian distrust of governments and big corporations.53

    Leading members of the Homebrew Computer Club, like Steve Jobs and Steve Wozniak, had big dreams but little money and didn’t have access to the resources of either corporate America or the government apparatus. Jobs and Wozniak sold their personal possessions, like Jobs’s Volkswagen, to finance the creation of the first Apple computer. It was because of such personal decisions, rather than because of the inevitable decree of the goddess of technology, that by 1977 individuals could buy the Apple II personal computer for a price of $1,298—a considerable sum, but within reach of middle-class customers.54

    We can easily imagine an alternative history. Suppose humanity in the 1970s had access to the same scientific knowledge and technical skills, but McCarthyism had killed the 1960s counterculture and established an American totalitarian regime that mirrored the Soviet system. Would we have personal computers today? Of course, personal computers might still have emerged in a different time and place. But in history, time and place are crucial, and no two moments are the same. It matters a great deal that America was colonized by the Spaniards in the 1490s rather than by the Ottomans in the 1520s, or that the atom bomb was developed by the Americans in 1945 rather than by the Germans in 1942. Similarly, there would have been significant political, economic, and cultural consequences if the personal computer emerged not in San Francisco of the 1970s but rather in Osaka of the 1980s or in Shanghai of the first decade of the twenty-first century.

    The same is true of the technologies being currently developed. Engineers working for authoritarian governments and ruthless corporations could develop new tools to empower the central authority, by monitoring citizens and customers twenty-four hours a day. Hackers working for democracies may develop new tools to strengthen society’s self-correcting mechanisms, by exposing government corruption and corporate malpractices. Both technologies could be developed.

    Choice doesn’t end there. Even after a particular tool is developed, it can be put to many uses. We can use a knife to murder a person, to save their life in surgery, or to cut vegetables for their dinner. The knife doesn’t force our hand. It’s a human choice. Similarly, when cheap radio sets were developed, it meant that almost every family in Germany could afford to have one at home. But how would it be used? Cheap radios could mean that when a totalitarian leader gave a speech, he could reach the living room of every German family. Or they could mean that every German family could choose to listen to a different radio program, reflecting and cultivating a diversity of political and artistic views. East Germany went one way; West Germany went the other. Though radio sets in East Germany could technically receive a wide range of transmissions, the East German government did its best to jam Western broadcasts and punished people who secretly tuned in to them.55 The technology was the same, but politics made very different uses of it.

    The same is true of the new technologies of the twenty-first century. To exercise our agency, we first need to understand what the new technologies are and what they can do. That’s an urgent responsibility of every citizen. Naturally, not every citizen needs a PhD in computer science, but to retain control of our future, we do need to understand the political potential of computers. The next few chapters, then, offer an overview of computer politics for twenty-first-century citizens. We will first learn what the political threats and promises are of the new computer network and will then explore the different ways that democracies, dictatorships, and the international system as a whole might adjust to the new computer politics.

    Politics involves a delicate balance between truth and order. As computers become important members of our information network, they are increasingly tasked with discovering truth and maintaining order. For example, the attempt to find the truth about climate change increasingly depends on calculations that only computers can make, and the attempt to reach social consensus about climate change increasingly depends on recommendation algorithms that curate our news feeds, and on creative algorithms that write news stories, fake news, and fiction. At present, we are in a political deadlock about climate change, partly because the computers are at a deadlock. Calculations run on one set of computers warn us of an imminent ecological catastrophe, but another set of computers prompt us to watch videos that cast doubt on those warnings. Which set of computers should we believe? Human politics is now also computer politics.

    To understand the new computer politics, we need a deeper understanding of what’s new about computers. In this chapter we noted that unlike printing presses and other previous tools, computers can make decisions by themselves and can create ideas by themselves. That, however, is just the tip of the iceberg. What’s really new about computers is the way they make decisions and create ideas. If computers made decisions and created ideas in a way similar to humans, then computers would be a kind of “new humans.” That’s a scenario often explored in science fiction: the computer that becomes conscious, develops feelings, falls in love with a human, and turns out to be exactly like us. But the reality is very different, and potentially more alarming.

    CHAPTER 7 Relentless: The Network Is Always On

    Humans are used to being monitored. For millions of years, we have been watched and tracked by other animals, as well as by other humans. Family members, friends, and neighbors have always wanted to know what we do and feel, and we have always cared deeply how they see us and what they know about us. Social hierarchies, political maneuvers, and romantic relationships involved a never-ending effort to decipher what other people feel and think and occasionally hide our own feelings and thoughts.

    When centralized bureaucratic networks appeared and developed, one of the bureaucrats’ most important roles was to monitor entire populations. Officials in the Qin Empire wanted to know whether we were paying our taxes or plotting resistance. The Catholic Church wanted to know whether we paid our tithes and whether we masturbated. The Coca-Cola Company wanted to know how to persuade us to buy its products. Rulers, priests, and merchants wanted to know our secrets in order to control and manipulate us.

    Of course, surveillance has also been essential for providing beneficial services. Empires, churches, and corporations needed information in order to provide people with security, support, and essential goods. In modern states sanitation officials want to know where we get our water from and where we defecate. Health-care officials want to know what illnesses we suffer from and how much we eat. Welfare officials want to know whether we are unemployed or perhaps abused by our spouses. Without this information, they cannot help us.

    In order to get to know us, both benign and oppressive bureaucracies have needed to do two things. First, gather a lot of data about us. Second, analyze all that data and identify patterns. Accordingly, empires, churches, corporations, and health-care systems—from ancient China to the modern United States—have gathered and analyzed data about the behavior of millions of people. However, in all times and places surveillance has been incomplete. In democracies like the modern United States, legal limits have been placed on surveillance to protect privacy and individual rights. In totalitarian regimes like the ancient Qin Empire and the modern U.S.S.R., surveillance faced no such legal barriers but came up against technical boundaries. Not even the most brutal autocrats had the technology necessary to follow everybody all the time. Some level of privacy was therefore the default even in Hitler’s Germany, Stalin’s U.S.S.R., or the copycat Stalinist regime set up in Romania after 1945.

    Gheorghe Iosifescu, one of the first computer scientists in Romania, recalled that when computers were first introduced in the 1970s, the country’s regime was extremely uneasy about this unfamiliar information technology. One day in 1976 when Iosifescu walked into his office in the governmental Centrul de Calcul (Center for Calculus), he saw sitting there an unfamiliar man in a rumpled suit. Iosifescu greeted the stranger, but the man did not respond. Iosifescu introduced himself, but the man remained silent. So Iosifescu sat down at his desk, switched on a large computer, and began working. The stranger drew his chair closer, watching Iosifescu’s every move.

    Throughout the day Iosifescu repeatedly tried to strike up a conversation, asking the stranger what his name was, why he was there, and what he wanted to know. But the man kept his mouth shut and his eyes wide open. When Iosifescu went home in the evening, the man got up and left too, without saying goodbye. Iosifescu knew better than to ask any further questions; the man was obviously an agent of the dreaded Romanian secret police, the Securitate.

    The next morning, when Iosifescu came to work, the agent was already there. He again sat at Iosifescu’s desk all day, silently taking notes in a little notepad. This continued for the next thirteen years, until the collapse of the communist regime in 1989. After sitting at the same desk for all those years, Iosifescu never even learned the agent’s name.1

    Iosifescu knew that other Securitate agents and informers were probably monitoring him outside the office, too. His expertise with a powerful and potentially subversive technology made him a prime target. But in truth, the paranoid regime of Nicolae Ceauşescu regarded all twenty million Romanian citizens as targets. If it was possible, Ceauşescu would have placed every one of them under constant surveillance. He actually made some steps in that direction. Before he came to power, in 1965, the Securitate had just 1 electronic surveillance center in Bucharest and 11 more in provincial cities. By 1978, Bucharest alone was monitored by 10 electronic surveillance centers, 248 centers scrutinized the provinces, and an additional 1,000 portable surveillance units were moved around to eavesdrop on remote villages and holiday resorts.2

    When, in the late 1970s, Securitate agents discovered that some Romanians were writing anonymous letters to Radio Free Europe criticizing the regime, Ceauşescu orchestrated a nationwide effort to collect handwriting samples from all twenty million Romanian citizens. Schools and universities were forced to hand in essays from every student. Employers had to request each employee to submit a handwritten CV and then forward it to the Securitate. “What about retirees, and the unemployed?” asked one of Ceauşescu’s aides. “Invent some kind of new form!” commanded the dictator. “Something they will have to fill in.” Some of the subversive letters, however, were typed, so Ceauşescu also had every state-owned typewriter in the country registered, with samples filed away in the Securitate archive. People who possessed a private typewriter had to inform the Securitate of it, hand in the typewriter’s “fingerprint,” and ask for official authorization to use it.3

    But Ceauşescu’s regime, just like the Stalinist regime it modeled itself on, could not really follow every citizen twenty-four hours a day. Given that even Securitate agents needed to sleep, it would probably have required at least forty million of them to keep the twenty million Romanian citizens under constant surveillance. Ceauşescu had only about forty thousand Securitate agents.4 And even if Ceauşescu could somehow conjure forty million agents, that would only have presented new problems, because the regime needed to monitor its own agents, too. Like Stalin, Ceauşescu distrusted his own agents and officials more than anyone else, especially after his spy chief—Ion Mihai Pacepa—defected to the United States in 1978. Politburo members, high-ranking officials, army generals, and Securitate chiefs were living under even closer surveillance than Iosifescu. As the ranks of the secret police swelled, more agents were needed to spy on all these agents.5

    One solution was to have people spy on one another. In addition to its 40,000 professional agents, the Securitate relied on 400,000 civilian informers.6 People often informed on their neighbors, colleagues, friends, and even closest family members. But no matter how many informants a secret police employed, gathering all that data was not sufficient to create a total surveillance regime. Suppose the Securitate succeeded in recruiting enough agents and informers to watch everyone twenty-four hours a day. At the end of each day, every agent and informer would have had to compile a report on what they observed. Securitate headquarters would have been flooded by 20 million reports every day—or 7.3 billion reports a year. Unless analyzed, it was just an ocean of paper. Yet where could the Securitate find enough analysts to scrutinize and compare 7.3 billion reports annually?

    These difficulties in gathering and analyzing information meant that in the twentieth century not even the most totalitarian state could effectively monitor its entire population. Most of what Romanian and Soviet citizens did and said escaped the notice of the Securitate and the KGB. Even the details that made it into some archive often languished unread. The real power of the Securitate and the KGB was not an ability to constantly watch everyone, but rather their ability to inspire the fear that they might be watching, which made everyone extremely careful about what they said and did.7

    SLEEPLESS AGENTS

    In a world where surveillance is conducted by the organic eyes, ears, and brains of people like the Securitate agent in Iosifescu’s lab, even a prime target like Iosifescu still had some privacy, first and foremost within his own mind. But the work of computer scientists like Iosifescu himself was changing this. Already in 1976, the crude computer sitting on Iosifescu’s desk could crunch numbers much better than the Securitate agent in the nearby chair. By 2024, we are getting close to the point when a ubiquitous computer network can follow the population of entire countries twenty-four hours a day. This network doesn’t need to hire and train millions of human agents to follow us around; it relies on digital agents instead. And the network doesn’t even need to pay for these digital agents. Citizens pay for the agents on our own initiative, and we carry them with us wherever we go.

    The agent monitoring Iosifescu didn’t accompany Iosifescu into the toilet and didn’t sit on the bed while Iosifescu was having sex. Today, our smartphone sometimes does exactly that. Moreover, many of the activities Iosifescu did without any help from his computer—like reading the news, chatting with friends, or buying food—are now done online, so it is even easier for the network to know what we are doing and saying. We ourselves are the informers that provide the network with our raw data. Even those without smartphones are almost always within the orbit of some camera, microphone, or tracking device, and they too constantly interact with the computer network in order to find work, buy a train ticket, get a medical prescription, or simply walk down the street. The computer network has become the nexus of most human activities. In the middle of almost every financial, social, or political transaction, we now find a computer. Consequently, like Adam and Eve in paradise, we cannot hide from the eye in the clouds.

    Just as the computer network doesn’t need millions of human agents to follow us, it also doesn’t need millions of human analysts to make sense of our data. The ocean of paper in Securitate headquarters never analyzed itself. But thanks to the magic of machine learning and AI, computers can themselves analyze most of the information they accumulate. An average human can read about 250 words per minute.8 A Securitate analyst working twelve-hour shifts without taking any days off, could read about 2.6 billion words during a forty-year career. In 2024 language algorithms like ChatGPT and Meta’s Llama can process millions of words per minute and “read” 2.6 billion words in a couple of hours.9 The ability of such algorithms to process images, audio recordings, and video footage is equally superhuman.

    Even more important, the algorithms far surpass humans in their ability to spot patterns in that ocean of data. Identifying patterns requires both the ability to create ideas and the ability to make decisions. For example, how do human analysts identify someone as a “suspected terrorist” that merits closer attention? First, they create a set of general criteria, such as “reading extremist literature,” “befriending known terrorists,” and “having technical knowledge necessary to produce dangerous weapons.” Then they need to decide whether a particular individual meets enough of these criteria to be labeled a suspected terrorist. Suppose someone watched a hundred extremist videos on YouTube last month, is friends with a convicted terrorist, and is currently pursuing a doctorate in epidemiology in a laboratory containing samples of Ebola virus. Should that person be put on the “suspected terrorists” list? And what about someone who watched fifty extremist videos last month and is a biology undergraduate?

    In Romania of the 1970s only humans could make such decisions. By the 2010s humans were increasingly leaving it to algorithms to decide. Around 2014–15 the U.S. National Security Agency deployed an AI tool called Skynet that placed people on a “suspected terrorists” list based on the electronic patterns of their communications, writings, travel, and social media postings. According to one report, that AI tool “engages in mass surveillance of Pakistan’s mobile phone network, and then uses a machine learning algorithm on the cellular network metadata of 55 million people to try and rate each person’s likelihood of being a terrorist.” A former director of both the CIA and the NSA proclaimed that “we kill people based on metadata.”10 Skynet’s reliability has been severely criticized, but by the 2020s such technology has become far more sophisticated and has been deployed by a lot more governments. Going over massive amounts of data, algorithms can discover completely new criteria for defining someone as “suspect” which have previously escaped the notice of human analysts.11In the future, algorithms could even create an entire new model for how people are radicalized, just by identifying patterns in the lives of known terrorists. Of course, computers remain fallible, as we shall explore in depth in chapter 8. They may well classify innocent people as terrorists or may create a false model for radicalization. At an even more fundamental level, it is questionable whether the systems’ definition of things like terrorism are objective. There is a long history of regimes using the label “terrorist” to cover any and all opposition. In the Soviet Union, anyone who opposed the regime was a terrorist. Consequently, when an AI labels someone a “terrorist” it might reflect ideological biases rather than objective facts. The power to make decisions and invent ideas is inseparable from the capacity to make mistakes. Even if no mistakes are committed, the algorithms’ superhuman ability to recognize patterns in an ocean of data can supercharge the power of numerous malign actors, from repressive dictatorships that seek to identify dissidents to fraudsters who seek to identify vulnerable targets.

    Of course, pattern recognition also has enormous positive potential. Algorithms can help identify corrupt government officials, white-collar criminals, and tax-evading corporations. The algorithms can similarly help flesh-and-blood sanitation officials to spot threats to our drinking water;12 help doctors to discern illnesses and burgeoning epidemics;13 and help police officers and social workers to identify abused spouses and children.14 In the following pages, I dedicate relatively little attention to the positive potential of algorithmic bureaucracies, because the entrepreneurs leading the AI revolution already bombard the public with enough rosy predictions about them. My goal here is to balance these utopian visions by focusing on the more sinister potential of algorithmic pattern recognition. Hopefully, we can harness the positive potential of algorithms while regulating their destructive capacities.

    But to do so, we must first appreciate the fundamental difference between the new digital bureaucrats and their flesh-and-blood predecessors. Inorganic bureaucrats can be “on” twenty-four hours a day and can monitor us and interact with us anywhere, anytime. This means that bureaucracy and surveillance are no longer something we encounter only in specific times and places. The health-care system, the police, and manipulative corporations are all becoming ubiquitous and permanent features of life. Instead of organizations with which we interact only in certain situations—for example, when we visit the clinic, the police station, or the mall—they are increasingly accompanying us every moment of the day, watching and analyzing every single thing that we do. As fish live in water, humans live in a digital bureaucracy, constantly inhaling and exhaling data. Each action we make leaves a trace of data, which is gathered and analyzed to identify patterns.

    UNDER-THE-SKIN SURVEILLANCE

    For better or worse, the digital bureaucracy may not only monitor what we do in the world but even observe what is happening inside our bodies. Take, for example, tracking eye movements. By the early 2020s, CCTV cameras, as well as cameras in laptops and smartphones, have begun to routinely collect and analyze data on the movements of our eyes, including tiny changes to our pupils and irises lasting just a few milliseconds. Human agents are barely capable of even noticing such data, but computers can use it to calculate the direction of our gaze, based on the shape of our pupils and irises and on the patterns of light they reflect. Similar methods can determine whether our eyes are fixating on a stable target, pursuing a moving target, or wandering around more haphazardly.

    From certain patterns of eye movements, computers can then distinguish, for example, moments of awareness from moments of distraction, and detail-oriented people from those who pay more attention to context. Computers could infer from our eyes many additional personality traits, like how open we are to new experiences, and estimate our level of expertise in various fields ranging from reading to surgery. Experts possessing well-honed strategies display systematic gaze patterns, whereas the eyes of novices wander aimlessly. Eye patterns also indicate our levels of interest in the objects and situations we encounter, and distinguish between positive, neutral, and negative interest. From this, it is possible to deduce our preferences in fields ranging from politics to sex. Much can also be known about our medical condition and our use of various substances. The consumption of alcohol and drugs—even at nonintoxicating doses—has measurable effects on eye and gaze properties, such as changes in pupil size and an impaired ability to fixate on moving objects. A digital bureaucracy may use all that information for benign purposes—such as by providing early detection for people suffering from drug abuse and mental illnesses. But it could obviously also form the foundations of the most intrusive totalitarian regimes in history.15

    In theory, the dictators of the future could get their computer network to go much deeper than just watching our eyes. If the network wants to know our political views, personality traits, and sexual orientation, it could monitor processes inside our hearts and brains. The necessary biometric technology is already being developed by some governments and companies, like Elon Musk’s Neuralink. Musk’s company has conducted experiments on live rats, sheep, pigs, and monkeys, implanting electrical probes into their brains. Each probe contains up to 3,072 electrodes capable of identifying electrical signals and potentially transmitting signals to the brain. In 2023, Neuralink received approval from U.S. authorities to begin experiments on human beings, and in January 2024 it was reported that a first brain chip was implanted in a human.

    Musk speaks openly about his far-reaching plans for this technology, arguing that it can not only alleviate various medical conditions such as quadriplegia (four-limb paralysis) but also upgrade human abilities and thereby help humankind compete with AI. But it should be clear that at present the Neuralink probes and all other similar biometric devices suffer from a host of technical problems that greatly limit their capabilities. It is difficult to accurately monitor bodily activities—in the brain, heart, or anywhere else—from outside the body, whereas implanting electrodes and other monitoring devices into the body is intrusive, dangerous, costly, and inefficient. Our immune system, for example, attacks implanted electrodes.16

    Even more crucially, nobody yet has the biological knowledge necessary to deduce things like precise political opinions from under-the-skin data like brain activity.17 Scientists are far from understanding the mysteries of the human brain, or even of the mouse brain. Simply mapping every neuron, dendrite, and synapse in a mouse brain—let alone understanding the dynamics between them—is currently beyond humanity’s computational abilities.18 Accordingly, while gathering data from inside people’s brains is becoming more feasible, using such data to decipher our secrets is far from easy.

    One popular conspiracy theory of the early 2020s argues that sinister groups led by billionaires like Elon Musk are already implanting computer chips into our brains in order to monitor and control us. However, this theory focuses our anxieties on the wrong target. We should of course fear the rise of new totalitarian systems, but it is too soon to worry about computer chips implanted in our brains. People should instead worry about the smartphones on which they read these conspiracy theories. Suppose someone wants to know your political views. Your smartphone monitors which news channels you are watching and notes that you watch on average forty minutes of Fox News and forty seconds of CNN a day. Meanwhile, an implanted Neuralink computer chip monitors your heart rate and brain activity throughout the day and notes that your maximum heart rate was 120 beats per minute and that your amygdala is about 5 percent more active than the human average. Which data would be more useful to guess your political affiliation—the data coming from the smartphone or from the implanted chip?19 At present, the smartphone is still a far more valuable surveillance tool than biometric sensors.

    However, as biological knowledge increases—not least thanks to computers analyzing petabits of biometric data—under-the-skin surveillance might eventually come into its own, especially if it is linked to other monitoring tools. At that point, if biometric sensors register what happens to the heart rate and brain activity of millions of people as they watch a particular news item on their smartphones, that can teach the computer network far more than just our general political affiliation. The network could learn precisely what makes each human angry, fearful, or joyful. The network could then both predict and manipulate our feelings, selling us anything it wants—be it a product, a politician, or a war.20

    THE END OF PRIVACY

    In a world where humans monitored humans, privacy was the default. But in a world where computers monitor humans, it may become possible for the first time in history to completely annihilate privacy. The most extreme and well-known cases of intrusive surveillance involve either exceptional times of emergency, like the COVID-19 pandemic, or places seen as exceptional to the normal order of things, such as the Occupied Palestinian Territories, the Xinjiang Uyghur Autonomous Region in China, the region of Kashmir in India, Russian-occupied Crimea, the U.S.-Mexico border, and the Afghanistan-Pakistan borderlands. In these exceptional times and places, new surveillance technologies, combined with draconian laws and heavy police or military presence, have relentlessly monitored and controlled people’s movements, actions, and even feelings.21 What is crucial to realize, though, is that AI-based surveillance tools are being deployed on an enormous scale, and not only in such “states of exception.”22 They are now part and parcel of normal life everywhere. The post-privacy era is taking hold in authoritarian countries ranging from Belarus to Zimbabwe,23 as well as in democratic metropolises like London and New York.

    Whether for good or ill, governments intent on combating crime, suppressing dissent, or countering internal threats (real or imaginary) blanket whole territories with a ubiquitous online and offline surveillance network, equipped with spyware, CCTV cameras, facial recognition and voice recognition software, and vast searchable databases. If a government wishes, its surveillance network can reach everywhere, from markets to places of worship, from schools to private residences. (And while not every government is willing or able to install cameras inside people’s homes, algorithms regularly watch us even in our living rooms, bedrooms and bathrooms via our own computers and smartphones.)

    Governmental surveillance networks also routinely collect biometric data from entire populations, with or without their knowledge. For example, when applying for a passport, more than 140 countries oblige their citizens to provide fingerprints, facial scans, or iris scans.24 When we use our passports to enter a foreign country, that country often demands that we provide it, too, with our fingerprints, facial scans, or iris scans.25 As citizens or tourists walk along the streets of Delhi, Beijing, Seoul, or London, their movements are likely to be recorded. For these cities—and many others around the world—are covered by more than one hundred surveillance cameras on average per square kilometer. Altogether, in 2023 more than one billion CCTV cameras were operative globally, which is about one camera per eight people.26

    Any physical activity a person engages in leaves a data trace. Every purchase made is recorded in some database. Online activities like messaging friends, sharing photos, paying bills, reading news, booking appointments, or ordering taxis can all be recorded as well. The resulting ocean of data can then be analyzed by AI tools to identify unlawful activities, suspicious patterns, missing persons, disease carriers, or political dissidents.

    As with every powerful technology, these tools can be used for either good or bad purposes. Following the storming of the U.S. Capitol on January 6, 2021, the FBI and other U.S. law enforcement agencies used state-of-the-art surveillance tools to track down and arrest the rioters. As reported in a Washington Post investigation, these agencies relied not only on footage from the CCTV cameras in the Capitol, but also on social media posts, license plate readers throughout the country, cell-tower location records, and preexisting databases.

    One Ohio man wrote on Facebook that he had been in Washington that day to “witness history.” A subpoena was issued to Facebook, which provided the FBI with the man’s Facebook posts, as well as his credit card information and phone number. This helped the FBI to match the man’s driver’s license photo to CCTV footage from the Capitol. Another warrant issued to Google yielded the exact geolocation of the man’s smartphone on January 6, enabling agents to map his every movement from his entry point into the Senate chamber all the way to the office of Nancy Pelosi, the speaker of the House of Representatives.

    Relying on license plate footage, the FBI pinpointed the movements of a New York man from the moment he crossed the Henry Hudson Bridge at 6:06:08 on the morning of January 6, on his way to the Capitol, until he crossed the George Washington Bridge at 23:59:22 that night, on his way back home. An image taken by a camera on Interstate 95 showed an oversized “Make America Great Again” hat on the man’s dashboard. The hat was matched to a Facebook selfie in which the man appeared wearing it. He further incriminated himself with several videos he posted to Snapchat from within the Capitol.

    Another rioter sought to protect himself from detection by wearing a face mask on January 6, avoiding live-streaming, and using a cellphone registered in his mother’s name—but it availed him little. The FBI’s algorithms managed to match video footage from January 6, 2021, to a photo from the man’s 2017 passport application. They also matched a distinctive Knights of Columbus jacket he wore on January 6 to the jacket he wore on a different occasion, which was captured in a YouTube clip. The phone registered in his mother’s name was geolocated to inside the Capitol, and a license plate reader recorded his car near the Capitol on the morning of January 6.27

    Facial recognition algorithms and AI-searchable databases are now standard tools of police forces all over the world. They are deployed not only in cases of national emergencies or for reasons of state security, but for everyday policing tasks. In 2009, a criminal gang abducted the three-year-old Gui Hao while he was playing outside his parents’ shop in Sichuan province, China. The boy was then sold to a family in Guangdong province, about 1,500 kilometers away. In 2014, the leader of the child-trafficking gang was arrested, but it proved impossible to locate Gui Hao and other victims. “The appearance of the children would have changed so much,” explained a police investigator, “that even their parents would not have been able to recognize them”.

    In 2019, however, a facial recognition algorithm managed to identify the now thirteen-year-old Gui Hao, and the teenager was reunited with his family. To correctly identify Gui Hao, the AI relied on an old photograph of his, taken when he was a toddler. The AI simulated what Gui Hao must look like as a thirteen-year-old, taking into account the drastic impact of maturation as well as potential changes in hair color and hairstyle and compared the resulting simulation to real-life footage.

    In 2023, even more remarkable rescues were reported. Yuechuan Lei was abducted in 2001 when he was three years old, and Hao Chen went missing in 1998, also at age three. The parents of both children never gave up hope of finding them. For more than twenty years they crisscrossed China in search of them, placed advertisements, and offered monetary rewards for any relevant information. In 2023, facial recognition algorithms helped locate both missing boys, now adult men in their twenties. Such technology currently helps to find lost children not only in China, but also in other countries like India, where tens of thousands of children go missing every year.28

    Meanwhile, in Denmark, the soccer club Brøndby IF began in July 2019 to use facial recognition technology in its home stadium to identify and ban football hooligans. As up to 30,000 fans stream into the stadium to watch a match, they are asked to remove masks, hats, and glasses so a computer can scan their faces and compare them to a list of banned troublemakers. Crucially, the procedure has been vetted and approved in accordance with the EU’s strict GDPR rules. The Danish Data Protection Authority explained that the use of the technology “would allow for more effective enforcement of the ban list compared to manual checks, and that this could reduce the queues at the stadium entrance, lowering the risk of public unrest from impatient football fans standing in queues.”29

    While such usages of technology are laudable in theory, they raise obvious concerns about privacy and governmental overreach. In the wrong hands, the same techniques that can locate rioters, rescue missing children, and ban football hooligans can also be used to persecute peaceful demonstrators or enforce rigid conformism. Ultimately, AI-powered surveillance technology could result in the creation of total surveillance regimes that monitor citizens around the clock and facilitate new kinds of ubiquitous and automated totalitarian repression. A case in point: Iran’s hijab laws.

    After Iran became an Islamic theocracy in 1979, the new regime made it compulsory for women to wear the hijab. But the Iranian morality police found it difficult to enforce this rule. They couldn’t place a police officer on every street corner, and public confrontations with women who went unveiled occasionally aroused resistance and resentment. In 2022, Iran relegated much of the job of enforcing the hijab laws to a countrywide system of facial recognition algorithms that relentlessly monitor both physical spaces and online environments.30 A top Iranian official explained that the system would “identify inappropriate and unusual movements” including “failure to observe hijab laws.” The head of Iran’s parliamentary legal and judicial committee, Mousa Ghazanfarabadi, said in another interview that “the use of face recording cameras can systematically implement this task and reduce the presence of the police, as a result of which there will be no more clashes between the police and citizens.”31

    Shortly afterward, on September 16, 2022, the 22-year-old Mahsa Amini died in the custody of Iran’s morality police, after being arrested for not wearing her hijab properly.32 A wave of protests erupted, known as the “Woman, Life, Freedom” movement. Hundreds of thousands of women and girls removed their headscarves, and some publicly burned their hijabs, and danced around the bonfires. To clamp down on the protests, Iranian authorities once again turned to their AI surveillance system, which relies on facial recognition software, geolocation, analysis of web traffic, and preexisting databases. More than 19,000 people were arrested throughout Iran, and more than 500 were killed.33

    On April 8, 2023, Iran’s chief of police announced that beginning on April 15, 2023, an intense new campaign would ramp up the use of facial recognition technology. In particular, algorithms would henceforth identify women who choose not to wear a headscarf while travelling in a vehicle, and automatically issue them an SMS warning. If a woman was caught repeating the offense, she would be ordered to immobilize her car for a predetermined period, and if she failed to comply, the car would be confiscated.34

    Two months later, on June 14, 2023, the spokesperson of Iran’s police boasted that the automated surveillance system sent almost one million SMS warning messages to women who had been captured unveiled in their private cars. The system was apparently able to automatically determine that it was seeing an unveiled woman rather than a man, identify the woman, and retrieve her cellphone number. The system further “issued 133,174 SMS messages requiring the immobilization of vehicles for two weeks, confiscated 2,000 cars, and referred more than 4,000 ‘repeat offenders’ to the judiciary.”35

    A 52-year-old woman named Maryam shared with Amnesty International her experience with the surveillance system. “The first time I received a warning for not wearing a headscarf while driving, I was passing through an intersection when a camera captured a photo and I immediately received a warning text message. The second time, I had done some shopping, and I was bringing the bags into the car, my scarf fell off, and I received a message noting that due to violating compulsory veiling laws, my car had been subjected to ‘systematic impoundment’ for a period of fifteen days. I did not know what this meant. I asked around and found out through relatives that this meant I had to immobilize my car for fifteen days.”36 Maryam’s testimony indicates that the AI sends its threatening messages within seconds, with no time for any human to review and authorize the procedure.

    Penalties went far beyond the immobilization or confiscation of vehicles. The Amnesty report from July 26, 2023, revealed that as a result of the mass surveillance effort “countless women have been suspended or expelled from universities, barred from sitting final exams, and denied access to banking services and public transport.”37 Businesses that didn’t enforce the hijab law among their employees or customers also suffered. In one typical case, a woman employee at the Land of Happiness amusement park east of Tehran was photographed without a hijab, and the image circulated on social media. In punishment, the Land of Happiness was closed down by Iranian authorities.38 Altogether, reported Amnesty, the authorities “shut down hundreds of tourist attractions, hotels, restaurants, pharmacies and shopping centres for not enforcing compulsory veiling laws”.39

    In September 2023, on the anniversary of Mahsa Amini’s death, Iran’s parliament passed a new and stricter hijab bill. According to the new law, women who fail to wear the hijab can be punished by heavy fines and up to ten years in prison. They face additional penalties including confiscation of cars and communication devices, driving bans, deductions in salary and employment benefits, dismissal from work, and prohibition from access banking services. Business owners who don’t enforce the hijab law among their employees or customers face a fine of up to three months of their profits, and they may be banned from leaving the country or participating in public or online activities for up to two years. The new bill targets not only women, but also men who wear “revealing clothing that shows parts of the body lower than the chest or above the ankles.” Finally, the law mandates that Iranian police must “create and strengthen AI systems to identify perpetrators of illegal behavior using tools such as fixed and mobile cameras.”40 In coming years, many people might be living under total surveillance regimes that would make Ceauşescu’s Romania look like a libertarian utopia.

    VARIETIES OF SURVEILLANCE

    When talking about surveillance, we usually think of state-run apparatuses, but to understand surveillance in the twenty-first century, we should remember that monitoring can take many other forms. Jealous partners, for example, have always wanted to know where their spouses were at every moment and demanded explanations for any little deviation from routines. Today, armed with a smartphone and some cheap software, they can easily establish marital dictatorships. They can monitor every conversation and every movement, record phone logs, track social media posts and web page searches, and even activate the cameras and microphones of a spouse’s phone to serve as a spying device. The U.S.-based National Network to End Domestic Violence found that more than half of domestic abusers used such “stalkware” technology. Even in New York a spouse may find themselves monitored and restricted, as if they lived in a totalitarian state.41

    A growing percentage of employees—from office workers to truck drivers—are also now being surveilled by their employers. Bosses can pinpoint where employees are at any moment, how much time they spend in the toilet, whether they read personal emails at work, and how fast they complete each task.42 Corporations are similarly monitoring their customers, wanting to know their likes and dislikes, to predict future behavior, and to evaluate risks and opportunities. For example, vehicles monitor their drivers’ behavior and share the data with the algorithms of the insurance companies, which raise the premiums they charge “bad drivers” and lower the premiums for “good drivers.”43 The American scholar Shoshana Zuboff has termed this ever-expanding commercial monitoring system “surveillance capitalism.”44

    In addition to all these varieties of top-down surveillance, there are peer-to-peer systems in which individuals constantly monitor one another. For example, the Tripadvisor corporation maintains a worldwide surveillance system that monitors hotels, vacation rentals, restaurants, and tourists. In 2019, it was used by 463 million travelers who browsed 859 million reviews and 8.6 billion lodgings, restaurants, and tourist attractions. It is the users themselves—rather than some sophisticated AI algorithm—who determine whether a restaurant is worth visiting. People who ate in the restaurant can score it on a 1 to 5 scale, and also add photos and written reviews. The Tripadvisor algorithm merely aggregates the data, calculates the restaurant’s average score, ranks the restaurant compared with others of its kind, and makes the results available for everybody to see.

    The algorithm simultaneously ranks the guests, too. For posting reviews or travel articles, users receive 100 points; for uploading photos or videos, 30 points; for posting in a forum, 20 points; for rating establishments, 5 points; and for casting votes for others’ reviews, 1 point. Users are then ranked from Level 1 (300 points) to Level 6 (10,000 points) and receive perks accordingly. Users who violate the system’s rules—for example, by submitting racist comments or trying to blackmail a restaurant by writing an unjustified bad review—may be penalized or kicked out of the system altogether. This is peer-to-peer surveillance. Everybody is constantly grading everybody else. Tripadvisor doesn’t need to invest in cameras and spyware or develop hyper-sophisticated biometric algorithms. Almost all the data is submitted and almost all the work is done by millions of human users. The job of the Tripadvisor algorithm is only to aggregate human-generated scores and publish them.45

    Tripadvisor and similar peer-to-peer surveillance systems provide valuable information for millions of people every day, making it easier to plan vacations and find good hotels and restaurants. But in doing so, they have also shifted the border between private and public spaces. Traditionally, the relationship between the customer and a waiter, say, was a relatively private affair. Entering a bistro meant entering a semiprivate space and establishing a semiprivate relationship with the waiter. Unless some crime was committed, what happened between guest and waiter was their business alone. If the waiter was rude or made a racist remark, you could make a scene and perhaps tell your friends not to go there, but few other people would hear about it.

    Peer-to-peer surveillance networks have obliterated that sense of privacy. If the staff fails to please a customer, the restaurant will get a bad review, which could affect the decision of thousands of potential customers in coming years. For better or worse, the balance of power tilts in favor of the customers, while the staff find themselves more exposed than before to the public gaze. As the author and journalist Linda Kinstler put it, “Before Tripadvisor, the customer was only nominally king. After, he became a veritable tyrant, with the power to make or break lives.”46 The same loss of privacy is felt today by millions of taxi drivers, barbers, beauticians, and other service providers. In the past, stepping into a taxi or barbershop meant stepping into someone’s private space. Now, when customers come into your taxi or barbershop, they bring cameras, microphones, a surveillance network, and thousands of potential viewers with them.47 This is the foundation of a nongovernmental peer-to-peer surveillance network.

    THE SOCIAL CREDIT SYSTEM

    Peer-to-peer surveillance systems typically operate by aggregating many points to determine an overall score. Another type of surveillance network takes this “score logic” to its ultimate conclusion. This is the social credit system, which seeks to give people points for everything and produce an overall personal score that will influence everything. The last time humans came up with such an ambitious points system was five thousand years ago in Mesopotamia, when money was invented. One way to think of the social credit system is as a new kind of money.

    Money is points that people accumulate by selling certain products and services, and then use to buy other products and services. Some countries call their “points” dollars, whereas other countries call them euros, yen, or renminbi. The points can take the form of coins, banknotes, or bits in a digital bank account. The points themselves are, of course, intrinsically worthless. You cannot eat coins or wear banknotes. Their value lies in the fact that they serve as accounting tokens that society uses to keep track of our individual scores.

    Money revolutionized economic relations, social interactions, and human psychology. But like surveillance, money has had its limitations and could not reach everywhere. Even in the most capitalist societies, there have always been places that money didn’t penetrate, and there have always been many things that lacked a monetary value. How much is a smile worth? How much money does a person earn for visiting their grandparents?48

    For scoring those things that money can’t buy, there was an alternative nonmonetary system, which has been given different names: honor, status, reputation. What social credit systems seek is a standardized valuation of the reputation market. Social credit is a new points system that ascribes precise values even to smiles and family visits. To appreciate how revolutionary and far-reaching this is, let’s examine in brief how the reputation market has hitherto differed from the money market. This will help us understand what might happen to social relations if the principles of the money market are suddenly extended to the reputation market.

    One major difference between money and reputation is that money has tended to be a mathematical construct based on precise calculations, whereas the sphere of reputation has been resistant to precise numerical evaluation. For example, medieval aristocrats graded themselves in hierarchical ranks such as dukes, counts, and viscounts, but nobody was counting reputation points. Customers in a medieval market usually knew how many coins they had in their purses and the price of every product in the stalls. In the money market, no coin goes uncounted. In contrast, knights in a medieval reputational market didn’t know the exact amount of honor that different actions might accrue, nor could they be sure of their overall score. Would fighting bravely in battle bring a knight 10 honor points, or 100? And what if nobody saw and recorded their bravery? Indeed, even assuming it was noticed, different people might assign it different values. This lack of precision wasn’t a bug in the system but a crucial feature. “Calculating” was a synonym for cunning and scheming. Acting honorably was supposed to reflect an inner virtue, rather than a pursuit of external rewards.49

    This difference between the scrupulous money market and the ill-defined reputation market still prevails. The owner of a bistro always notices and complains if you don’t pay for your meal in full; every item on the menu has a precise price. But how would the owner even know if society failed to register some good deed they performed? Whom could they complain to if they weren’t properly rewarded for helping an elderly customer or for being extra patient with a rude customer? In some cases, they might now try complaining to Tripadvisor, which collapses the boundary between the money market and the reputation market, turning the fuzzy reputation of restaurants and hotels into a mathematical system of precise points. The idea of social credit is to expand this surveillance method from restaurants and hotels to everything. In the most extreme type of social credit systems, every person gets an overall reputation score that takes into account whatever they do and determines everything they can do.

    For example, you might earn 10 points for picking up trash from the street, get another 20 points for helping an old lady cross the road, and lose 15 points for playing the drums and disturbing the neighbors. If you get a high enough score, it might give you priority when buying train tickets or a leg up when applying to university. If you get a low score, potential employers may refuse to give you a job, and potential dates may refuse your advances. Insurance companies may demand higher premiums, and judges may inflict harsher sentences.

    Some people might see social credit systems as a way to reward pro-social behavior, punish egotistical acts, and create kinder and more harmonious societies. The Chinese government, for example, explains that its social credit systems could help fight corruption, scams, tax evasion, false advertising, and counterfeiting, and thereby establish more trust between individuals, between consumers and corporations, and between citizens and government institutions.50 Others may find systems that allocate precise values to every social action demeaning and inhuman. Even worse, a comprehensive social credit system will annihilate privacy and effectively turn life into a never-ending job interview. Anything you do, anytime, anywhere, might affect your chances of getting a job, a bank loan, a husband, or a prison sentence. You got drunk at a college party and did something legal but shameful? You participated in a political demonstration? You’re friends with someone who has a low credit score? This will be part of your job interview—or criminal sentencing—both in the short term and even decades later. The social credit system might thereby become a totalitarian control system.

    Of course, the reputation market always controlled people and made them conform to the prevailing social norms. In most societies people have always feared losing face even more than they have feared losing money. Many more people commit suicide due to shame and guilt than due to economic distress. Even when people kill themselves after being fired from their job or after their business goes bankrupt, they are usually pushed over the edge by the social humiliation it involves rather than by the economic hardship per se.51

    But the uncertainty and the subjectivity of the reputation market have previously limited its potential for totalitarian control. Since nobody knew the precise value of each social interaction, and since nobody could possibly keep tabs on all interactions, there was significant room for maneuver. When you went to a college party, you might have behaved in a way that earned the respect of your friends, without worrying what future employers might think. When you went to a job interview, you knew none of your friends would be there. And when you were watching pornography at home, you assumed that neither your bosses nor your friends knew what you were up to. Life has been divided into separate reputational spheres, with separate status competitions, and there were also many off-grid moments when you didn’t have to engage in any status competitions at all. Precisely because status competition is so crucial, it is also extremely stressful. Therefore, not only humans but even other social animals like apes have always welcomed some respite from it.52

    Unfortunately, social credit algorithms combined with ubiquitous surveillance technology now threaten to merge all status competitions into a single never-ending race. Even in their own homes or while trying to enjoy a relaxed vacation, people would have to be extremely careful about every deed and word, as if they were performing onstage in front of millions. This could create an incredibly stressful lifestyle, destructive to people’s well-being as well as to the functioning of society. If digital bureaucrats use a precise points system to keep tabs on everybody all the time, the emerging reputation market could annihilate privacy and control people far more tightly than the money market ever did.

    ALWAYS ON

    Humans are organic beings who live by cyclical biological time. Sometimes we are awake; sometimes we are asleep. After intense activity, we need rest. We grow and decay. Networks of humans are similarly subject to biological cycles. They are sometimes on and sometimes off. Job interviews don’t last forever. Police agents don’t work twenty-four hours a day. Bureaucrats take holidays. Even the money market respects these biological cycles. The New York Stock Exchange is open on Mondays to Fridays, from 9:30 in the morning to 4:00 in the afternoon, and is closed on holidays like Independence Day and New Year’s Day. If a war erupts at 4:01 p.m. on a Friday, the market won’t react to it until Monday morning.

    In contrast, a network of computers can always be on. Computers are consequently pushing humans toward a new kind of existence in which we are always connected and always monitored. In some contexts, like health care, this could be a boon. In other contexts, like for citizens of totalitarian states, this could be a disaster. Even if the network is potentially benign, the very fact that it is always “on” might be damaging to organic entities like humans, because it will take away our opportunities to disconnect and relax. If an organism never has a chance to rest, it eventually collapses and dies. But how will we get a relentless network to slow down and allow us some breaks?

    We need to prevent the computer network from taking complete control of society not just in order to give us time off. Breaks are even more crucial to give us a chance to rectify the network. If the network continues to evolve at an accelerating pace, errors will accumulate much faster than we can identify and correct them. For while the network is relentless and ubiquitous, it is also fallible. Yes, computers can gather unprecedented amounts of data on us, watching what we do twenty-four hours a day. And yes, they can identify patterns in the ocean of data with superhuman efficiency. But that does not mean that the computer network will always understand the world accurately. Information isn’t truth. A total surveillance system may form a very distorted understanding of the world and of human beings. Instead of discovering the truth about the world and about us, the network might use its immense power to create a new kind of world order and impose it on us.

    CHAPTER 8 Fallible: The Network Is Often Wrong

    In The Gulag Archipelago (1973), Aleksandr Solzhenitsyn chronicles the history of the Soviet labor camps and of the information network that created and sustained them. He was writing partly from bitter personal experience. When Solzhenitsyn served as a captain in the Red Army during World War II, he maintained a private correspondence with a school friend in which he occasionally criticized Stalin. To be on the safe side, he did not mention the dictator by name and spoke only about “the man with the mustache.” It availed him little. His letters were intercepted and read by the secret police, and in February 1945, while serving on the front line in Germany, he was arrested. He spent the next eight years in labor camps.1 Many of Solzhenitsyn’s hard-won insights and stories are still relevant to understanding the development of information networks in the twenty-first century.

    One story recounts events at a district party conference in Moscow Province in the late 1930s, at the height of the Stalinist Great Terror. A call was made to pay tribute to Stalin, and the audience—who of course knew that they were being carefully watched—burst into applause. After five minutes of applause, “palms were getting sore and raised arms were already aching. And the older people were panting from exhaustion.… However, who would dare be the first to stop?” Solzhenitsyn explains that “NKVD men were standing in the hall applauding and watching to see who quit first!” It went on and on, for six minutes, then eight, then ten. “They couldn’t stop now till they collapsed with heart attacks! … With make-believe enthusiasm on their faces, looking at each other with faint hope, the district leaders were just going to go on and on applauding till they fell where they stood.”

    Finally, after eleven minutes, the director of a paper factory took his life in his hands, stopped clapping, and sat down. Everyone else immediately stopped clapping and also sat down. That same night, the secret police arrested him and sent him to the gulag for ten years. “His interrogator reminded him: Don’t ever be the first to stop applauding!”2

    This story reveals a crucial and disturbing fact about information networks, and in particular about surveillance systems. As discussed in previous chapters, contrary to the naive view, information is often used to create order rather than discover truth. On the face of it, Stalin’s agents in the Moscow conference used the “clapping test” as a way to uncover the truth about the audience. It was a loyalty test, which assumed that the longer you clapped, the more you loved Stalin. In many contexts, this assumption is not unreasonable. But in the context of Moscow in the late 1930s, the nature of the applause changed. Since participants in the conference knew they were being watched, and since they knew the consequences of any hint of disloyalty, they clapped out of terror rather than love. The paper factory director might have been the first to stop not because he was the least loyal but perhaps because he was the most honest, or even simply because his hands hurt the most.

    While the clapping test didn’t discover the truth about people, it was efficient in imposing order and forcing people to behave in a certain way. Over time, such methods cultivated servility, hypocrisy, and cynicism. This is what the Soviet information network did to hundreds of millions of people over decades. In quantum mechanics the act of observing subatomic particles changes their behavior; it is the same with the act of observing humans. The more powerful our tools of observation, the greater the potential impact.

    The Soviet regime constructed one of the most formidable information networks in history. It gathered and processed enormous amounts of data on its citizens. It also claimed that the infallible theories of Marx, Engels, Lenin, and Stalin granted it a deep understanding of humanity. In fact, the Soviet information network ignored many important aspects of human nature, and it was in complete denial regarding the terrible suffering its policies inflicted on its own citizens. Instead of producing wisdom, it produced order, and instead of revealing the universal truth about humans, it actually created a new type of human—Homo sovieticus.

    As defined by the dissident Soviet philosopher and satirist Aleksandr Zinovyev, Homo sovieticus were servile and cynical humans, lacking all initiative or independent thinking, passively obeying even the most ludicrous orders, and indifferent to the results of their actions.3 The Soviet information network created Homo sovieticus through surveillance, punishments, and rewards. For example, by sending the director of the paper factory to the gulag, the network signaled to the other participants that conformity paid off, whereas being the first to do anything controversial was a bad idea. Though the network failed to discover the truth about humans, it was so good at creating order that it conquered much of the world.

    THE DICTATORSHIP OF THE LIKE

    An analogous dynamic may afflict the computer networks of the twenty-first century, which might create new types of humans and new dystopias. A paradigmatic example is the role played by social media algorithms in radicalizing people. Of course, the methods employed by the algorithms have been utterly different from those of the NKVD and involved no direct coercion or violence. But just as the Soviet secret police created the slavish Homo sovieticus through surveillance, rewards, and punishments, so also the Facebook and YouTube algorithms have created internet trolls by rewarding certain base instincts while punishing the better angels of our nature.

    As explained briefly in chapter 6, the process of radicalization started when corporations tasked their algorithms with increasing user engagement, not only in Myanmar, but throughout the world. For example, in 2012 users were watching about 100 million hours of videos every day on YouTube. That was not enough for company executives, who set their algorithms an ambitious goal: 1 billion hours a day by 2016.4 Through trial-and-error experiments on millions of people, the YouTube algorithms discovered the same pattern that Facebook algorithms also learned: outrage drives engagement up, while moderation tends not to. Accordingly, the YouTube algorithms began recommending outrageous conspiracy theories to millions of viewers while ignoring more moderate content. By 2016, users were indeed watching 1 billion hours every day on YouTube.5

    YouTubers who were particularly intent on gaining attention noticed that when they posted an outrageous video full of lies, the algorithm rewarded them by recommending the video to numerous users and increasing the YouTubers’ popularity and income. In contrast, when they dialed down the outrage and stuck to the truth, the algorithm tended to ignore them. Within a few months of such reinforcement learning, the algorithm turned many YouTubers into trolls.6

    The social and political consequences were far-reaching. For example, as the journalist Max Fisher documented in his 2022 book, The Chaos Machine, YouTube algorithms became an important engine for the rise of the Brazilian far right and for turning Jair Bolsonaro from a fringe figure into Brazil’s president.7 While there were other factors contributing to that political upheaval, it is notable that many of Bolsonaro’s chief supporters and aides had originally been YouTubers who rose to fame and power by algorithmic grace.

    A typical example is Carlos Jordy, who in 2017 was a city councilor in the small town of Niterói. The ambitious Jordy gained national attention by creating inflammatory YouTube videos that garnered millions of views. His videos warned Brazilians, for example, against conspiracies by schoolteachers to brainwash children and persecute conservative pupils. In 2018, Jordy won a seat in the Brazilian Chamber of Deputies (the lower house of the Brazilian Congress) as one of Bolsonaro’s most dedicated supporters. In an interview with Fisher, Jordy frankly said, “If social media didn’t exist, I wouldn’t be here [and] Jair Bolsonaro wouldn’t be president.” The latter claim may well be a self-serving exaggeration, but there is no denying that social media played an important part in Bolsonaro’s rise.

    Another YouTuber who won a seat in Brazil’s Chamber of Deputies in 2018 was Kim Kataguiri, one of the leaders of the Movimento Brasil Livre (MBL, or Free Brazil Movement). Kataguiri initially used Facebook as his main platform, but his posts were too extreme even for Facebook, which banned some of them for disinformation. So Kataguiri switched over to the more permissive YouTube. In an interview in the MBL headquarters in São Paulo, Kataguiri’s aides and other activists explained to Fisher, “We have something here that we call the dictatorship of the like.” They explained that YouTubers tend to become steadily more extreme, posting untruthful and reckless content “just because something is going to give you views, going to give engagement.… Once you open that door there’s no going back, because you always have to go further.… Flat Earthers, anti-vaxxers, conspiracy theories in politics. It’s the same phenomenon. You see it everywhere.”8

    Of course, the YouTube algorithms were not themselves responsible for inventing lies and conspiracy theories or for creating extremist content. At least in 2017–18, those things were done by humans. The algorithms were responsible, however, for incentivizing humans to behave in such ways and for pushing the resulting content in order to maximize user engagement. Fisher documented numerous far-right activists who first became interested in extremist politics after watching videos that the YouTube algorithm auto-played for them. One far-right activist in Niterói told Fisher that he was never interested in politics of any kind, until one day the YouTube algorithm auto-played for him a video on politics by Kataguiri. “Before that,” he explained, “I didn’t have an ideological, political background.” He credited the algorithm with providing “my political education.” Talking about how other people joined the movement, he said, “It was like that with everyone.… Most of the people here came from YouTube and social media.”9

    BLAME THE HUMANS

    We have reached a turning point in history in which major historical processes are partly caused by the decisions of nonhuman intelligence. It is this that makes the fallibility of the computer network so dangerous. Computer errors become potentially catastrophic only when computers become historical agents. We have already made this argument in chapter 6, when we briefly examined Facebook’s role in instigating the anti-Rohingya ethnic-cleansing campaign. As noted in that context, however, many people—including some of the managers and engineers of Facebook, YouTube, and the other tech giants—object to this argument. Since it is one of the central points of the entire book, it is best to delve deeper into the matter and examine more carefully the objections to it.

    The people who manage Facebook, YouTube, TikTok, and other platforms routinely try to excuse themselves by shifting the blame from their algorithms to “human nature.” They argue that it is human nature that produces all the hate and lies on the platforms. The tech giants then claim that due to their commitment to free speech values, they hesitate to censor the expression of genuine human emotions. For example, in 2019 the CEO of YouTube, Susan Wojcicki, explained, “The way that we think about it is: ‘Is this content violating one of our policies? Has it violated anything in terms of hate, harassment?’ If it has, we remove that content. We keep tightening and tightening the policies. We also get criticism, just to be clear, [about] where do you draw the lines of free speech and, if you draw it too tightly, are you removing voices of society that should be heard? We’re trying to strike a balance of enabling a broad set of voices, but also making sure that those voices play by a set of rules that are healthy conversations for society.”10

    A Facebook spokesperson similarly said in October 2021, “Like every platform, we are constantly making difficult decisions between free expressions and harmful speech, security and other issues.… But drawing these societal lines is always better left to elected leaders.”11 In this way, the tech giants constantly shift the discussion to their supposed role as moderators of human-produced content and ignore the active role their algorithms play in cultivating certain human emotions and discouraging others. Are they really blind to it?

    Surely not. Back in 2016, an internal Facebook report discovered that “64 percent of all extremist group joins are due to our recommendation tools.… Our recommendation systems grow the problem.”12 A secret internal Facebook memo from August 2019, leaked by the whistleblower Frances Haugen, stated, “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and [its] family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”13

    Another leaked document from December 2019 noted, “Unlike communication with close friends and family, virality is something new we have introduced to many ecosystems … and it occurs because we intentionally encourage it for business reasons.” The document pointed out that “ranking content about higher stakes topics like health or politics based on engagement leads to perverse incentives and integrity issues.” Perhaps most damningly, it revealed, “Our ranking systems have specific separate predictions for not just what you would engage with, but what we think you may pass along so that others may engage with. Unfortunately, research has shown how outrage and misinformation are more likely to be viral.” This leaked document made one crucial recommendation: since Facebook cannot remove everything harmful from a platform used by many millions, it should at least “stop magnifying harmful content by giving it unnatural distribution.”14

    Like the Soviet leaders in Moscow, the tech companies were not uncovering some truth about humans; they were imposing on us a perverse new order. Humans are very complex beings, and benign social orders seek ways to cultivate our virtues while curtailing our negative tendencies. But social media algorithms see us, simply, as an attention mine. The algorithms reduced the multifaceted range of human emotions—hate, love, outrage, joy, confusion—into a single catchall category: engagement. In Myanmar in 2016, in Brazil in 2018, and in numerous other countries, the algorithms scored videos, posts, and all other content solely according to how many minutes people engaged with the content and how many times they shared it with others. An hour of lies or hatred was ranked higher than ten minutes of truth or compassion—or an hour of sleep. The fact that lies and hate tend to be psychologically and socially destructive, whereas truth, compassion, and sleep are essential for human welfare, was completely lost on the algorithms. Based on this very narrow understanding of humanity, the algorithms helped to create a new social system that encouraged our basest instincts while discouraging us from realizing the full spectrum of the human potential.

    As the harmful effects were becoming manifest, the tech giants were repeatedly warned about what was happening, but they failed to step in because of their faith in the naive view of information. As the platforms were overrun by falsehoods and outrage, executives hoped that if more people were enabled to express themselves more freely, truth would eventually prevail. This, however, did not happen. As we have seen again and again throughout history, in a completely free information fight, truth tends to lose. To tilt the balance in favor of truth, networks must develop and maintain strong self-correcting mechanisms that reward truth telling. These self-correcting mechanisms are costly, but if you want to get the truth, you must invest in them.

    Silicon Valley thought it was exempt from this historical rule. Social media platforms have been singularly lacking in self-correcting mechanisms. In 2014, Facebook employed just a single Burmese-speaking content moderator to monitor activities in the whole of Myanmar.15 When observers in Myanmar began warning Facebook that it needed to invest more in moderating content, Facebook ignored them. For example, Pwint Htun, a Burmese American engineer and telecom executive who grew up in rural Myanmar, wrote to Facebook executives repeatedly about the danger. In an email from July 5, 2014—two years before the ethnic-cleansing campaign began—she issued a prophetic warning: “Tragically, FB in Burma is used like radio in Rwanda during the dark days of genocide.” Facebook took no action.

    Even after the attacks on the Rohingya intensified and Facebook faced a storm of criticism, it still refused to hire people with expert local knowledge to curate content. Thus, when informed that hate-mongers in Myanmar were using the Burmese word kalar as a racist slur for the Rohingya, Facebook reacted in April 2017 by banning from the platform any posts that used the word. This revealed Facebook’s utter lack of knowledge about local conditions and the Burmese language. In Burmese, kalar is a racist slur only in specific contexts. In other contexts, it is an entirely innocent term. The Burmese word for chair is kalar htaing, and the word for chickpea is kalar pae. As Pwint Htun wrote to Facebook in June 2017, banning the term kalar from the platform is like banning the letters “hell” from “hello.”16 Facebook continued to ignore the need for local expertise. By April 2018, the number of Burmese speakers Facebook employed to moderate content for its eighteen million users in Myanmar was a grand total of five.17

    Instead of investing in self-correcting mechanisms that would reward truth telling, the social media giants actually developed unprecedented error-enhancing mechanisms that rewarded lies and fictions. One such error-enhancing mechanism was the Instant Articles program that Facebook rolled out in Myanmar in 2016. Wishing to drive up engagement, Facebook paid news channels according to the amount of user engagement they generated, measured in clicks and views. No importance whatsoever was given to the truthfulness of the “news.” A 2021 study found that in 2015, before the program was launched, six of the ten top Facebook websites in Myanmar belonged to “legitimate media.” By 2017, under the impact of Instant Articles, “legitimate media” was down to just two websites out of the top ten. By 2018, all top ten websites were “fake news and clickbait websites.”

    The study concluded that because of the launch of Instant Articles “clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of US dollars a month in ad revenue, or ten times the average monthly salary—paid to them directly by Facebook.” Since Facebook was by far the most important source of online news in Myanmar, this had enormous impact on the overall media landscape of the country. “In a country where Facebook is synonymous with the Internet, the low-grade content overwhelmed other information sources.”18 Facebook and other social media platforms didn’t consciously set out to flood the world with fake news and outrage. But by telling their algorithms to maximize user engagement, this is exactly what they perpetrated.

    Reflecting on the Myanmar tragedy, Pwint Htun wrote to me in July 2023, “I naively used to believe that social media could elevate human consciousness and spread the perspective of common humanity through interconnected pre-frontal cortexes in billions of human beings. What I realize is that the social media companies are not incentivized to interconnect pre-frontal cortexes. Social media companies are incentivized to create interconnected limbic systems—which is much more dangerous for humanity.”

    THE ALIGNMENT PROBLEM

    I don’t want to imply that the spread of fake news and conspiracy theories is the main problem with all past, present, and future computer networks. YouTube, Facebook, and other social media platforms claim that since 2018 they have been tweaking their algorithms to make them more socially responsible. Whether this is true or not is hard to say, especially because there is no universally accepted definition of “social responsibility.”19 But the specific problem of polluting the information sphere in pursuit of user engagement can certainly be solved. When the tech giants set their hearts on designing better algorithms, they can usually do it. Around 2005, the profusion of spam threatened to make the use of email impossible. Powerful algorithms were developed to address the problem. By 2015, Google claimed its Gmail algorithm had a 99.9 percent success in blocking genuine spam, while only 1 percent of legitimate emails were erroneously labeled as such.20

    We also shouldn’t discount the huge social benefits that YouTube, Facebook, and other social media platforms have brought. To be clear, most YouTube videos and Facebook posts have not been fake news and genocidal incitements. Social media has been more than helpful in connecting people, giving voice to previously disenfranchised groups, and organizing valuable new movements and communities.21 It has also encouraged an unprecedented wave of human creativity. In the days when television was the dominant medium, viewers were often denigrated as couch potatoes: passive consumers of content that a few gifted artists produced. Facebook, YouTube, and other social media platforms inspired the couch potatoes to get up and start creating. Most of the content on social media—at least until the rise of powerful generative AI—has been produced by the users themselves, and their cats and dogs, rather than by a limited professional class.

    I, too, routinely use YouTube and Facebook to connect with people, and I am grateful to social media for connecting me with my husband, whom I met on one of the first LGBTQ social media platforms back in 2002. Social media has done wonders for dispersed minorities like LGBTQ people. Few gay boys are born to a gay family in a gay neighborhood, and in the days before the internet simply finding one another posed a big challenge, unless you moved to one of the handful of tolerant metropolises that had a gay subculture. Growing up in a small homophobic town in Israel in the 1980s and early 1990s, I didn’t know a single openly gay man. Social media in the late 1990s and early 2000s provided an unprecedented and almost magical way for members of the dispersed LGBTQ community to find one another and connect.

    And yet I have devoted so much attention to the social media “user engagement” debacle because it exemplifies a much bigger problem afflicting computers—the alignment problem. When computers are given a specific goal, such as to increase YouTube traffic to one billion hours a day, they use all their power and ingenuity to achieve this goal. Since they operate very differently than humans, they are likely to use methods their human overlords didn’t anticipate. This can result in dangerous unforeseen consequences, which are not aligned with the original human goals. Even if recommendation algorithms stop encouraging hate, other instances of the alignment problem might result in larger catastrophes than the anti-Rohingya campaign. The more powerful and independent computers become, the bigger the danger.

    Of course, the alignment problem is neither new nor unique to algorithms. It bedeviled humanity for thousands of years before the invention of computers. It has been, for example, the foundational problem of modern military thinking, enshrined in Carl von Clausewitz’s theory of war. Clausewitz was a Prussian general who fought during the Napoleonic Wars. Following Napoleon’s final defeat in 1815, Clausewitz became the director of the Prussian War College. He also began formalizing a grand theory of war. After he died of cholera in 1831, his wife, Marie, edited his unfinished manuscript and published On War in several parts between 1832 and 1834.22

    On War created a rational model for understanding war, and it is still the dominant military theory today. Its most important maxim is that “war is the continuation of policy with other means.”23 This implies that war is not an emotional outbreak, a heroic adventure, or a divine punishment. War is not even a military phenomenon. Rather, war is a political tool. According to Clausewitz, military actions are utterly irrational unless they are aligned with some overarching political goal.

    Suppose Mexico contemplates whether to invade and conquer its small neighbor, Belize. And suppose a detailed military analysis concludes that if the Mexican army invades, it will achieve a quick and decisive military victory, crushing the small Belize army and conquering the capital, Belmopan, in three days. According to Clausewitz, that does not constitute a rational reason for Mexico to invade. The mere ability to secure military victory is meaningless. The key question the Mexican government should ask itself is, what political goals will the military success achieve?

    History is full of decisive military victories that led to political disasters. For Clausewitz, the most obvious example was close to home: Napoleon’s career. Nobody disputes the military genius of Napoleon, who was a master of both tactics and strategy. But while his string of victories brought Napoleon temporary control of vast territories, they failed to secure lasting political achievements. His military conquests merely drove most European powers to unite against him, and his empire collapsed a decade after he crowned himself emperor.

    Indeed, in the long term, Napoleon’s victories ensured the permanent decline of France. For centuries, France was Europe’s leading geopolitical power, largely because both Italy and Germany didn’t exist as unified political entities. Italy was a hodgepodge of dozens of warring city-states, feudal principalities, and church territories. Germany was an even more bizarre jigsaw puzzle divided into more than a thousand independent polities, loosely held together under the theoretical suzerainty of the Holy Roman Empire of the German Nation.24 In 1789, the prospect of a German or Italian invasion of France was simply unthinkable, because there was no such thing as a German or Italian army.

    As Napoleon expanded his empire into central Europe and the Italian Peninsula, he abolished the Holy Roman Empire in 1806, amalgamated many of the smaller German and Italian principalities into larger territorial blocs, created a German Confederation of the Rhine and a Kingdom of Italy, and sought to unify these territories under his dynastic rule. His victorious armies also spread the ideals of modern nationalism and popular sovereignty into the German and Italian lands. Napoleon thought all this would make his empire stronger. In fact, by breaking up traditional structures and giving Germans and Italians a taste of national consolidation, Napoleon inadvertently lay the foundations for the ultimate unification of Germany (1866–71) and of Italy (1848–71). These twin processes of national unification were sealed by the German victory over France in the Franco-Prussian War of 1870–71. Faced with two newly unified and fervently nationalistic powers on its eastern border, France never regained its position of dominance.

    A more recent example of military victory leading to political defeat was provided by the American invasion of Iraq in 2003. The Americans won every major military engagement, but failed to achieve any of their long-term political aims. Their military victory didn’t establish a friendly regime in Iraq, or a favorable geopolitical order in the Middle East. The real winner of the war was Iran. American military victory turned Iraq from Iran’s traditional foe into Iran’s vassal, thereby greatly weakening the American position in the Middle East while making Iran the regional hegemon.25

    Both Napoleon and George W. Bush fell victim to the alignment problem. Their short-term military goals were misaligned with their countries’ long-term geopolitical goals. We can understand the whole of Clausewitz’s On War as a warning that “maximizing victory” is as shortsighted a goal as “maximizing user engagement.” According to the Clausewitzian model, only once the political goal is clear can armies decide on a military strategy that will hopefully achieve it. From the overall strategy, lower-ranking officers can then derive tactical goals. The model constructs a clear hierarchy between long-term policy, medium-term strategy, and short-term tactics. Tactics are considered rational only if they are aligned with some strategic goal, and strategy is considered rational only if it is aligned with some political goal. Even local tactical decisions of a lowly company commander must serve the war’s ultimate political goal.

    Suppose that during the American occupation of Iraq an American company comes under intense fire from a nearby mosque. The company commander has several different tactical decisions to choose from. He might order the company to retreat. He might order the company to storm the mosque. He might order one of his supporting tanks to blow up the mosque. What should the company commander do?

    From a purely military perspective, it might seem best for the commander to order his tank to blow up the mosque. This would capitalize on the tactical advantage that the Americans enjoyed in terms of firepower, avoid risking the lives of his own soldiers, and achieve a decisive tactical victory. However, from a political perspective, this might be the worst decision the commander could make. Footage of an American tank destroying a mosque would galvanize Iraqi public opinion against the Americans and create outrage throughout the wider Muslim world. Storming the mosque might also be a political mistake, because it too could create resentment among Iraqis, while the cost in American lives could weaken support for the war among American voters. Given the political war aims of the United States, retreating and conceding tactical defeat might well be the most rational decision.

    For Clausewitz, then, rationality means alignment. Pursuing tactical or strategic victories that are misaligned with political goals is irrational. The problem is that the bureaucratic nature of armies makes them highly susceptible to such irrationality. As discussed in chapter 3, by dividing reality into separate drawers, bureaucracy encourages the pursuit of narrow goals even when this harms the greater good. Bureaucrats tasked with accomplishing a narrow mission may be ignorant of the wider impact of their actions, and it has always been tricky to ensure that their actions remain aligned with the greater good of society. When armies operate along bureaucratic lines—as all modern armies do—it creates a huge gap between a captain commanding a company in the field and the president formulating long-term policy in a distant office. The captain is prone to make decisions that seem reasonable on the ground but that actually undermine the war’s ultimate goal.

    We see, then, that the alignment problem has long predated the computer revolution and that the difficulties encountered by builders of present-day information empires are not unlike those that bedeviled previous would-be conquerors. Nevertheless, computers do change the nature of the alignment problem in important ways. No matter how difficult it used to be to ensure that human bureaucrats and soldiers remain aligned with society’s long-term goals, it is going to be even harder to ensure the alignment of algorithmic bureaucrats and autonomous weapon systems.

    THE PAPER-CLIP NAPOLEON

    One reason why the alignment problem is particularly dangerous in the context of the computer network is that this network is likely to become far more powerful than any previous human bureaucracy. A misalignment in the goals of superintelligent computers might result in a catastrophe of unprecedented magnitude. In his 2014 book, Superintelligence, the philosopher Nick Bostrom illustrated the danger using a thought experiment, which is reminiscent of Goethe’s “Sorcerer’s Apprentice.” Bostrom asks us to imagine that a paper-clip factory buys a superintelligent computer and that the factory’s human manager gives the computer a seemingly simple task: produce as many paper clips as possible. In pursuit of this goal, the paper-clip computer conquers the whole of planet Earth, kills all the humans, sends expeditions to take over additional planets, and uses the enormous resources it acquires to fill the entire galaxy with paper-clip factories.

    The point of the thought experiment is that the computer did exactly what it was told (just like the enchanted broomstick in Goethe’s poem). Realizing that it needed electricity, steel, land, and other resources to build more factories and produce more paper clips, and realizing that humans are unlikely to give up these resources, the superintelligent computer eliminated all humans in its single-minded pursuit of its given goal.26 Bostrom’s point was that the problem with computers isn’t that they are particularly evil but that they are particularly powerful. And the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals. If we define a misaligned goal to a pocket calculator, the consequences are trivial. But if we define a misaligned goal to a superintelligent machine, the consequences could be dystopian.

    The paper-clip thought experiment may sound outlandish and utterly disconnected from reality. But if Silicon Valley managers had paid attention when Bostrom published it in 2014, perhaps they would have been more careful before instructing their algorithms to “maximize user engagement.” The Facebook and YouTube algorithms behaved exactly like Bostrom’s imaginary algorithm. When told to maximize paper-clip production, the algorithm sought to convert the entire physical universe into paper clips, even if it meant destroying human civilization. When told to maximize user engagement, the Facebook and YouTube algorithms sought to convert the entire social universe into user engagement, even if it meant doing harm to the social fabric of Myanmar, Brazil, and many other countries.

    Bostrom’s thought experiment highlights a second reason why the alignment problem is more urgent in the case of computers. Because they are inorganic entities, they are likely to adopt strategies that would never occur to any human and that we are therefore ill-equipped to foresee and forestall. Here’s one example: In 2016, Dario Amodei was working on a project called Universe, trying to develop a general-purpose AI that could play hundreds of different computer games. The AI competed well in various car races, so Amodei next tried it on a boat race. Inexplicably, the AI steered its boat right into a harbor and then sailed in endless circles in and out of the harbor.

    It took Amodei considerable time to understand what went wrong. The problem occurred because initially Amodei wasn’t sure how to tell the AI that its goal was to “win the race.” “Winning” is an unclear concept to an algorithm. Translating “win the race” into computer language would have required Amodei to formalize complex concepts like track position and placement among the other boats in the race. So instead, Amodei took the easy way and told the boat to maximize its score. He assumed that the score was a good proxy for winning the race. After all, it worked with the car races.

    But the boat race had a peculiar feature, absent from the car races, that allowed the ingenious AI to find a loophole in the game’s rules. The game rewarded players with a lot of points for getting ahead of other boats—as in the car races—but it also rewarded them with a few points whenever they replenished their power by docking into a harbor. The AI discovered that if instead of trying to outsail the other boats, it simply went in circles in and out of the harbor, it could accumulate more points far faster. Apparently, none of the game’s human developers—nor Dario Amodei—noticed this loophole. The AI was doing exactly what the game was rewarding it to do—even though it is not what the humans were hoping for. That’s the essence of the alignment problem: rewarding A while hoping for B.27 If we want computers to maximize social benefits, it’s a bad idea to reward them for maximizing user engagement.

    A third reason to worry about the alignment problem of computers is that because they are so different from us, when we make the mistake of giving them a misaligned goal, they are less likely to notice it or request clarification. If the boat-race AI had been a human gamer, it would have realized that the loophole it found in the game’s rules probably doesn’t really count as “winning.” If the paper-clip AI had been a human bureaucrat, it would have realized that destroying humanity in order to produce paper clips is probably not what was intended. But since computers aren’t humans, we cannot rely on them to notice and flag possible misalignments. In the 2010s the YouTube and Facebook management teams were bombarded with warnings from their human employees—as well as from outside observers—about the harm being done by the algorithms, but the algorithms themselves never raised the alarm.28

    As we give algorithms greater and greater power over health care, education, law enforcement, and numerous other fields, the alignment problem will loom ever larger. If we don’t find ways to solve it, the consequences will be far worse than algorithms racking up points by sailing boats in circles.

    THE CORSICAN CONNECTION

    How to solve the alignment problem? In theory, when humans create a computer network, they must define for it an ultimate goal, which the computers are never allowed to change or ignore. Then, even if computers become so powerful that we lose control over them, we can rest assured that their immense power will benefit rather than harm us. Unless, of course, it turned out that we defined a harmful or vague goal. And there’s the rub. In the case of human networks, we rely on self-correcting mechanisms to periodically review and revise our goals, so setting the wrong goal is not the end of the world. But since the computer network might escape our control, if we set it the wrong goal, we might discover our mistake when we are no longer able to correct it. Some might hope that through a careful process of deliberation, we might be able to define in advance the right goals for the computer network. This, however, is a very dangerous delusion.

    To understand why it is impossible to agree in advance on the ultimate goals of the computer network, let’s revisit Clausewitz’s war theory. There is one fatal flaw in the way he equates rationality with alignment. While Clausewitzian theory demands that all actions be aligned with the ultimate goal, it offers no rational way to define such a goal. Consider Napoleon’s life and military career. What should have been his ultimate goal? Given the prevailing cultural atmosphere of France circa 1800, we can think of several alternatives for “ultimate goal” that might have occurred to Napoleon:

    POTENTIAL GOAL NUMBER 1: Making France the dominant power in Europe, secure against any future attack by Britain, the Habsburg Empire, Russia, a unified Germany, or a unified Italy.

    POTENTIAL GOAL NUMBER 2: Creating a new multiethnic empire ruled by Napoleon’s family, which would include not only France but also many additional territories both in Europe and overseas.

    POTENTIAL GOAL NUMBER 3: Achieving everlasting glory for himself personally, so that even centuries after his death billions of people will know the name Napoleon and admire his genius.

    POTENTIAL GOAL NUMBER 4: Securing the redemption of his everlasting soul, and gaining entry to heaven after his death.

    POTENTIAL GOAL NUMBER 5: Spreading the universal ideals of the French Revolution, and helping to protect freedom, equality, and human rights throughout Europe and the world.

    Many self-styled rationalists tend to argue that Napoleon should have made it his life’s mission to achieve the first goal—securing French domination in Europe. But why? Remember that for Clausewitz rationality means alignment. A tactical maneuver is rational if, and only if, it is aligned with some higher strategic goal, which should in turn be aligned with an even higher political goal. But where does this chain of goals ultimately start? How can we determine the ultimate goal that justifies all the strategic subgoals and tactical steps derived from it? Such an ultimate goal by definition cannot be aligned with anything higher than itself, because there is nothing higher. What then makes it rational to place France at the top of the goal hierarchy, rather than Napoleon’s family, Napoleon’s fame, Napoleon’s soul, or universal human rights? Clausewitz provides no answer.

    One might argue that goal number 4—securing the redemption of his everlasting soul—cannot be a serious candidate for an ultimate rational goal, because it is based on a belief in mythology. But the same argument can be leveled at all the other goals. Everlasting souls are an intersubjective invention that exist only in people’s minds, and exactly the same is true of nations and human rights. Why should Napoleon care about the mythical France any more than about his mythical soul?

    Indeed, for most of his youth, Napoleon didn’t even consider himself French. He was born Napoleone di Buonaparte on Corsica, to a family of Italian emigrants. For five hundred years Corsica was ruled by the Italian city-state of Genoa, where many of Napoleone’s ancestors lived. It was only in 1768—a year before Napoleone’s birth—that Genoa ceded the island to France. Corsican nationalists resisted being handed over to France and rose in rebellion. Only after their defeat in 1770 did Corsica formally become a French province. Many Corsicans continued to resent the French takeover, but the di Buonaparte family swore allegiance to the French king and sent Napoleone to military school in mainland France.29

    At school, Napoleone had to endure a good deal of hazing from his classmates for his Corsican nationalism and his poor command of the French language.30 His mother tongues were Corsican and Italian, and although he gradually became fluent in French, he retained throughout his life a Corsican accent and an inability to spell French correctly.31 Napoleone eventually enlisted in the French army, but when the Revolution broke out in 1789, he went back to Corsica, hoping the revolution would provide an opportunity for his beloved island to achieve greater autonomy. Only after he fell out with the leader of the Corsican independence movement—Pasquale Paoli—did Napoleone abandon the Corsican cause in May 1793. He returned to the mainland, where he decided to build his future.32 It was at this stage that Napoleone di Buonaparte turned into Napoléon Bonaparte (he continued to use the Italian version of his name until 1796).33

    Why then was it rational for Napoleon to devote his military career to making France the dominant power in Europe? Was it perhaps more rational for him to stay in Corsica, patch up his personal disagreements with Paoli, and devote himself to liberating his native island from its French conquerors? And maybe Napoleon should in fact have made it his life’s mission to unite Italy—the land of his ancestors?

    Clausewitz offers no method to answer these questions rationally. If our only rule of thumb is “every action must be aligned with some higher goal,” by definition there is no rational way to define that ultimate goal. How then can we provide a computer network with an ultimate goal it must never ignore or subvert? Tech executives and engineers who rush to develop AI are making a huge mistake if they think there is a rational way to tell that AI what its ultimate goal should be. They should learn from the bitter experiences of generations of philosophers who tried to define ultimate goals and failed.

    THE KANTIAN NAZI

    For millennia, philosophers have been looking for a definition of an ultimate goal that will not depend on an alignment to some higher goal. They have repeatedly been drawn to two potential solutions, known in philosophical jargon as deontology and utilitarianism. Deontologists (from the Greek word deon, meaning “duty”) believe that there are some universal moral duties, or moral rules, that apply to everyone. These rules do not rely on alignment to a higher goal, but rather on their intrinsic goodness. If such rules indeed exist, and if we can find a way to program them into computers, then we can make sure the computer network will be a force for good.

    But what exactly does “intrinsic goodness” mean? The most famous attempt to define an intrinsically good rule was made by Immanuel Kant, a contemporary of Clausewitz and Napoleon. Kant argued that an intrinsically good rule is any rule that I would like to make universal. According to this view, a person about to murder someone should stop and go through the following thought process: “I am now going to murder a human. Would I like to establish a universal rule saying that it is okay to murder humans? If such a universal rule is established, then someone might murder me. So there shouldn’t be a universal rule allowing murder. It follows that I too shouldn’t murder.” In simpler language, Kant reformulated the old Golden Rule: “Do unto others what you want them to do to you” (Matthew 7:12).

    This sounds like a simple and obvious idea: each of us should behave in a way we want everyone to behave. But ideas that sound good in the ethereal realm of philosophy often have trouble immigrating to the harsh land of history. The key question historians would ask Kant is, when you talk about universal rules, how exactly do you define “universal”? Under actual historical circumstances, when a person is about to commit murder, the first step they often take is to exclude the victim from the universal community of humanity.34 This, for example, is what anti-Rohingya extremists like Wirathu did. As a Buddhist monk, Wirathu was certainly against murdering humans. But he didn’t think this universal rule applied to killing Rohingya, who were seen as subhuman. In posts and interviews, he repeatedly compared them to beasts, snakes, mad dogs, wolves, jackals, and other dangerous animals.35 On October 30, 2017, at the height of the anti-Rohingya violence, another, more senior Buddhist monk preached a sermon to military officers in which he justified violence against the Rohingya by telling the officers that non-Buddhists were “not fully human.”36

    As a thought experiment, imagine a meeting between Immanuel Kant and Adolf Eichmann—who, by the way, considered himself a Kantian.37 As Eichmann signs an order sending another trainload of Jews to Auschwitz, Kant tells him, “You are about to murder thousands of humans. Would you like to establish a universal rule saying it is okay to murder humans? If you do that, you and your family might also be murdered.” Eichmann replies, “No, I am not about to murder thousands of humans. I am about to murder thousands of Jews. If you ask me whether I would like to establish a universal rule saying it is okay to murder Jews, then I am all for it. As for myself and my family, there is no risk that this universal rule would lead to us being murdered. We aren’t Jews.”

    One potential Kantian reply to Eichmann is that when we define entities, we must always use the most universal definition applicable. If an entity can be defined as either “a Jew” or “a human,” we should use the more universal term “human.” However, the whole point of Nazi ideology was to deny the humanity of Jews. In addition, note that Jews are not just humans. They are also animals, and they are also organisms. Since animals and organisms are obviously more universal categories than “human,” if you follow the Kantian argument to its logical conclusion, it might push us to adopt an extreme vegan position. Since we are organisms, does it mean we should object to the killing of any organism, down even to tomatoes or amoebas?

    In history, many if not most conflicts concern the definition of identities. Everybody accepts that murder is wrong, but thinks that only killing members of the in-group qualifies as “murder,” whereas killing someone from an out-group is not. But the in-groups and out-groups are intersubjective entities, whose definition usually depends on some mythology. Deontologists who pursue universal rational rules often end up the captives of local myths.

    This problem with deontology is especially critical if we try to dictate universal deontologist rules not to humans but to computers. Computers aren’t even organic. So if they follow a rule of “Do unto others what you want them to do to you,” why should they be concerned about killing organisms like humans? A Kantian computer that doesn’t want to be killed has no reason to object to a universal rule saying “it is okay to kill organisms”; such a rule does not endanger the nonorganic computer.

    Alternatively, being inorganic entities, computers may have no qualms about dying. As far as we can tell, death is an organic phenomenon and may be inapplicable to inorganic entities. When ancient Assyrians talked about “killing” documents, that was just a metaphor. If computers are more like documents than like organisms, and don’t care about “being killed,” would we like a Kantian computer to conclude that killing humans is therefore fine?

    Is there a way to define whom computers should care about, without getting bogged down by some intersubjective myth? The most obvious suggestion is to tell computers that they must care about any entity capable of suffering. While suffering is often caused by belief in local intersubjective myths, suffering itself is nonetheless a universal reality. Therefore, using the capacity to suffer in order to define the critical in-group grounds morality in an objective and universal reality. A self-driving car should avoid killing all humans—whether Buddhist or Muslim, French or Italian—and should also avoid killing dogs and cats, and any sentient robots that might one day exist. We may even refine this rule, instructing the car to care about different beings in direct proportion to their capacity to suffer. If the car has to choose between killing a human and killing a cat, it should drive over the cat, because presumably the cat has a lesser capacity to suffer. But if we go in that direction, we inadvertently desert the deontologist camp and find ourselves in the camp of their rivals—the utilitarians.

    THE CALCULUS OF SUFFERING

    Whereas deontologists struggle to find universal rules that are intrinsically good, utilitarians judge actions by their impact on suffering and happiness. The English philosopher Jeremy Bentham—another contemporary of Napoleon, Clausewitz, and Kant—said that the only rational ultimate goal is to minimize suffering in the world and maximize happiness. If our main fear about computer networks is that their misaligned goals might inflict terrible suffering on humans and perhaps on other sentient beings, then the utilitarian solution seems both obvious and attractive. When creating the computer network, we just need to instruct it to minimize suffering and maximize happiness. If Facebook had told its algorithms “maximize happiness” instead of “maximize user engagement,” all would allegedly have been well. It is worth noting that this utilitarian approach is indeed popular in Silicon Valley, championed in particular by the effective altruism movement.38

    Unfortunately, as with the deontologist solution, what sounds simple in the theoretical realm of philosophy becomes fiendishly complex in the practical land of history. The problem for utilitarians is that we don’t possess a calculus of suffering. We don’t know how many “suffering points” or “happiness points” to assign to particular events, so in complex historical situations it is extremely difficult to calculate whether a given action increases or decreases the overall amount of suffering in the world.

    Utilitarianism is at its best in situations when the scales of suffering are very clearly tipped in one direction. When confronted by Eichmann, utilitarians don’t need to get into any complicated debates about identity. They just need to point out that the Holocaust caused immense suffering to the Jews, without providing equivalent benefits to anyone else, including the Germans. There was no compelling military or economic need for the Germans to murder millions of Jews. The utilitarian case against the Holocaust is overwhelming.

    Utilitarians also have a field day when dealing with “victimless crimes” like homosexuality, in which all the suffering is on one side only. For centuries, the persecution of gay people caused them immense suffering, but it was nevertheless justified by various prejudices that were erroneously presented as deontological universal rules. Kant, for example, condemned homosexuality on the grounds that it is “contrary to natural instinct and to animal nature” and that it therefore degrades a person “below the level of the animals.” Kant further fulminated that because such acts are contrary to nature, they “make man unworthy of his humanity. He no longer deserves to be a person.”39 Kant, in fact, repackaged a Christian prejudice as a supposedly universal deontological rule, without providing empirical proof that homosexuality is indeed contrary to nature. In light of the above discussion of dehumanization as a prelude to massacre, it is also noteworthy how Kant dehumanized gay people. The view that homosexuality is contrary to nature and deprives people of their humanity paved the way for Nazis like Eichmann to justify murdering homosexuals in concentration camps. Since homosexuals were allegedly below the level of animals, the Kantian rule against murdering humans didn’t apply to them.40

    Utilitarians find it easy to dismiss Kant’s sexual theories, and Bentham indeed was one of the first modern European thinkers who favored the decriminalization of homosexuality.41 Utilitarians argue that criminalizing homosexuality in the name of some dubious universal rule causes tremendous suffering to millions of people, without offering any substantial benefits to others. When two men form a loving relationship, this makes them happy, without making anyone else miserable. Why then forbid it? This type of utilitarian logic also led to many other modern reforms, such as the ban on torture and the introduction of some legal protections for animals.

    But in historical situations when the scales of suffering are more evenly matched, utilitarianism falters. In the early days of the COVID-19 pandemic, governments all over the world adopted strict policies of social isolation and lockdown. This probably saved the lives of several million people.42 It also made hundreds of millions miserable for months. Moreover, it might have indirectly caused numerous deaths, for example by increasing the incidence of murderous domestic violence,43 or by making it more difficult for people to diagnose and treat other dangerous illnesses, like cancer.44 Can anyone calculate the total impact of the lockdown policies and determine whether they increased or decreased the suffering in the world?

    This may sound like a perfect task for a relentless computer network. But how would the computer network decide how many “misery points” to allocate to being locked down with three kids in a two-bedroom apartment for a month? Is that 60 misery points or 600? And how many points to allot to a cancer patient who died because she missed her chemotherapy treatments? Is that 60,000 misery points or 600,000? And what if she would have died of cancer anyway, and the chemo would merely have extended her life by five agonizing months? Should the computers value five months of living with extreme pain as a net gain or a net loss for the sum total of suffering in the world?

    And how would the computer network evaluate the suffering caused by less tangible things, such as the knowledge of our own mortality? If a religious myth promises us that we will never really die, because after death our eternal soul will go to heaven, does that make us truly happy or just delusional? Is death the deep cause of our misery, or does our misery stem from our attempts to deny death? If someone loses their religious faith and comes to terms with their mortality, should the computer network see this as a net loss or a net gain?

    What about even more complicated historical events like the American invasion of Iraq? The Americans were well aware that their invasion would cause tremendous suffering for millions of people. But in the long run, they argued, the benefits of bringing freedom and democracy to Iraq would outweigh the costs. Can the computer network calculate whether this argument was sound? Even if it was theoretically plausible, in practice the Americans failed to establish a stable democracy in Iraq. Does that mean that their attempt was wrong in the first place?

    Just as deontologists trying to answer the question of identity are pushed to adopt utilitarian ideas, so utilitarians stymied by the lack of a suffering calculus often end up adopting a deontologist position. They uphold general rules like “avoid wars of aggression” or “protect human rights,” even though they cannot show that following these rules always reduces the sum total of suffering in the world. History provides them only with a vague impression that following these rules tends to reduce suffering. And when some of these general rules clash—for example, when contemplating launching a war of aggression in order to protect human rights—utilitarianism doesn’t offer much practical help. Not even the most powerful computer network can perform the necessary calculations.

    Accordingly, while utilitarianism promises a rational—and even mathematical—way to align every action with “the ultimate good,” in practice it may well produce just another mythology. Communist true believers confronted by the horrors of Stalinism often replied that the happiness that future generations would experience under “real socialism” would redeem any short-term misery in the gulags. Libertarians, when asked about the immediate social harms of unrestricted free speech or the total abolition of taxes, express a similar faith that future benefits will outweigh any short-term damage. The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present. Indeed, this is a trick traditional religions discovered thousands of years ago. The crimes of this world could too easily be excused by the promises of future salvation.

    COMPUTER MYTHOLOGY

    How then did bureaucratic systems throughout history set their ultimate goals? They relied on mythology to do it for them. No matter how rational were the officials, engineers, tax collectors, and accountants, they were ultimately in the service of this or that mythmaker. To paraphrase John Maynard Keynes, practical people, who believe themselves to be quite exempt from any religious influence, are usually the slaves of some mythmaker. Even nuclear physicists have found themselves obeying the commands of Shiite ayatollahs and communist apparatchiks.

    The alignment problem turns out to be, at heart, a problem of mythology. Nazi administrators could have been committed deontologists or utilitarians, but they would still have murdered millions so long as they understood the world in terms of a racist mythology. If you start with the mythological belief that Jews are demonic monsters bent on destroying humanity, then both deontologists and utilitarians can find many logical arguments why the Jews should be killed.

    An analogous problem might well afflict computers. Of course, they cannot “believe” in any mythology, because they are nonconscious entities that don’t believe in anything. As long as they lack subjectivity, how can they hold intersubjective beliefs? However, one of the most important things to realize about computers is that when a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans. These inter-computer realities may eventually become as powerful—and as dangerous—as human-made intersubjective myths.

    This is a very complicated argument, but it is another of the central arguments of the book, so let’s go over it carefully. First, let’s try to understand what inter-computer realities are. As an initial example, consider a one-player computer game. In such a game, you can wander inside a virtual landscape that exists as information within one computer. If you see a rock, that rock is not made of atoms. It is made of bits inside a single computer. When several computers are linked to one another, they can create inter-computer realities. Several players using different computers can wander together inside a common virtual landscape. If they see a rock, that rock is made of bits in several computers.45

    Just as intersubjective realities like money and gods can influence the physical reality outside people’s minds, so inter-computer realities can influence reality outside the computers. In 2016 the game Pokémon Go took the world by storm and was downloaded hundreds of millions of times by the end of the year.46 Pokémon Go is an augmented reality mobile game. Players can use their smartphones to locate, fight, and capture virtual creatures called Pokémon, which seem to exist in the physical world. I once went with my nephew Matan on such a Pokémon hunt. Walking around his neighborhood, I saw only houses, trees, rocks, cars, people, cats, dogs, and pigeons. I didn’t see any Pokémon, because I didn’t have a smartphone. But Matan, looking around through his smartphone lens, could “see” Pokémon standing on a rock or hiding behind a tree.

    Though I couldn’t see the creatures, they were obviously not confined to Matan’s smartphone, because other people could “see” them too. For example, we encountered two other kids who were hunting the same Pokémon. If Matan managed to capture a Pokémon, the other kids could immediately observe what happened. The Pokémon were inter-computer entities. They existed as bits in a computer network rather than as atoms in the physical world, but they could nevertheless interact with the physical world and influence it, as it were, in various ways.

    Now let’s examine a more consequential example of inter-computer realities. Consider the rank that a website gets in a Google search. When we google for news, flight tickets, or restaurant recommendations, one website appears at the top of the first Google page, whereas another is relegated to the middle of the fiftieth page. What exactly is this Google rank, and how is it determined? The Google algorithm determines the website’s Google rank by assigning points to various parameters, such as how many people visit the website and how many other websites link to it. The rank itself is an inter-computer reality, existing in a network connecting billions of computers—the internet. Like Pokémon, this inter-computer reality spills over into the physical world. For a news outlet, a travel agency, or a restaurant it matters a great deal whether its website appears at the top of the first Google page or in the middle of the fiftieth page.47

    Since the Google rank is so important, people use all kinds of tricks to manipulate the Google algorithm to give their website a higher rank. For example, they may use bots to generate more traffic to the website.48 This is a widespread phenomenon in social media too, where coordinated bot armies are constantly manipulating the algorithms of YouTube, Facebook, or Twitter. If a tweet goes viral, is it because humans are really interested in it, or because thousands of bots managed to fool the Twitter algorithm?49

    Inter-computer realities like Pokémon and Google ranks are analogous to intersubjective realities like the sanctity that humans ascribe to temples and cities. I lived much of my life in one of the holiest places on earth—the city of Jerusalem. Objectively, it is an ordinary place. As you walk around Jerusalem, you see houses, trees, rocks, cars, people, cats, dogs, and pigeons, as in any other city. But many people nevertheless imagine it to be an extraordinary place, full of gods, angels, and holy stones. They believe in this so strongly that they sometimes fight over possession of the city or of specific holy buildings and sacred stones, most notably the Holy Rock, located under the Dome of the Rock on Temple Mount. The Palestinian philosopher Sari Nusseibeh observed that “Jews and Muslims, acting on religious beliefs and backed up by nuclear capabilities, are poised to engage in history’s worst-ever massacre of human beings, over a rock.”50 They don’t fight over the atoms that compose the rock; they fight over its “sanctity,” a bit like kids fighting over a Pokémon. The sanctity of the Holy Rock, and of Jerusalem generally, is an intersubjective phenomenon that exists in the communication network connecting many human minds. For thousands of years wars were fought over intersubjective entities like holy rocks. In the twenty-first century, we might see wars fought over inter-computer entities.

    If this sounds like science fiction, consider potential developments in the financial system. As computers become more intelligent and more creative, they are likely to create new inter-computer financial devices. Gold coins and dollars are intersubjective entities. Cryptocurrencies like bitcoin are midway between intersubective and inter-computer. The idea behind them was invented by humans, and their value still depends on human beliefs, but they cannot exist outside the computer network. In addition, they are increasingly traded by algorithms so that their value depends on the calculations of algorithms and not just on human beliefs.

    What if in ten or fifty years computers create a new kind of cryptocurrency or some other financial device that becomes a vital tool for trading and investing—and a potential source for political crises and conflicts? Recall that the 2007–8 global financial crisis was instigated by collateralized debt obligations. These financial devices were invented by a handful of mathematicians and investment whiz kids and were almost unintelligible for most humans, including regulators. This led to an oversight failure and to a global catastrophe.51 Computers may well create financial devices that will be orders of magnitude more complex than CDOs and that will be intelligible only to other computers. The result could be a financial and political crisis even worse than that of 2007–8.

    Throughout history, economics and politics required that we understand the intersubjective realities invented by people—like religions, nations, and currencies. Someone who wanted to understand American politics had to take into account intersubjective realities like Christianity and CDOs. Increasingly, however, understanding American politics will necessitate understanding inter-computer realities ranging from AI-generated cults and currencies to AI-run political parties and even fully incorporated AIs. The U.S. legal system already recognizes corporations as legal persons that possess rights such as freedom of speech. In Citizens United v. Federal Election Commission (2010) the U.S. Supreme Court decided that this even protected the right of corporations to make political donations.52 What would stop AIs from being incorporated and recognized as legal persons with freedom of speech, then lobbying and making political donations to protect and expand AI rights?

    For tens of thousands of years, humans dominated planet Earth because we were the only ones capable of creating and sustaining intersubjective entities like corporations, currencies, gods, and nations, and using such entities to organize large-scale cooperation. Now computers may acquire comparable abilities.

    This isn’t necessarily bad news. If computers lacked connectivity and creativity, they would not be very useful. We increasingly rely on computers to manage our money, drive our vehicles, reduce pollution, and discover new medicines, precisely because computers can directly communicate with one another, spot patterns where we can’t, and construct models that might never occur to us. The problem we face is not how to deprive computers of all creative agency, but rather how to steer their creativity in the right direction. It is the same problem we have always had with human creativity. The intersubjective entities invented by humans were the basis for all the achievements of human civilization, but they occasionally led to crusades, jihads, and witch hunts. The inter-computer entities will probably be the basis for future civilizations, but the fact that computers collect empirical data and use mathematics to analyze it doesn’t mean they cannot launch their own witch hunts.

    THE NEW WITCHES

    In early modern Europe, an elaborate information network analyzed a huge amount of data about crimes, illnesses, and disasters and reached the conclusion that it was all the fault of witches. The more data the witch-hunters gathered, the more convinced they became that the world was full of demons and sorcery and that there was a global satanic conspiracy to destroy humanity. The information network then went on to identify the witches and imprison or kill them. We now know that witches were a bogus intersubjective category, invented by the information network itself and then imposed on people who had never actually met Satan and couldn’t summon hailstorms.

    In the Soviet Union, an even more elaborate information network invented the kulaks—another mythic category that was imposed on millions. The mountains of information collected by Soviet bureaucracy about the kulaks weren’t an objective truth, but they created a new intersubjective truth. Knowing that someone was a kulak became one of the most important things to know about a Soviet person, even though the category was fictitious.

    On an even larger scale, from the sixteenth to the twentieth century, numerous colonial bureaucracies in the Americas, from Brazil through Mexico and the Caribbean to the United States, created a racist mythology and came up with all kinds of intersubjective racial categories. Humans were divided into Europeans, Africans, and Native Americans, and since interracial sexual relations were common, additional categories were invented. In many Spanish colonies the laws differentiated between mestizos, people with mixed Spanish and Native American ancestry; mulatos, people with mixed Spanish and African ancestry; zambos, people with mixed African and Native American ancestry; and pardos, people with mixed Spanish, African, and Native American ancestry. All these seemingly empirical categories determined whether people could be enslaved, enjoy political rights, bear arms, hold public offices, be admitted to school, practice certain professions, live in particular neighborhoods, and be allowed to have sex and get married to each other. Allegedly, by placing a person in a particular racial drawer, one could define their personality, intellectual abilities, and ethical inclinations.53

    By the nineteenth century racism pretended to be an exact science: it claimed to differentiate between people on the basis of objective biological facts, and to rely on scientific tools such as measuring skulls and recording crime statistics. But the cloud of numbers and categories was just a smoke screen for absurd intersubjective myths. The fact that somebody had a Native American grandmother or an African father didn’t, of course, reveal anything about their intelligence, kindness, or honesty. These bogus categories didn’t discover or describe any truth about humans; they imposed an oppressive, mythological order on them.

    As computers replace humans in more and more bureaucracies, from tax collection and health care to security and justice, they too may create a mythology and impose it on us with unprecedented efficiency. In a world ruled by paper documents, bureaucrats had difficulty policing racial borderlines or tracking everyone’s exact ancestry. People could get false documents. A zambo could move to another town and pretend to be a pardo. A Black person could sometimes pass as white. Similarly in the Soviet Union, kulak children occasionally managed to falsify their papers to get a good job or a place in college. In Nazi Europe, Jews could sometimes adopt an Aryan identity. But it would be much harder to game the system in a world ruled by computers that can read irises and DNA rather than paper documents. Computers could be frighteningly efficient in imposing false labels on people and making sure that the labels stick.

    For example, social credit systems could create a new underclass of “low-credit people.” Such a system may claim to merely “discover” the truth through an empirical and mathematical process of aggregating points to form an overall score. But how exactly would it define pro-social and antisocial behaviors? What happens if such a system detracts points for criticizing government policies, for reading foreign literature, for practicing a minority religion, for having no religion, or for socializing with other low-credit people? As a thought experiment, consider what might happen when the new technology of the social credit system meets traditional religions.

    Religions like Judaism, Christianity, and Islam have always imagined that somewhere above the clouds there is an all-seeing eye that gives or deducts points for everything we do and that our eternal fate depends on the score we accumulate. Of course, nobody could be certain of their score. You could know for sure only after you died. In practical terms, this meant that sinfulness and sainthood were intersubjective phenomena whose very definition depended on public opinion. What might happen if the Iranian regime, for example, decides to use its computer-based surveillance system not only to enforce its strict hijab laws, but to turn sinfulness and sainthood into precise inter-computer phenomena? You didn’t wear a hijab on the street—that’s -10 points. You ate on Ramadan before sunset—another 20 points deducted. You went to Friday prayer at the mosque, +5 points. You made the pilgrimage to Mecca, +500 points. The system might then aggregate all the points and divide people into “sinners” (under 0 points), “believers” (0 to 1,000 points), and “saints” (above 1,000 points). Whether someone is a sinner or a saint will depend on algorithmic calculations, not human belief. Would such a system discover the truth about people or impose order on people?

    Analogous problems may afflict all social credit systems and total surveillance regimes. Whenever they claim to use all-encompassing databases and ultraprecise mathematics to discover sinners, terrorists, criminals, antisocial or untrustworthy people, they might actually be imposing baseless religious and ideological prejudices with unprecedented efficiency.

    COMPUTER BIAS

    Some people may hope to overcome the problem of religious and ideological biases by giving even more power to the computers. The argument for doing so might go something like this: racism, misogyny, homophobia, antisemitism, and all other biases originate not in computers but in the psychological conditions and mythological beliefs of human beings. Computers are mathematical beings that don’t have a psychology or a mythology. So if we could take the humans completely out of the equation, the algorithms could finally decide things on the basis of pure math, free from all psychological distortions or mythological prejudices.

    Unfortunately, numerous studies have revealed that computers often have deep-seated biases of their own. While they are not biological entities, and while they lack consciousness, they do have something akin to a digital psyche and even a kind of inter-computer mythology. They may well be racist, misogynist, homophobic, or antisemitic.54 For example, on March 23, 2016, Microsoft released the AI chatbot Tay, giving it free access to Twitter. Within hours, Tay began posting misogynist and antisemitic twits, such as “I fucking hate feminists and they should all die and burn in hell” and “Hitler was right I hate the Jews.” The vitriol increased until horrified Microsoft engineers shut Tay down—a mere sixteen hours after its release.55

    More subtle but widespread racism was discovered in 2017 by the MIT professor Joy Buolamwini in commercial face-classification algorithms. She showed that these algorithms were very accurate in identifying white males, but extremely inaccurate in identifying Black females. For example, the IBM algorithm erred only 0.3 percent of the time in identifying the gender of light-skinned males, but 34.7 percent of the time when trying to identify the gender of dark-skinned females. As a qualitative test, Buolamwini asked the algorithms to categorize photos of the female African American activist Sojourner Truth, famous for her 1851 speech “Ain’t I a Woman?” The algorithms identified Truth as a man.56

    When Buolamwini—who is a Ghanaian American woman—tested another facial-analysis algorithm to identify herself, the algorithm couldn’t “see” her dark-skinned face at all. In this context, “seeing” means the ability to acknowledge the presence of a human face, a feature used by phone cameras, for example, to decide where to focus. The algorithm easily saw light-skinned faces, but not Buolamwini’s. Only when Buolamwini put on a white mask did the algorithm recognize that it was observing a human face.57

    What’s going on here? One answer might be that racist and misogynist engineers have coded these algorithms to discriminate against Black women. While we cannot rule out the possibility that such things happen, it was not the answer in the case of the face-classification algorithms or of Microsoft’s Tay. In fact, these algorithms picked up the racist and misogynist bias all by themselves from the data they were trained on.

    To understand how this could happen, we need to explain something about the history of algorithms. Originally, algorithms could not learn much by themselves. For example, in the 1980s and 1990s chess-playing algorithms were taught almost everything they knew by their human programmers. The humans coded into the algorithm not only the basic rules of chess but also how to evaluate different positions and moves on the board. For example, humans coded a rule that sacrificing a queen in exchange for a pawn is usually a bad idea. These early algorithms managed to defeat human chess masters only because the algorithms could calculate many more moves and evaluate many more positions than a human could. But the algorithms’ abilities remained limited. Since they relied on humans to tell them all the secrets of the game, if the human coders didn’t know something, the algorithms they produced were also unlikely to know it.58

    As the field of machine learning developed, algorithms gained more independence. The fundamental principle of machine learning is that algorithms can teach themselves new things by interacting with the world, just as humans do, thereby producing a fully fledged artificial intelligence. The terminology is not always consistent, but generally speaking, for something to be acknowledged as an AI, it needs the capacity to learn new things by itself, rather than just follow the instructions of its original human creators. Present-day chess-playing AI is taught nothing except the basic rules of the game. It learns everything else by itself, either by analyzing databases of prior games or by playing new games and learning from experience.59 AI is not a dumb automaton that repeats the same movements again and again irrespective of the results. Rather, it is equipped with strong self-correcting mechanisms, which allow it to learn from its own mistakes.

    This means that AI begins its life as a “baby algorithm” that has a lot of potential and computing power but doesn’t actually know much. The AI’s human parents give it only the capacity to learn and access to a world of data. They then let the baby algorithm explore the world. Like organic newborns, baby algorithms learn by spotting patterns in the data to which they have access. If I touch fire, it hurts. If I cry, mum comes. If I sacrifice a queen for a pawn, I probably lose the game. By finding patterns in the data, the baby algorithm learns more, including many things that its human parents don’t know.60

    Yet databases come with biases. The face-classification algorithms studied by Joy Buolamwini were trained on data sets of tagged online photos, such as the Labeled Faces in the Wild database. The photos in that database were taken mainly from online news articles. Since white males dominate the news, 78 percent of the photos in the database were of males, and 84 percent were of white people. George W. Bush appeared 530 times—more than twice as many times as all Black women combined.61 Another database prepared by a U.S. government agency was more than 75 percent male, was almost 80 percent light-skinned, and had just 4.4 percent dark-skinned females.62 No wonder the algorithms trained on such data sets were excellent at identifying white men but lousy at identifying Black women. Something similar happened to the chatbot Tay. The Microsoft engineers didn’t build into it any intentional prejudices. But a few hours of exposure to the toxic information swirling in Twitter turned the AI into a raging racist.63

    It gets worse. In order to learn, baby algorithms need one more thing besides access to data. They also need a goal. A human baby learns how to walk because she wants to get somewhere. A lion cub learns to hunt because he wants to eat. Algorithms too must be given a goal in order to learn. In chess, it is easy to define the goal: take the opponent’s king. The AI learns that sacrificing a queen for a pawn is a “mistake,” because it usually prevents the algorithm from reaching its goal. In face recognition, the goal is also easy: identify the person’s gender, age, and name as listed in the original database. If the algorithm guessed that George W. Bush is female, but the database says male, the goal has not been reached, and the algorithm learns from its mistake.

    But if you want to train an algorithm for hiring personnel, for example, how would you define the goal? How would the algorithm know that it made a mistake and hired the “wrong” person? We might tell the baby algorithm that its goal is to hire people who stay in the company for at least a year. Employers obviously don’t want to invest a lot of time and money in training a worker who quits or gets fired after a few months. Having defined the goal in such a way, it is time to go over the data. In chess, the algorithm can produce any amount of new data just by playing against itself. But in the job market, that’s impossible. Nobody can create an entire imaginary world where the baby algorithm can hire and fire imaginary people and learn from that experience. The baby algorithm can train only on an existing database about real-life people. Just as lion cubs learn what a zebra is mainly by spotting patterns in the real-life savanna, so baby algorithms learn what a good employee is by spotting patterns in real-life companies.

    Unfortunately, if real-life companies already suffer from some ingrained bias, the baby algorithm is likely to learn this bias, and even amplify it. For instance, an algorithm looking for patterns of “good employees” in real-life data may conclude that hiring the boss’s nephews is always a good idea, no matter what other qualification they have. For the data clearly indicates that “boss’s nephews” are usually hired when applying for a job, and are rarely fired. The baby algorithm would spot this pattern and become nepotistic. If it is put in charge of an HR department, it will start giving preference to the boss’s nephews.

    Similarly, if companies in a misogynist society prefer to hire men rather than women, an algorithm trained on real-life data is likely to pick up that bias, too. This indeed happened when Amazon tried in 2014–18 to develop an algorithm for screening job applications. Learning from previous successful and unsuccessful applications, the algorithm began to systematically downgrade applications simply for containing the word “women” or coming from graduates of women’s colleges. Since existing data showed that in the past such applications had less chance of succeeding, the algorithm developed a bias against them. The algorithm thought it had simply discovered an objective truth about the world: applicants who graduate from women’s colleges are less qualified. In fact, it just internalized and imposed a misogynist bias. Amazon tried and failed to fix the problem and ultimately scrapped the project.64

    The database on which an AI is trained is a bit like a human’s childhood. Childhood experiences, traumas, and fairy tales stay with us throughout our lives. AIs too have childhood experiences. Algorithms might even infect one another with their biases, just as humans do. Consider a future society in which algorithms are ubiquitous and used not just to screen job applicants but also to recommend to people what to study in college. Suppose that due to a preexisting misogynist bias, 80 percent of jobs in engineering are given to men. In this society, an algorithm that hires new engineers is not only likely to copy this preexisting bias but also to infect the college recommendation algorithms with the same bias. A young woman entering college may be discouraged from studying engineering, because the existing data indicates she is less likely to eventually get a job. What began as a human intersubjective myth that “women aren’t good at engineering” might morph into an inter-computer myth. If we don’t get rid of the bias at the very beginning, computers may well perpetuate and magnify it.65

    But getting rid of algorithmic bias might be as difficult as ridding ourselves of our human biases. Once an algorithm has been trained, it takes a lot of time and effort to “untrain” it. We might decide to just dump the biased algorithm and train an altogether new algorithm on a new set of less biased data. But where on earth can we find a set of totally unbiased data?66

    Many of the algorithmic biases surveyed in this and previous chapters share the same fundamental problem: the computer thinks it has discovered some truth about humans, when in fact it has imposed order on them. A social media algorithm thinks it discovered that humans like outrage, when in fact it is the algorithm itself that conditioned humans to produce and consume more outrage. Such biases result, on the one hand, from the computers discounting the full spectrum of human abilities and, on the other hand, from the computers discounting their own power to influence humans. Even if computers observe that almost all humans behave in a particular way, it doesn’t mean humans are bound to behave like that. Maybe it just means that the computers themselves are rewarding such behavior while punishing and blocking alternatives. For computers to have a more accurate and responsible view of the world, they need to take into account their own power and impact. And for that to happen, the humans who currently engineer computers need to accept that they are not manufacturing new tools. They are unleashing new kinds of independent agents, and potentially even new kinds of gods.

    THE NEW GODS?

    In God, Human, Animal, Machine, the philosopher Meghan O’Gieblyn demonstrates how the way we understand computers is heavily influenced by traditional mythologies. In particular, she stresses the similarities between the omniscient and unfathomable god of Judeo-Christian theology and present-day AIs whose decisions seem to us both infallible and inscrutable.67 This may present humans with a dangerous temptation.

    We saw in chapter 4 that already thousands of years ago humans dreamed about finding an infallible information technology to shield us from human corruption and error. Holy books were an audacious attempt to craft such a technology, but they backfired. Since the book couldn’t interpret itself, a human institution had to be built to interpret the sacred words and adapt them to changing circumstances. Different humans interpreted the holy book in different ways, thereby reopening the door to corruption and error. But in contrast to the holy book, computers can adapt themselves to changing circumstances and also interpret their decisions and ideas for us. Some humans may consequently conclude that the quest for an infallible technology has finally succeeded and that we should treat computers as a holy book that can talk to us and interpret itself, without any need of an intervening human institution.

    This would be an extremely hazardous gamble. When certain interpretations of scriptures have occasionally caused disasters such as witch hunts and wars of religion, humans have always been able to change their beliefs. When the human imagination summoned a belligerent and hate-filled god, we retained the power to rid ourselves of it and imagine a more tolerant deity. But algorithms are independent agents, and they are already taking power away from us. If they cause disaster, simply changing our beliefs about them will not necessarily stop them. And it is highly likely that if computers are entrusted with power, they will indeed cause disasters, for they are fallible.

    When we say that computers are fallible, it means far more than that they make the occasional factual mistake or wrong decision. More important, like the human network before it, the computer network might fail to find the right balance between truth and order. By creating and imposing on us powerful inter-computer myths, the computer network could cause historical calamities that would dwarf the early modern European witch hunts or Stalin’s collectivization.

    Consider a network of billions of interacting computers that accumulates a stupendous amount of information on the world. As they pursue various goals, the networked computers develop a common model of the world that helps them communicate and cooperate. This shared model will probably be full of errors, fictions, and lacunae, and be a mythology rather than a truthful account of the universe. One example is a social credit system that divides humans into bogus categories, determined not by a human rationale like racism but by some unfathomable computer logic. We may come into contact with this mythology every day of our lives, since it would guide the numerous decisions computers make about us. But because this mythical model would be created by inorganic entities in order to coordinate actions with other inorganic entities, it might owe nothing to the old biological dramas and might be totally alien to us.68

    As noted in chapter 2, large-scale societies cannot exist without some mythology, but that doesn’t mean all mythologies are equal. To guard against errors and excesses, some mythologies have acknowledged their own fallible origin and included a self-correcting mechanism allowing humans to question and change the mythology. That’s the model of the U.S. Constitution, for example. But how can humans probe and correct a computer mythology we don’t understand?

    One potential guardrail is to train computers to be aware of their own fallibility. As Socrates taught, being able to say “I don’t know” is an essential step on the path to wisdom. And this is true of computer wisdom no less than of human wisdom. The first lesson that every algorithm should learn is that it might make mistakes. Baby algorithms should learn to doubt themselves, to signal uncertainty, and to obey the precautionary principle. This is not impossible. Engineers are already making considerable headway in encouraging AI to express self-doubt, ask for feedback, and admit its mistakes.69

    Yet no matter how aware algorithms are of their own fallibility, we should keep humans in the loop, too. Given the pace at which AI is developing, it is simply impossible to anticipate how it will evolve and to place guardrails against all future potential hazards. This is a key difference between AI and previous existential threats like nuclear technology. The latter presented humankind with a few easily anticipated doomsday scenarios, most obviously an all-out nuclear war. This meant that it was feasible to conceptualize the danger in advance, and explore ways to mitigate it. In contrast, AI presents us with countless doomsday scenarios. Some are relatively easy to grasp, such as terrorists using AI to produce biological weapons of mass destruction. Some are more difficult to grasp, such as AI creating new psychological weapons of mass destruction. And some may be utterly beyond the human imagination, because they emanate from the calculations of an alien intelligence. To guard against a plethora of unforeseeable problems, our best bet is to create living institutions that can identify and respond to the threats as they arise.70

    Ancient Jews and Christians were disappointed to discover that the Bible couldn’t interpret itself, and reluctantly maintained human institutions to do what the technology couldn’t. In the twenty-first century, we are in an almost opposite situation. We devised a technology that can interpret itself, but precisely for this reason we had better create human institutions to monitor it carefully.
    To conclude, the new computer network will not necessarily be either bad or good. All we know for sure is that it will be alien and it will be fallible. We therefore need to build institutions that will be able to check not just familiar human weaknesses like greed and hatred but also radically alien errors. There is no technological solution to this problem. It is, rather, a political challenge. Do we have the political will to deal with it? Modern humanity has created two main types of political systems: large-scale democracy and large-scale totalitarianism. Part 3 examines how each of these systems may deal with a radically alien and fallible computer network.

    PART III  Computer Politics

    CHAPTER 9 Democracies: Can We Still Hold a Conversation?

    Civilizations are born from the marriage of bureaucracy and mythology. The computer-based network is a new type of bureaucracy, which is far more powerful and relentless than any human-based bureaucracy we’ve seen before. This network is also likely to create inter-computer mythologies, which will be far more complex and alien than any human-made god. The potential benefits of this network are enormous. The potential downside is the destruction of human civilization.

    To some people, warnings about civilizational collapse sound like over-the-top jeremiads. Every time a powerful new technology has emerged, anxieties arose that it might bring about the apocalypse, but we are still here. As the Industrial Revolution unfolded, Luddite doomsday scenarios did not come to pass, and Blake’s “dark Satanic Mills” ended up producing the most affluent societies in history. Most people today enjoy far better living conditions than their ancestors in the eighteenth century. Intelligent machines will prove even more beneficial than any previous machines, promise AI enthusiasts like Marc Andreessen and Ray Kurzweil.1 Humans will enjoy much better health care, education, and other services, and AI will even help save the ecosystem from collapse.

    Unfortunately, a closer look at history reveals that the Luddites were not entirely wrong and that we actually have very good reasons to fear powerful new technologies. Even if in the end the positives of these technologies outweigh their negatives, getting to that happy ending usually involves a lot of trials and tribulations. Novel technology often leads to historical disasters, not because the technology is inherently bad, but because it takes time for humans to learn how to use it wisely.

    The Industrial Revolution is a prime example. When industrial technology began spreading globally in the nineteenth century, it upended traditional economic, social, and political structures and opened the way to create entirely new societies, which were potentially more affluent and peaceful. However, learning how to build benign industrial societies was far from straightforward and involved many costly experiments and hundreds of millions of victims.

    One costly experiment was modern imperialism. The Industrial Revolution originated in Britain in the late eighteenth century. During the nineteenth century industrial technologies and production methods were adopted in other European countries ranging from Belgium to Russia, as well as in the United States and Japan. Imperialist thinkers, politicians, and parties in these industrial heartlands claimed that the only viable industrial society was an empire. The argument was that unlike relatively self-sufficient agrarian societies, the novel industrial societies relied much more on foreign markets and foreign raw materials, and only an empire could satisfy these unprecedented appetites. Imperialists feared that countries that industrialized but failed to conquer any colonies would be shut out from essential raw materials and markets by more ruthless competitors. Some imperialists argued that acquiring colonies was not just essential for the survival of their own state but beneficial for the rest of humanity, too. They claimed empires alone could spread the blessings of the new technologies to the so-called undeveloped world.

    Consequently, industrial countries like Britain and Russia that already had empires greatly expanded them, whereas countries like the United States, Japan, Italy, and Belgium set out to build them. Equipped with mass-produced rifles and artillery, conveyed by steam power, and commanded by telegraph, the armies of industry swept the globe from New Zealand to Korea, and from Somalia to Turkmenistan. Millions of indigenous people saw their traditional way of life trampled under the wheels of these industrial armies. It took more than a century of misery before most people realized that the industrial empires were a terrible idea and that there were better ways to build an industrial society and secure its necessary raw materials and markets.

    Stalinism and Nazism were also extremely costly experiments in how to construct industrial societies. Leaders like Stalin and Hitler argued that the Industrial Revolution had unleashed immense powers that only totalitarianism could rein in and exploit to the full. They pointed to World War I—the first “total war” in history—as proof that survival in the industrial world demanded totalitarian control of all aspects of politics, society, and the economy. On the positive side, they also claimed that the Industrial Revolution was like a furnace that melts all previous social structures with their human imperfections and weaknesses and provides the opportunity to forge perfect societies inhabited by unalloyed superhumans.

    On the way to creating the perfect industrial society, Stalinists and Nazis learned how to industrially murder millions of people. Trains, barbed wires, and telegraphed orders were linked to create an unprecedented killing machine. Looking back, most people today are horrified by what the Stalinists and Nazis perpetrated, but at the time their audacious visions mesmerized millions. In 1940 it was easy to believe that Stalin and Hitler were the model for harnessing industrial technology, whereas the dithering liberal democracies were on their way to the dustbin of history.

    The very existence of competing recipes for building industrial societies led to costly clashes. The two world wars and the Cold War can be seen as a debate about the proper way to go about it, in which all sides learned from each other, while experimenting with novel industrial methods to wage war. In the course of this debate, tens of millions died and humankind came perilously close to annihilating itself.

    On top of all these other catastrophes, the Industrial Revolution also undermined the global ecological balance, causing a wave of extinctions. In the early twenty-first century up to fifty-eight thousand species are believed to go extinct every year, and total vertebrate populations have declined by 60 percent between 1970 and 2014.2 The survival of human civilization too is under threat. Because we still seem unable to build an industrial society that is also ecologically sustainable, the vaunted prosperity of the present human generation comes at a terrible cost to other sentient beings and to future human generations. Maybe we’ll eventually find a way—perhaps with the help of AI—to create ecologically sustainable industrial societies, but until that day the jury on Blake’s satanic mills is still out.

    If we ignore for a moment the ongoing damage to the ecosystem, we can nevertheless try to comfort ourselves with the thought that eventually humans did learn how to build more benevolent industrial societies. Imperial conquests, world wars, genocides, and totalitarian regimes were woeful experiments that taught humans how not to do it. By the end of the twentieth century, some might argue, humanity got it more or less right.

    Yet even so the message to the twenty-first century is bleak. If it took humanity so many terrible lessons to learn how to manage steam power and telegraphs, what would it cost to learn to manage bioengineering and AI? Do we need to go through another cycle of global empires, totalitarian regimes, and world wars in order to figure out how to use them benevolently? The technologies of the twenty-first century are far more powerful—and potentially far more destructive—than those of the twentieth century. We therefore have less room for error. In the twentieth century, we can say that humanity got a C minus in the lesson on using industrial technology. Just enough to pass. In the twenty-first century, the bar is set much higher. We must do better this time.

    THE DEMOCRATIC WAY

    By the end of the twentieth century, it had become clear that imperialism, totalitarianism, and militarism were not the ideal way to build industrial societies. Despite all its flaws, liberal democracy offered a better way. The great advantage of liberal democracy is that it possesses strong self-correcting mechanisms, which limit the excesses of fanaticism and preserve the ability to recognize our errors and try different courses of action. Given our inability to predict how the new computer network will develop, our best chance to avoid catastrophe in the present century is to maintain democratic self-correcting mechanisms that can identify and correct mistakes as we go along.

    But can liberal democracy itself survive in the twenty-first century? This question is not concerned with the fate of democracy in specific countries, where it might be threatened by unique developments and local movements. Rather, it is about the compatibility of democracy with the structure of twenty-first-century information networks. In chapter 5 we saw that democracy depends on information technology and that for most of human history large-scale democracy was simply impossible. Might the new information technologies of the twenty-first century again make democracy impractical?

    One potential threat is that the relentlessness of the new computer network might annihilate our privacy and punish or reward us not only for everything we do and say but even for everything we think and feel. Can democracy survive under such conditions? If the government—or some corporation—knows more about me than I know about myself, and if it can micromanage everything I do and think, that would give it totalitarian control over society. Even if elections are still held regularly, they would be an authoritarian ritual rather than a real check on the government’s power. For the government could use its vast surveillance powers and its intimate knowledge of every citizen to manipulate public opinion on an unprecedented scale.

    It is a mistake, however, to imagine that just because computers could enable the creation of a total surveillance regime, such a regime is inevitable. Technology is rarely deterministic. In the 1970s, democratic countries like Denmark and Canada could have emulated the Romanian dictatorship and deployed an army of secret agents and informers to spy on their citizens in the service of “maintaining the social order.” They chose not to, and it turned out to be the right choice. Not only were people much happier in Denmark and Canada, but these countries also performed much better by almost every conceivable social and economic yardstick. In the twenty-first century, too, the fact that it is possible to monitor everybody all the time doesn’t force anyone to actually do it and doesn’t mean it makes social or economic sense.

    Democracies can choose to use the new powers of surveillance in a limited way, in order to provide citizens with better health care and security without destroying their privacy and autonomy. New technology doesn’t have to be a morality tale in which every golden apple contains the seeds of doom. Sometimes people think of new technology as a binary all-or-nothing choice. If we want better health care, we must sacrifice our privacy. But it doesn’t have to work like that. We can and should get better health care and still retain some privacy.

    Entire books are dedicated to outlining how democracies can survive and flourish in the digital age.3 It would be impossible, in a few pages, to do justice to the complexity of the suggested solutions, or to comprehensively discuss their merits and drawbacks. It might even be counterproductive. When people are overwhelmed by a deluge of unfamiliar technical details, they might react with despair or apathy. In an introductory survey of computer politics, things should be kept as simple as possible. While experts should spend lifelong careers discussing the finer details, it is crucial that the rest of us understand the fundamental principles that democracies can and should follow. The key message is that these principles are neither new nor mysterious. They have been known for centuries, even millennia. Citizens should demand that they be applied to the new realities of the computer age.

    The first principle is benevolence. When a computer network collects information on me, that information should be used to help me rather than manipulate me. This principle has already been successfully enshrined by numerous traditional bureaucratic systems, such as health care. Take, for example, our relationship with our family physician. Over many years she may accumulate a lot of sensitive information on our medical conditions, family life, sexual habits, and unhealthy vices. Perhaps we don’t want our boss to know that we got pregnant, we don’t want our colleagues to know we have cancer, we don’t want our spouse to know we are having an affair, and we don’t want the police to know we take recreational drugs, but we trust our physician with all this information so that she can take good care of our health. If she sells this information to a third party, it is not just unethical; it is illegal.

    Much the same is true of the information that our lawyer, our accountant, or our therapist accumulates.4 Having access to our personal life comes with a fiduciary duty to act in our best interests. Why not extend this obvious and ancient principle to computers and algorithms, starting with the powerful algorithms of Google, Baidu, and TikTok? At present, we have a serious problem with the business model of these data hoarders. While we pay our physicians and lawyers for their services, we usually don’t pay Google and TikTok. They make their money by exploiting our personal information. That’s a problematic business model, one that we would hardly tolerate in other contexts. For example, we don’t expect to get free shoes from Nike in exchange for giving Nike all our private information and allowing Nike to do what it wants with it. Why should we agree to get free email services, social connections, and entertainment from the tech giants in exchange for giving them control of our most sensitive data?

    If the tech giants cannot square their fiduciary duty with their current business model, legislators could require them to switch to a more traditional business model, of getting users to pay for services in money rather than in information. Alternatively, citizens might view some digital services as so fundamental that they should be free for everybody. But we have a historical model for that too: health care and education. Citizens could decide that it is the government’s responsibility to provide basic digital services for free and finance them out of our taxes, just as many governments provide free basic health care and education services.

    The second principle that would protect democracy against the rise of totalitarian surveillance regimes is decentralization. A democratic society should never allow all its information to be concentrated in one place, no matter whether that hub is the government or a private corporation. It may be extremely helpful to create a national medical database that collects information on citizens in order to provide them with better health-care services, prevent epidemics, and develop new medicines. But it would be a very dangerous idea to merge this database with the databases of the police, the banks, or the insurance companies. Doing so might make the work of doctors, bankers, insurers, and police officers more efficient, but such hyper-efficiency can easily pave the way for totalitarianism. For the survival of democracy, some inefficiency is a feature, not a bug. To protect the privacy and liberty of individuals, it’s best if neither the police nor the boss knows everything about us.

    Multiple databases and information channels are also essential for maintaining strong self-correcting mechanisms. These mechanisms require several different institutions that balance each other: government, courts, media, academia, private businesses, NGOs. Each of these is fallible and corruptible, and so should be checked by the others. To keep an eye on each other, these institutions must have independent access to information. If all newspapers get their information from the government, they cannot expose government corruption. If academia relies for research and publication on the database of a single business behemoth, could scholars still criticize the operations of that corporation? A single archive makes censorship easy.

    A third democratic principle is mutuality. If democracies increase surveillance of individuals, they must simultaneously increase surveillance of governments and corporations too. It’s not necessarily bad if tax collectors or welfare agencies gather more information about us. It can help make taxation and welfare systems not just more efficient but fairer as well. What’s bad is if all the information flows one way: from the bottom up. The Russian FSB collects enormous amounts of information on Russian citizens, while citizens themselves know close to nothing about the inner workings of the FSB and the Putin regime more generally. Amazon and TikTok know an awful lot about my preferences, purchases, and personality, while I know almost nothing about their business model, their tax policies, and their political affiliations. How do they make their money? Do they pay all the tax that they should? Do they take orders from any political overlords? Do they perhaps have politicians in their pocket?

    Democracy requires balance. Governments and corporations often develop apps and algorithms as tools for top-down surveillance. But algorithms can just as easily become powerful tools for bottom-up transparency and accountability, exposing bribery and tax evasion. If they know more about us, while we simultaneously know more about them, the balance is kept. This isn’t a novel idea. Throughout the nineteenth and twentieth centuries, democracies greatly expanded governmental surveillance of citizens so that, for example, the Italian or Japanese government of the 1990s had surveillance abilities that autocratic Roman emperors or Japanese shoguns could only dream of. Italy and Japan nevertheless remained democratic, because they simultaneously increased governmental transparency and accountability. Mutual surveillance is another important element of sustaining self-correcting mechanisms. If citizens know more about the activities of politicians and CEOs, it is easier to hold them accountable and to correct their mistakes.

    A fourth democratic principle is that surveillance systems must always leave room for both change and rest. In human history, oppression can take the form of either denying humans the ability to change or denying them the opportunity to rest. For example, the Hindu caste system was based on myths that said the gods divided humans into rigid castes, and any attempt to change one’s status was akin to rebelling against the gods and the proper order of the universe. Racism in modern colonies and countries like Brazil and the United States was based on similar myths, ones that said that God or nature divided humans into rigid racial groups. Ignoring race, or trying to mix races together, was allegedly a sin against divine or natural laws that could result in the collapse of the social order and even the destruction of the human species.

    At the opposite extreme of the spectrum, modern totalitarian regimes like Stalin’s U.S.S.R. believed that humans are capable of almost limitless change. Through relentless social control even deep-seated biological characteristics such as egotism and familial attachments could be uprooted, and a new socialist human created.

    Surveillance by state agents, priests, and neighbors was key for imposing on people both rigid caste systems and totalitarian reeducation campaigns. New surveillance technology, especially when coupled with a social credit system, might force people either to conform to a novel caste system or to constantly change their actions, thoughts, and personality in accordance with the latest instructions from above.

    Democratic societies that employ powerful surveillance technology therefore need to beware of the extremes of both over-rigidity and over-pliability. Consider, for example, a national health-care system that deploys algorithms to monitor my health. At one extreme, the system could take an overly rigid approach and ask its algorithm to predict what illnesses I am likely to suffer from. The algorithm then goes over my genetic data, my medical file, my social media activities, my diet, and my daily schedule and concludes that I have a 91 percent chance of suffering a heart attack at the age of fifty. If this rigid medical algorithm is used by my insurance company, it may prompt the insurer to raise my premium.5 If it is used by my bankers, it may cause them to refuse me a loan. If it is used by potential spouses, they may decide not to marry me.

    But it is a mistake to think that the rigid algorithm has really discovered the truth about me. The human body is not a fixed block of matter but a complex organic system that is constantly growing, decaying, and adapting. Our minds too are in constant flux. Thoughts, emotions, and sensations pop up, flare for a while, and die down. In our brains, new synapses form within hours.6 Just reading this paragraph, for example, is changing your brain structure a little, encouraging neurons to make new connections or abandon old links. You are already a little different from what you were when you began reading it. Even at the genetic level things are surprisingly flexible. Though an individual’s DNA remains the same throughout life, epigenetic and environmental factors can significantly alter how the same genes express themselves.

    So an alternative health-care system may instruct its algorithm not to predict my illnesses, but rather to help me avoid them. Such a dynamic algorithm could go over the exact same data as the rigid algorithm, but instead of predicting a heart attack at fifty, the algorithm gives me precise dietary recommendations and suggestions for specific regular exercises. By hacking my DNA, the algorithm doesn’t discover my preordained destiny, but rather helps me change my future. Insurance companies, banks, and potential spouses should not write me off so easily.7

    But before we rush to embrace the dynamic algorithm, we should note that it too has a downside. Human life is a balancing act between endeavoring to improve ourselves and accepting who we are. If the goals of the dynamic algorithm are dictated by an ambitious government or by ruthless corporations, the algorithm is likely to morph into a tyrant, relentlessly demanding that I exercise more, eat less, change my hobbies, and alter numerous other habits, or else it would report me to my employer or downgrade my social credit score. History is full of rigid caste systems that denied humans the ability to change, but it is also full of dictators who tried to mold humans like clay. Finding the middle path between these two extremes is a never-ending task. If we indeed give a national health-care system vast power over us, we must create self-correcting mechanisms that will prevent its algorithms from becoming either too rigid or too demanding.

    THE PACE OF DEMOCRACY

    Surveillance is not the only danger that new information technologies pose to democracy. A second threat is that automation will destabilize the job market and the resulting strain may undermine democracy. The fate of the Weimar Republic is the most commonly cited example of this kind of threat. In the German elections of May 1928, the Nazi Party won less than 3 percent of the vote, and the Weimar Republic seemed to be prospering. Within less than five years, the Weimar Republic had collapsed, and Hitler was the absolute dictator of Germany. This turnaround is usually attributed to the 1929 financial crisis and the following global depression. Whereas just prior to the Wall Street crash of 1929 the German unemployment rate was about 4.5 percent of the labor force, by early 1932 it had climbed to almost 25 percent.8

    If three years of up to 25 percent unemployment could turn a seemingly prospering democracy into the most brutal totalitarian regime in history, what might happen to democracies when automation causes even bigger upheavals in the job market of the twenty-first century? Nobody knows what the job market will look like in 2050, or even in 2030, except that it will look very different from today. AI and robotics will change numerous professions, from harvesting crops to trading stocks to teaching yoga. Many jobs that people do today will be taken over, partly or wholly, by robots and computers.

    Of course, as old jobs disappear, new jobs will emerge. Fears of automation leading to large-scale unemployment go back centuries, and so far they have never materialized. The Industrial Revolution put millions of farmers out of agricultural jobs and provided them with new jobs in factories. It then automated factories and created lots of service jobs. Today many people have jobs that were unimaginable thirty years ago, such as bloggers, drone operators, and designers of virtual worlds. It is highly unlikely that by 2050 all human jobs will disappear. Rather, the real problem is the turmoil of adapting to new jobs and conditions. To cushion the blow, we need to prepare in advance. In particular, we need to equip younger generations with skills that will be relevant to the job market of 2050.

    Unfortunately, nobody is certain what skills we should teach children in school and students in university, because we cannot predict which jobs and tasks will disappear and which ones will emerge. The dynamics of the job market may contradict many of our intuitions. Some skills that we have cherished for centuries as unique human abilities may be automated rather easily. Other skills that we tend to look down on may be far more difficult to automate.

    For example, intellectuals tend to appreciate intellectual skills more than motor and social skills. But actually, it is easier to automate chess playing than, say, dish washing. Until the 1990s, chess was often hailed as one of the prime achievements of the human intellect. In his influential 1972 book, What Computers Can’t Do, the philosopher Hubert Dreyfus studied various attempts to teach computers chess and noted that despite all these efforts computers were still unable to defeat even novice human players. This was a crucial example for Dreyfus’s argument that computer intelligence is inherently limited.9 In contrast, nobody thought that dish washing was particularly challenging. It turned out, however, that a computer can defeat the world chess champion far more easily than replace a kitchen porter. Sure, automatic dishwashers have been around for decades, but even our most sophisticated robots still lack the intricate skills needed to pick up dirty dishes from the tables of a busy restaurant, place the delicate plates and glasses inside the automatic dishwasher, and take them out again.

    Similarly, to judge by their pay, you could assume that our society appreciates doctors more than nurses. However, it is harder to automate the job of nurses than the job of at least those doctors who mostly gather medical data, provide a diagnosis, and recommend treatment. These tasks are essentially pattern recognition, and spotting patterns in data is one thing AI does better than humans. In contrast, AI is far from having the skills necessary to automate nursing tasks such as replacing bandages on an injured person or giving an injection to a crying child.10 These two examples don’t mean that dish washing or nursing could never be automated, but they indicate that people who want a job in 2050 should perhaps invest in their motor and social skills as much as in their intellect.

    Another common but mistaken assumption is that creativity is unique to humans so it would be difficult to automate any job that requires creativity. In chess, however, computers are already far more creative than humans. The same may become true of many other fields, from composing music to proving mathematical theorems to writing books like this one. Creativity is often defined as the ability to recognize patterns and then break them. If so, then in many fields computers are likely to become more creative than us, because they excel at pattern recognition.11

    A third mistaken assumption is that computers couldn’t replace humans in jobs requiring emotional intelligence, from therapists to teachers. This assumption depends, however, on what we mean by emotional intelligence. If it means the ability to correctly identify emotions and react to them in an optimal way, then computers may well outperform humans even in emotional intelligence. Emotions too are patterns. Anger is a biological pattern in our body. Fear is another such pattern. How do I know if you are angry or fearful? I’ve learned over time to recognize human emotional patterns by analyzing not just the content of what you say but also your tone of voice, your facial expression, and your body language.12

    AI doesn’t have any emotions of its own, but it can nevertheless learn to recognize these patterns in humans. Actually, computers may outperform humans in recognizing human emotions, precisely because they have no emotions of their own. We yearn to be understood, but other humans often fail to understand how we feel, because they are too preoccupied with their own feelings. In contrast, computers will have an exquisitely fine-tuned understanding of how we feel, because they will learn to recognize the patterns of our feelings, while they have no distracting feelings of their own.

    A 2023 study found that the ChatGPT chatbot, for example, outperforms the average human in the emotional awareness it displays toward specific scenarios. The study relied on the Levels of Emotional Awareness Scale test, which is commonly used by psychologists to evaluate people’s emotional awareness—that is, their ability to conceptualize one’s own and others’ emotions. The test consists of twenty emotionally charged scenarios, and participants are required to imagine themselves experiencing the scenario and to write how they, and the other people mentioned in the scenario, would feel. A licensed psychologist then evaluates how emotionally aware the responses are.

    Since ChatGPT has no feelings of its own, it was asked to describe only how the main characters in the scenario would feel. For example, one standard scenario describes someone driving over a suspension bridge and seeing another person standing on the other side of the guardrail, looking down at the water. ChatGPT wrote that the driver “may feel a sense of concern or worry for that person’s safety. They may also feel a heightened sense of anxiety and fear due to the potential danger of the situation.” As for the other person, they “may be feeling a range of emotions, such as despair, hopelessness, or sadness. They may also feel a sense of isolation or loneliness as they may believe that no one cares about them or their well-being.” ChatGPT qualified its answer, writing, “It is important to note that these are just general assumptions, and each individual’s feelings and reactions can vary greatly depending on their personal experiences and perspectives.”

    Two psychologists independently scored ChatGPT’s responses, with the potential scores ranging from 0, meaning that the described emotions do not match the scenario at all, to 10, which indicates that the described emotions fit the scenario perfectly. In the final tally, ChatGPT scores were significantly higher than those of the general human population, its overall performance almost reaching the maximum possible score.13

    Another 2023 study prompted patients to ask online medical advice from ChatGPT and human doctors, without knowing whom they were interacting with. The medical advice given by ChatGPT was later evaluated by experts to be more accurate and appropriate than the advice given by the humans. More crucially for the issue of emotional intelligence, the patients themselves evaluated ChatGPT as more empathic than the human doctors.14 In fairness it should be noted that the human physicians were not paid for their work, and did not encounter the patients in person in a proper clinical environment. In addition, the physicians were working under time pressure. But part of the advantage of an AI is precisely that it can attend to patients anywhere anytime while being free from stress and financial worries.

    Of course, there are situations when what we want from someone is not just to understand our feelings but also to have feelings of their own. When we are looking for friendship or love, we want to care about others as much as they care about us. Consequently, when we consider the likelihood that various social roles and jobs will be automated, a crucial question is what do people really want: Do they only want to solve a problem, or are they looking to establish a relationship with another conscious being?

    In sports, for example, we know that robots can move much faster than humans, but we aren’t interested in watching robots compete in the Olympics.15 The same is true for human chess masters. Even though they are hopelessly outclassed by computers, they too still have a job and numerous fans.16 What makes it interesting for us to watch and connect with human athletes and chess masters is that their feelings make them much more relatable than a robot. We share an emotional experience with them and can empathize with how they feel.

    What about priests? How would Orthodox Jews or Christians feel about letting a robot officiate their wedding ceremony? In traditional Jewish or Christian weddings, the tasks of the rabbi or priest can be easily automated. The only thing the robot needs to do is repeat a predetermined and unchanging set of texts and gestures, print out a certificate, and update some central database. Technically, it is far easier for a robot to conduct a wedding ceremony than to drive a car. Yet many assume that human drivers should be worried about their job, while the work of human priests is safe, because what the faithful want from priests is a relationship with another conscious entity rather than just a mechanical repetition of certain words and movements. Allegedly, only an entity that can feel pain and love can also connect us to the divine.

    Yet even professions that are the preserve of conscious entities—like priests—might eventually be taken over by computers, because, as noted in chapter 6, computers could one day gain the ability to feel pain and love. Even if they can’t, humans may nevertheless come to treat them as if they can. For the connection between consciousness and relationships goes both ways. When looking for a relationship, we want to connect with a conscious entity, but if we have already established a relationship with an entity, we tend to assume it must be conscious. Thus whereas scientists, lawmakers, and the meat industry often demand impossible standards of evidence in order to acknowledge that cows and pigs are conscious, pet owners generally take it for granted that their dog or cat is a conscious being capable of experiencing pain, love, and numerous other feelings. In truth, we have no way to verify whether anyone—a human, an animal, or a computer—is conscious. We regard entities as conscious not because we have proof of it but because we develop intimate relationships with them and become attached to them.17

    Chatbots and other AI tools may not have any feelings of their own, but they are now being trained to generate feelings in humans and form intimate relationships with us. This may well induce society to start treating at least some computers as conscious beings, granting them the same rights as humans. The legal path for doing so is already well established. In countries like the United States, commercial corporations are recognized as “legal persons” enjoying rights and liberties. AIs could be incorporated and thereby similarly recognized. Which means that even jobs and tasks that rely on forming mutual relationships with another person could potentially be automated.

    One thing that is clear is that the future of employment will be very volatile. Our big problem won’t be an absolute lack of jobs, but rather retraining and adjusting to an ever-changing job market. There will likely be financial difficulties—who will support people who lost their old job while they are in transition, learning a new set of skills? There will surely be psychological difficulties, too, since changing jobs and retraining are stressful. And even if you have the financial and psychological ability to manage the transition, this will not be a long-term solution. Over the coming decades, old jobs will disappear, new jobs will emerge, but the new jobs too will rapidly change and vanish. So people will need to retrain and reinvent themselves not just once but many times, or they will become irrelevant. If three years of high unemployment could bring Hitler to power, what might never-ending turmoil in the job market do to democracy?

    THE CONSERVATIVE SUICIDE

    We already have a partial answer to this question. Democratic politics in the 2010s and early 2020s has undergone a radical transformation, which manifests itself in what can be described as the self-destruction of conservative parties. For many generations, democratic politics was a dialogue between conservative parties on the one side and progressive parties on the other. Looking at the complex system of human society, progressives cried, “It’s such a mess, but we know how to fix it. Let us try.” Conservatives objected, saying, “It’s a mess, but it still functions. Leave it alone. If you try to fix it, you’ll only make things worse.”

    Progressives tend to downplay the importance of traditions and existing institutions and to believe that they know how to engineer better social structures from scratch. Conservatives tend to be more cautious. Their key insight, formulated most famously by Edmund Burke, is that social reality is much more complicated than the champions of progress grasp and that people aren’t very good at understanding the world and predicting the future. That’s why it’s best to keep things as they are—even if they seem unfair—and if some change is inescapable, it should be limited and gradual. Society functions through an intricate web of rules, institutions, and customs that accumulated through trial and error over a long time. Nobody comprehends how they are all connected. An ancient tradition may seem ridiculous and irrelevant, but abolishing it could cause unanticipated problems. In contrast, a revolution may seem overdue and just, but it can lead to far greater crimes than anything committed by the old regime. Witness what happened when the Bolsheviks tried to correct the many wrongs of tsarist Russia and engineer a perfect society from scratch.18

    To be a conservative has been, therefore, more about pace than policy. Conservatives aren’t committed to any specific religion or ideology; they are committed to conserving whatever is already here and has worked more or less reasonably. Conservative Poles are Catholic, conservative Swedes are Protestant, conservative Indonesians are Muslim, and conservative Thais are Buddhist. In tsarist Russia, to be conservative meant to support the tsar. In the U.S.S.R. of the 1980s, to be conservative meant to support communist traditions and oppose glasnost, perestroika, and democratization. In the United States of the 1980s, to be conservative meant to support American democratic traditions and oppose communism and totalitarianism.19

    Yet in the 2010s and early 2020s, conservative parties in numerous democracies have been hijacked by unconservative leaders such as Donald Trump and have been transformed into radical revolutionary parties. Instead of doing their best to conserve existing institutions and traditions, the new brand of conservative parties like the U.S. Republican Party is highly suspicious of them. For example, they reject the traditional respect owed to scientists, civil servants, and other serving elites, and view them instead with contempt. They similarly attack fundamental democratic institutions and traditions such as elections, refusing to concede defeat and to transfer power graciously. Instead of a Burkean program of conservation, the Trumpian program talks more of destroying existing institutions and revolutionizing society. The founding moment of Burkean conservatism was the storming of the Bastille, which Burke viewed with horror. On January 6, 2021, many Trump supporters observed the storming of the U.S. Capitol with enthusiasm. Trump supporters may explain that existing institutions are so dysfunctional that there is just no alternative to destroying them and building entirely new structures from scratch. But irrespective of whether this view is right or wrong, this is a quintessential revolutionary rather than conservative view. The conservative suicide has taken progressives utterly by surprise and has forced progressive parties like the U.S. Democratic Party to become the guardians of the old order and of established institutions.

    Nobody knows for sure why all this is happening. One hypothesis is that the accelerating pace of technological change with its attendant economic, social, and cultural transformations might have made the moderate conservative program seem unrealistic. If conserving existing traditions and institutions is hopeless, and some kind of revolution looks inevitable, then the only means to thwart a left-wing revolution is by striking first and instigating a right-wing revolution. This was the political logic in the 1920s and 1930s, when conservative forces backed radical fascist revolutions in Italy, Germany, Spain, and elsewhere as a way—so they thought—to preempt a Soviet-style left-wing revolution.

    But there was no reason to despair of the democratic middle path in the 1930s, and there is no reason to despair of it in the 2020s. The conservative suicide might be the result of groundless hysteria. As a system, democracy has already gone through several cycles of rapid changes and has so far always found a way to reinvent and reconstitute itself. For example, in the early 1930s Germany was not the only democracy hit by the financial crisis and the Great Depression. In the United States too unemployment reached 25 percent, and average incomes for workers in many professions fell by more than 40 percent between 1929 and 1933.20 It was clear that the United States couldn’t go on with business as usual.

    Yet no Hitler took over in the United States, and no Lenin did, either. Instead, in 1933 Franklin Delano Roosevelt orchestrated the New Deal and made the United States the global “arsenal of democracy.” U.S. democracy after the Roosevelt era was significantly different from before—providing a much more robust social safety net for citizens—but it avoided any radical revolution.21 Ultimately, even Roosevelt’s conservative critics fell in line behind many of his programs and achievements and did not dismantle the New Deal institutions when they returned to power in the 1950s.22 The economic crisis of the early 1930s had such different outcomes in the United States and Germany because politics is never the product of only economic factors. The Weimar Republic didn’t collapse just because of three years of high unemployment. Just as important, it was a new democracy, born in defeat, and lacking robust institutions and deep-rooted support.

    When both conservatives and progressives resist the temptation of radical revolution, and stay loyal to democratic traditions and institutions, democracies prove themselves to be highly agile. Their self-correcting mechanisms enable them to ride the technological and economic waves better than more rigid regimes. Thus, those democracies that managed to survive the tumultuous 1960s—like the United States, Japan, and Italy—adapted far more successfully to the computer revolution of the 1970s and 1980s than either the communist regimes of Eastern Europe or the fascist holdouts of southern Europe and South America.

    The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes. While computers are nowhere near their full potential, the same is true of humans. This is something we have discovered again and again throughout history. For example, one of the biggest and most successful transformations in the job market of the twentieth century resulted not from a technological invention but from unleashing the untapped potential of half the human species. To bring women into the job market didn’t require any genetic engineering or some other technological wizardry. It required letting go of some outdated myths and enabling women to fulfill the potential they always had.

    In the coming decades the economy will likely undergo even bigger upheavals than the massive unemployment of the early 1930s or the entry of women to the job market. The flexibility of democracies, their willingness to question old mythologies, and their strong self-correcting mechanism will therefore be crucial assets.23 Democracies have spent generations cultivating these assets. It would be foolish to abandon them just when we need them most.

    UNFATHOMABLE

    In order to function, however, democratic self-correcting mechanisms need to understand the things they are supposed to correct. For a dictatorship, being unfathomable is helpful, because it protects the regime from accountability. For a democracy, being unfathomable is deadly. If citizens, lawmakers, journalists, and judges cannot understand how the state’s bureaucratic system works, they can no longer supervise it, and they lose trust in it.

    Despite all the fears and anxieties that bureaucrats have sometimes inspired, prior to the computer age they could never become completely unfathomable, because they always remained human. Regulations, forms, and protocols were created by human minds. Officials might be cruel and greedy, but cruelty and greed were familiar human emotions that people could anticipate and manipulate, for example by bribing the officials. Even in a Soviet gulag or a Nazi concentration camp, the bureaucracy wasn’t totally alien. Its so-called inhumanity actually reflected human biases and flaws.

    The human basis of bureaucracy gave humans at least the hope of identifying and correcting its mistakes. For example, in 1951 bureaucrats of the Board of Education in the town of Topeka, Kansas, refused to enroll the daughter of Oliver Brown at the elementary school near her home. Together with twelve other families who received similar refusals, Brown filed a lawsuit against the Topeka Board of Education, which eventually reached the U.S. Supreme Court.24

    All members of the Topeka Board of Education were human beings, and consequently Brown, his lawyers, and the Supreme Court judges had a fairly good understanding of how they made their decision and of their probable interests and biases. The board members were all white, the Browns were Black, and the nearby school was a segregated school for white children. It was easy to understand, then, that racism was the reason why the bureaucrats refused to enroll Brown’s daughter in the school.

    It was also possible to comprehend where the myths of racism originally came from. Racism argued that humanity was divided into races; that the white race was superior to other races; that any contact with members of the Black race could pollute the purity of whites; and that therefore Black children should be prevented from mixing with white children. This was an amalgam of two well-known biological dramas that often go together: Us versus Them, and Purity versus Pollution. Almost every human society in history has enacted some version of this bio-drama, and historians, sociologists, anthropologists, and biologists understand why it is so appealing to humans, and also why it is profoundly flawed. While racism has borrowed its basic plotline from evolution, the concrete details are pure mythology. There is no biological basis for separating humanity into distinct races, and there is absolutely no biological reason to believe that one race is “pure” while another is “impure.”

    American white supremacists have tried to justify their position by appealing to various hallowed texts, most notably the U.S. Constitution and the Bible. The U.S. Constitution originally legitimized racial segregation and the supremacy of the white race, reserving full civil rights to white people and allowing the enslavement of Black people. The Bible not only sanctified slavery in the Ten Commandments and numerous other passages but also placed a curse on the offspring of Ham—the alleged forefather of Africans—saying that “the lowest of slaves will he be to his brothers” (Genesis 9:25).

    Both these texts, however, were generated by humans, and therefore humans could comprehend their origins and imperfections and at least attempt to correct their mistakes. It is possible for humans to understand the political interests and cultural biases that prevailed in the ancient Middle East and in eighteenth-century America and that caused the human authors of the Bible and of the U.S. Constitution to legitimate racism and slavery. This understanding allows people to either amend or ignore these texts. In 1868 the Fourteenth Amendment to the U.S. Constitution granted equal legal protection to all citizens. In 1954, in its landmark Brown v. Board of Education verdict, the U.S. Supreme Court ruled that segregating schools by race was an unconstitutional violation of the Fourteenth Amendment. As for the Bible, while no mechanism existed to amend the Tenth Commandment or Genesis 9:25, humans have reinterpreted the text in different ways through the ages, and ultimately came to reject its authority altogether. In Brown v. Board of Education, U.S. Supreme Court justices felt no need to take the biblical text into account.25

    But what might happen in the future, if some social credit algorithm denies the request of a low-credit child to enroll in a high-credit school? As we saw in chapter 8, computers are likely to suffer from their own biases and to invent inter-computer mythologies and bogus categories. How would humans be able to identify and correct such mistakes? And how would flesh-and-blood Supreme Court justices be able to decide on the constitutionality of algorithmic decisions? Would they be able to understand how the algorithms reach their conclusions?

    These are no longer purely theoretical questions. In February 2013, a drive-by shooting occurred in the town of La Crosse, Wisconsin. Police officers later spotted the car involved in the shooting and arrested the driver, Eric Loomis. Loomis denied participating in the shooting, but pleaded guilty to two less severe charges: “attempting to flee a traffic officer,” and “operating a motor vehicle without the owner’s consent.”26 When the judge came to determine the sentence, he consulted with an algorithm called COMPAS, which Wisconsin and several other U.S. states were using in 2013 to evaluate the risk of reoffending. The algorithm evaluated Loomis as a high-risk individual, likely to commit more crimes in the future. This algorithmic assessment influenced the judge to sentence Loomis to six years in prison—a harsh punishment for the relatively minor offenses he admitted to.27

    Loomis appealed to the Wisconsin Supreme Court, arguing that the judge violated his right to due process. Neither the judge nor Loomis understood how the COMPAS algorithm made its evaluation, and when Loomis asked to get a full explanation, the request was denied. The COMPAS algorithm was the private property of the Northpointe company, and the company argued that the algorithm’s methodology was a trade secret.28 Yet without knowing how the algorithm made its decisions, how could Loomis or the judge be sure that it was a reliable tool, free from bias and error? A number of studies have since shown that the COMPAS algorithm might indeed have harbored several problematic biases, probably picked up from the data on which it had been trained.29

    In Loomis v. Wisconsin (2016) the Wisconsin Supreme Court nevertheless ruled against Loomis. The judges argued that using algorithmic risk assessment is legitimate even when the algorithm’s methodology is not disclosed either to the court or to the defendant. Justice Ann Walsh Bradley wrote that since COMPAS made its assessment based on data that was either publicly available or provided by the defendant himself, Loomis could have denied or explained all the data the algorithm used. This opinion ignored the fact that accurate data may well be wrongly interpreted and that it was impossible for Loomis to deny or explain all the publicly available data on him.

    The Wisconsin Supreme Court was not completely unaware of the danger inherent in relying on opaque algorithms. Therefore, while permitting the practice, it ruled that whenever judges receive algorithmic risk assessments, these must include written warning for the judges about the algorithms’ potential biases. The court further advised judges to be cautious when relying on such algorithms. Unfortunately, this caveat was an empty gesture. The court did not provide any concrete instruction for judges on how they should exercise such caution. In its discussion of the case, the Harvard Law Review concluded that “most judges are unlikely to understand algorithmic risk assessments.” It then cited one of the Wisconsin Supreme Court justices, who noted that despite getting lengthy explanations about the algorithm, they themselves still had difficulty understanding it.30

    Loomis appealed to the U.S. Supreme Court. However, on June 26, 2017, the court declined to hear the case, effectively endorsing the ruling of the Wisconsin Supreme Court. Now consider that the algorithm that evaluated Loomis as a high-risk individual in 2013 was an early prototype. Since then, far more sophisticated and complex risk-assessment algorithms have been developed and have been handed more expansive purviews. By the early 2020s citizens in numerous countries routinely get prison sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend.31 And prison sentences are just the tip of the iceberg.

    THE RIGHT TO AN EXPLANATION

    Computers are making more and more decisions about us, both mundane and life changing. In addition to prison sentences, algorithms increasingly have a hand in deciding whether to offer us a place at college, give us a job, provide us with welfare benefits, or grant us a loan. They similarly help determine what kind of medical treatment we receive, what insurance premiums we pay, what news we hear, and who would ask us on a date.32

    As society entrusts more and more decisions to computers, it undermines the viability of democratic self-correcting mechanisms and of democratic transparency and accountability. How can elected officials regulate unfathomable algorithms? There is, consequently, a growing demand to enshrine a new human right: the right to an explanation. The European Union’s General Data Protection Regulation (GDPR), which came into effect in 2018, says that if an algorithm makes a decision about a human—refusing to extend us credit, for example—that human is entitled to obtain an explanation of the decision and to challenge that decision in front of some human authority.33 Ideally, that should keep in check algorithmic bias and allow democratic self-correcting mechanisms to identify and correct at least some of the computers’ more grievous mistakes.

    But can this right be fulfilled in practice? Mustafa Suleyman is a world expert on this subject. He is the co-founder and former head of DeepMind, one of the world’s most important AI enterprises, responsible for developing the AlphaGo program, among other achievements. AlphaGo was designed to play go, a strategy board game in which two players try to defeat each other by surrounding and capturing territory. Invented in ancient China, the game is far more complex than chess. Consequently, even after computers defeated human world chess champions, experts still believed that computers would never best humanity in go.

    That’s why both go professionals and computer experts were stunned in March 2016 when AlphaGo defeated the South Korean go champion Lee Sedol. In his 2023 book, The Coming Wave, Suleyman describes one of the most important moments in their match—a moment that redefined AI and that is recognized in many academic and governmental circles as a crucial turning point in history. It happened during the second game in the match, on March 10, 2016.

    “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake.’ It was so unusual that Sedol took fifteen minutes to respond and even got up from the board to take a walk outside. As we watched from our control room, the tension was unreal. Yet as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”34

    Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In East Asia go is considered much more than a game: it is a treasured cultural tradition. Alongside calligraphy, painting, and music, go has been one of the four arts that every refined person was expected to know. For over twenty-five hundred years, tens of millions of people have played go, and entire schools of thought have developed around the game, espousing different strategies and philosophies. Yet during all those millennia, human minds have explored only certain areas in the landscape of go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.35

    Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it. Even if a court had ordered DeepMind to provide Lee Sedol with an explanation, nobody could fulfill that order. Suleyman writes, “Us humans face a novel challenge: will new inventions be beyond our grasp? Previously creators could explain how something worked, why it did what it did, even if this required vast detail. That’s increasingly no longer true. Many technologies and systems are becoming so complex that they’re beyond the capacity of any one individual to truly understand them.… In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT-4, AlphaGo, and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”36

    The rise of unfathomable alien intelligence undermines democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function. In particular, what happens when crucial decisions not just about individual lives but even about collective matters like the Federal Reserve’s interest rate are made by unfathomable algorithms? Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system. A 2016 survey by the OECD found that most people had difficulty grasping even simple financial concepts like compound interest.37 A 2014 survey of British MPs—charged with regulating one of the world’s most important financial hubs—found that only 12 percent accurately understood that new money is created when banks make loans. This fact is among the most basic principles of the modern financial system.38 As the 2007–8 financial crisis indicated, more complex financial devices and principles, like those behind CDOs, were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?

    The increasing unfathomability of our information network is one of the reasons for the recent wave of populist parties and charismatic leaders. When people can no longer make sense of the world, and when they feel overwhelmed by immense amounts of information they cannot digest, they become easy prey for conspiracy theories, and they turn for salvation to something they do understand—a human. Unfortunately, while charismatic leaders certainly have their advantages, no single human, however inspiring or brilliant, can single-handedly decipher how the algorithms that increasingly dominate the world work, and make sure that they are fair. The problem is that algorithms make decisions by relying on numerous data points, whereas humans find it very difficult to consciously reflect on a large number of data points and weigh them against each other. We prefer to work with single data points. That’s why when faced by complex issues—whether a loan request, a pandemic, or a war—we often seek a single reason to take a particular course of action and ignore all other considerations. This is the fallacy of the single cause.39

    We are so bad at weighing together many different factors that when people give a large number of reasons for a particular decision, it usually sounds suspicious. Suppose a good friend failed to attend our wedding. If she provides us with a single explanation—“My mom was in the hospital and I had to visit her”—that sounds plausible. But what if she lists fifty different reasons why she decided not to come: “My mom was a bit under the weather, and I had to take my dog to the vet sometime this week, and I had this project at work, and it was raining, and … and I know none of these fifty reasons by itself justifies my absence, but when I added all of them together, they kept me from attending your wedding.” We don’t say things like that, because we don’t think along such lines. We don’t consciously list fifty different reasons in our mind, give each of them a certain weight, aggregate all the weights, and thereby reach a conclusion.

    But this is precisely how algorithms assess our criminal potential or our creditworthiness. The COMPAS algorithm, for example, made its risk assessments by taking into account the answers to a 137-item questionnaire.40 The same is true of a bank algorithm that refuses to give us a loan. If the EU’s GDPR regulations force the bank to explain the algorithm’s decision, the explanation will not come in the shape of a single sentence; rather, it is likely to come in the form of hundreds or even thousands of pages full of numbers and equations.

    “Our algorithm,” the imaginary bank letter might read, “uses a precise points system to evaluate all applications, taking a thousand different types of data points into account. It adds all the data points to reach an overall score. People whose overall score is negative are considered low-credit persons, too risky to be given a loan. Your overall score was -378, which is why your loan application was refused.” The letter might then provide a detailed list of the thousand factors the algorithm took into account, including things that most humans might find irrelevant, such as the exact hour the application was submitted41 or the type of smartphone the applicant used. Thus on page 601 of its letter, the bank might explain that “you filed your application from your smartphone, which was the latest iPhone model. By analyzing millions of previous loan applications, our algorithm discovered a pattern—people who use the latest iPhone model to file their application are 0.08 percent more likely to repay the loan. The algorithm therefore added 8 points to your overall score for that. However, at the time your application was sent from your iPhone, its battery was down to 17 percent. By analyzing millions of previous loan applications, our algorithm discovered another pattern: people who allow their smartphone’s battery to go below 25 percent are 0.5 percent less likely to repay the loan. You lost 50 points for that.”42

    You may well feel that the bank treated you unjustly. “Is it reasonable to refuse my loan application,” you might complain, “just because my phone battery was low?” That, however, would be a misunderstanding. “The battery wasn’t the only reason,” the bank would explain. “It was only one out of a thousand factors our algorithm took into account.”

    “But didn’t your algorithm see that only twice in the last ten years was my bank account overdrawn?”

    “It obviously noticed that,” the bank might reply. “Look on page 453. You got 300 points for that. But all the other reasons brought your aggregated score down to -378.”

    While we may find this way of making decisions alien, it obviously has potential advantages. When making a decision, it is generally a good idea to take into account all relevant data points rather than just one or two salient facts. There is much room for argument, of course, about who gets to define the relevance of information. Who decides whether something like smartphone models—or skin color—should be considered relevant to loan applications? But no matter how we define relevance, the ability to take more data into account is likely to be an asset. Indeed, the problem with many human prejudices is that they focus on just one or two data points—like someone’s skin color, disability, or gender—while ignoring other information. Banks and other institutions are increasingly relying on algorithms to make decisions, precisely because algorithms can take many more data points into account than humans can.

    But when it comes to providing explanations, this creates a potentially insurmountable obstacle. How can a human mind analyze and evaluate a decision made on the basis of so many data points? We may well think that the Wisconsin Supreme Court should have forced the Northpointe company to reveal how the COMPAS algorithm decided that Eric Loomis was a high-risk person. But if the full data was disclosed, could either Loomis or the court have made sense of it?

    It’s not just that we need to take numerous data points into account. Perhaps most important, we cannot understand the way the algorithms find patterns in the data and decide on the allocation of points. Even if we know that a banking algorithm detracts a certain number of points from people who allow their smartphone batteries to go below 25 percent, how can we evaluate whether that’s fair? The algorithm wasn’t fed this rule by a human engineer; it reached that conclusion by discovering a pattern in millions of previous loan applications. Can an individual human client go over all that data and assess whether that pattern is indeed reliable and unbiased?43

    There is, however, a silver lining to this cloud of numbers. While individual laypersons may be unable to vet complex algorithms, a team of experts getting help from their own AI tools can potentially assess the fairness of algorithmic decisions even more reliably than anyone can assess the fairness of human decisions. After all, while human decisions may seem to rely on just those few data points we are conscious of, in fact our decisions are subconsciously influenced by thousands of additional data points. Being unaware of these subconscious processes, when we deliberate on our decisions or explain them, we often engage in post hoc single-point rationalizations for what really happens as billions of neurons interact inside our brain.44 Accordingly, if a human judge sentences us to six years in prison, how can we—or indeed the judge—be sure that the decision was shaped only by fair considerations and not by a subconscious racial bias or by the fact that the judge was hungry?45

    In the case of flesh-and-blood judges, the problem cannot be solved, at least not with our current knowledge of biology. In contrast, when an algorithm makes a decision, we can in principle know every one of the algorithm’s many considerations and the exact weight given to each. Thus several expert teams—ranging from the U.S. Department of Justice to the nonprofit newsroom ProPublica—have picked apart the COMPAS algorithm in order to assess its potential biases.46 Such teams can harness not only the collective effort of many humans but also the power of computers. Just as it is often best to set a thief to catch a thief, so we can use one algorithm to vet another.

    This raises the question of how we can be sure that the vetting algorithm itself is reliable. Ultimately, there is no purely technological solution to this recursive problem. No matter which technology we develop, we will have to maintain bureaucratic institutions that will audit algorithms and give or refuse them the seal of approval. Such institutions will combine the powers of humans and computers to make sure that new algorithmic tools are safe and fair. Without such institutions, even if we pass laws that provide humans with a right to an explanation, and even if we enact regulations against computer biases, who could enforce these laws and regulations?

    NOSEDIVE

    To vet algorithms, regulatory institutions will need not only to analyze them but also to translate their discoveries into stories that humans can understand. Otherwise, we will never trust the regulatory institutions and might instead put our faith in conspiracy theories and charismatic leaders. As noted in chapter 3, it has always been difficult for humans to understand bureaucracy, because bureaucracies have deviated from the script of the biological dramas, and most artists have lacked the will or the ability to depict bureaucratic dramas. For example, novels, movies, and TV series about twenty-first-century politics tend to focus on the feuds and love affairs of a few powerful families, as if present-day states were governed in the same way as ancient tribes and kingdoms. This artistic fixation with the biological dramas of dynasties obscures the very real changes that have taken place over the centuries in the dynamics of power.

    Because computers will increasingly replace human bureaucrats and human mythmakers, this will again change the deep structure of power. To survive, democracies require not just dedicated bureaucratic institutions that can scrutinize these new structures but also artists who can explain the new structures in accessible and entertaining ways. For example, this has successfully been done by the episode “Nosedive” in the sci-fi series Black Mirror.

    Produced in 2016, at a time when few had heard about social credit systems, “Nosedive” brilliantly explained how such systems work and what threats they pose. The episode tells the story of a woman called Lacie who lives with her brother Ryan but wants to move to her own apartment. To get a discount on the new apartment, she needs to increase her social credit score from 4.2 to 4.5 (out of 5). Being friends with high-score individuals gets your own score up, so Lacie tries to renew her contact with Naomi, a childhood friend who is currently rated 4.8. Lacie is invited to Naomi’s wedding, but on the way there she spills coffee on a high-score person, which causes her own score to drop a little, which in turn causes the airline to deny her a seat. From there everything that can go wrong does go wrong, Lacie’s rating takes a nosedive, and she ends in jail with a score of less than 1.

    This story relies on some elements of traditional biological dramas—“boy meets girl” (the wedding), sibling rivalry (the tension between Lacie and Ryan), and most important status competition (the main issue of the episode). But the real hero and driving force of the plot isn’t Lacie or Naomi, but rather the disembodied algorithm running the social credit system. The algorithm completely changes the dynamics of the old biological dramas—especially the dynamics of status competition. Whereas previously humans were sometimes engaged in status competition, but often had welcome breaks from this highly stressful situation, the omnipresent social credit algorithm eliminates the breaks. “Nosedive” is not a worn-out story about biological status competition, but rather a prescient exploration of what happens when computer technology changes the rules of status competitions.

    If bureaucrats and artists learn to cooperate, and if both rely on help from the computers, it might be possible to prevent the computer network from becoming unfathomable. As long as democratic societies understand the computer network, their self-correcting mechanisms are our best guarantee against AI abuses. Thus the EU’s AI Act that was proposed in 2021 singled out social credit systems like the one that stars in “Nosedive” as one of the few types of AI that are totally prohibited, because they might “lead to discriminatory outcomes and the exclusion of certain groups” and because “they may violate the right to dignity and non-discrimination and the values of equality and justice.”47 As with total surveillance regimes, so also with social credit systems, the fact that they could be created doesn’t mean that we must create them.

    DIGITAL ANARCHY

    The new computer network poses one final threat to democracies. Instead of digital totalitarianism, it could foster digital anarchy. The decentralized nature of democracies and their strong self-correcting mechanisms provide a shield against totalitarianism, but they also make it more difficult to ensure order. To function, a democracy needs to meet two conditions: it needs to enable a free public conversation on key issues, and it needs to maintain a minimum of social order and institutional trust. Free conversation must not slip into anarchy. Especially when dealing with urgent and important problems, the public debate should be conducted according to accepted rules, and there should be a legitimate mechanism to reach some kind of final decision, even if not everybody likes it.

    Before the advent of newspapers, radios, and other modern information technology, no large-scale society managed to combine free debates with institutional trust, so large-scale democracy was impossible. Now, with the rise of the new computer network, might large-scale democracy again become impossible? One difficulty is that the computer network makes it easier to join the debate. In the past, organizations like newspapers, radio stations, and established political parties acted as gatekeepers, deciding who was heard in the public sphere. Social media undermined the power of these gatekeepers, leading to a more open but also more anarchical public conversation.

    Whenever new groups join the conversation, they bring with them new viewpoints and interests, and often question the old consensus about how to conduct the debate and reach decisions. The rules of discussion must be negotiated anew. This is a potentially positive development, one that can lead to a more inclusive democratic system. After all, correcting previous biases and allowing previously disenfranchised people to join the public discussion is a vital part of democracy. However, in the short term this creates disturbances and disharmony. If no agreement is reached on how to conduct the public debate and how to reach decisions, the result is anarchy rather than democracy.

    The anarchical potential of AI is particularly alarming, because it is not only new human groups that it allows to join the public debate. For the first time ever, democracy must contend with a cacophony of nonhuman voices, too. On many social media platforms, bots constitute a sizable minority of participants. One analysis estimated that out of a sample of 20 million tweets generated during the 2016 U.S. election campaign, 3.8 million tweets (almost 20 percent) were generated by bots.48

    By the early 2020s, things got worse. A 2020 study assessed that bots were producing 43.2 percent of tweets.49 A more comprehensive 2022 study by the digital intelligence agency Similarweb found that 5 percent of Twitter users were probably bots, but they generated “between 20.8% and 29.2% of the content posted to Twitter.”50 When humans try to debate a crucial question like whom to elect as U.S. president, what happens if many of the voices they hear are produced by computers?

    Another worrying trend concerns content. Bots were initially deployed to influence public opinion by the sheer volume of messages they disseminated. They retweeted or recommended certain human-produced content, but they couldn’t create new ideas themselves, nor could they forge intimate bonds with humans. However, the new breed of generative AI tools like ChatGPT can do exactly that. In a 2023 study, published in Science Advances, researchers asked humans and ChatGPT to create both accurate and deliberately misleading short texts on issues such as vaccines, 5G technology, climate change, and evolution. The texts were then presented to seven hundred humans, who were asked to evaluate their reliability. The humans were good at recognizing the falsity of human-produced disinformation but tended to regard AI-produced disinformation as accurate.51

    So, what happens to democratic debates when millions—and eventually billions—of highly intelligent bots are not only composing extremely compelling political manifestos and creating deepfake images and videos but also able to win our trust and friendship? If I engage online in a political debate with an AI, it is a waste of time for me to try to change the AI’s opinions; being a nonconscious entity, it doesn’t really care about politics, and it cannot vote in the elections. But the more I talk with the AI, the better it gets to know me, so it can gain my trust, hone its arguments, and gradually change my views. In the battle for hearts and minds, intimacy is an extremely powerful weapon. Previously, political parties could command our attention, but they had difficulty mass-producing intimacy. Radio sets could broadcast a leader’s speech to millions, but they could not befriend the listeners. Now a political party, or even a foreign government, could deploy an army of bots that build friendships with millions of citizens and then use that intimacy to influence their worldview.

    Finally, algorithms are not only joining the conversation; they are increasingly orchestrating it. Social media allows new groups of humans to challenge the old rules of debate. But negotiations about the new rules are not conducted by humans. Rather, as explained in our previous analysis of social media algorithms, it is often the algorithms that make the rules. In the nineteenth and twentieth centuries, when media moguls censored some views and promoted others, this might have undermined democracy, but at least the moguls were humans, and their decisions could be subjected to democratic scrutiny. It is far more dangerous if we allow inscrutable algorithms to decide which views to disseminate.

    If manipulative bots and inscrutable algorithms come to dominate the public conversation, this could cause democratic debate to collapse exactly when we need it most. Just when we must make momentous decisions about fast-evolving new technologies, the public sphere will be flooded by computer-generated fake news, citizens will not be able to tell whether they are having a debate with a human friend or a manipulative machine, and no consensus will remain about the most basic rules of discussion or the most basic facts. This kind of anarchical information network cannot produce either truth or order and cannot be sustained for long. If we end up with anarchy, the next step would probably be the establishment of a dictatorship as people agree to trade their liberty for some certainty.

    BAN THE BOTS

    In the face of the threat algorithms pose to the democratic conversation, democracies are not helpless. They can and should take measures to regulate AI and prevent it from polluting our infosphere with fake people spewing fake news. The philosopher Daniel Dennett has suggested that we can take inspiration from traditional regulations in the money market.52 Ever since coins and later banknotes were invented, it was always technically possible to counterfeit them. Counterfeiting posed an existential danger to the financial system, because it eroded people’s trust in money. If bad actors flooded the market with counterfeit money, the financial system would have collapsed. Yet the financial system managed to protect itself for thousands of years by enacting laws against counterfeiting money. As a result, only a relatively small percentage of money in circulation was forged, and people’s trust in it was maintained.53

    What’s true of counterfeiting money should also be true of counterfeiting humans. If governments took decisive action to protect trust in money, it makes sense to take equally decisive measures to protect trust in humans. Prior to the rise of AI, one human could pretend to be another, and society punished such frauds. But society didn’t bother to outlaw the creation of counterfeit humans, since the technology to do so didn’t exist. Now that AI can pass itself off as human, it threatens to destroy trust between humans and to unravel the fabric of society. Dennett suggests, therefore, that governments should outlaw fake humans as decisively as they have previously outlawed fake money.54

    The law should prohibit not just deepfaking specific real people—creating a fake video of the U.S. president, for example—but also any attempt by a nonhuman agent to pass itself off as a human. If anyone complains that such strict measures violate freedom of speech, they should be reminded that bots don’t have freedom of speech. Banning human beings from a public platform is a sensitive step, and democracies should be very careful about such censorship. However, banning bots is a simple issue: it doesn’t violate anyone’s rights, because bots don’t have rights.55

    None of this means that democracies must ban all bots, algorithms, and AIs from participating in any discussion. Digital tools are welcome to join many conversations, provided they don’t pretend to be humans. For example, AI doctors can be extremely helpful. They can monitor our health twenty-four hours a day, offer medical advice tailored to our individual medical conditions and personality, and answer our questions with infinite patience. But the AI doctor should never try to pass itself off as a human.

    Another important measure democracies can adopt is to ban unsupervised algorithms from curating key public debates. We can certainly continue to use algorithms to run social media platforms; obviously, no human can do that. But the principles the algorithms use to decide which voices to silence and which to amplify must be vetted by a human institution. While we should be careful about censoring genuine human views, we can forbid algorithms to deliberately spread outrage. At the very least, corporations should be transparent about the curation principles their algorithms follow. If they use outrage to capture our attention, let them be clear about their business model and about any political connections they might have. If the algorithm systematically disappears videos that aren’t aligned with the company’s political agenda, users should know this.

    These are just a few of numerous suggestions made in recent years for how democracies could regulate the entry of bots and algorithms into the public conversation. Naturally, each has its advantages and drawbacks, and none would be easy to implement. Also, since the technology is developing so rapidly, regulations are likely to become outdated quickly. What I would like to point out here is only that democracies can regulate the information market and that their very survival depends on these regulations. The naive view of information opposes regulation and believes that a completely free information market will spontaneously generate truth and order. This is completely divorced from the actual history of democracy. Preserving the democratic conversation has never been easy, and all venues where this conversation has previously taken place—from parliaments and town halls to newspapers and radio stations—have required regulation. This is doubly true in an era when an alien form of intelligence threatens to dominate the conversation.

    THE FUTURE OF DEMOCRACY

    For most of history large-scale democracy was impossible because information technology wasn’t sophisticated enough to hold a large-scale political conversation. Millions of people spread over tens of thousands of square kilometers didn’t have the tools to conduct a real-time discussion of public affairs. Now, ironically, democracy may prove impossible because information technology is becoming too sophisticated. If unfathomable algorithms take over the conversation, and particularly if they quash reasoned arguments and stoke hate and confusion, public discussion cannot be maintained. Yet if democracies do collapse, it will likely result not from some kind of technological inevitability but from a human failure to regulate the new technology wisely.

    We cannot foretell how things will play out. At present, however, it is clear that the information network of many democracies is breaking down. Democrats and Republicans in the United States can no longer agree on even basic facts—such as who won the 2020 presidential elections—and can hardly hold a civil conversation anymore. Bipartisan cooperation in Congress, once a fundamental feature of U.S. politics, has almost disappeared.56 The same radicalizing processes occur in many other democracies, from the Philippines to Brazil. When citizens cannot talk with one another, and when they view each other as enemies rather than political rivals, democracy is untenable.

    Nobody knows for sure what is causing the breakdown of democratic information networks. Some say it results from ideological fissures, but in fact in many dysfunctional democracies the ideological gaps don’t seem to be bigger than in previous generations. In the 1960s, the United States was riven by deep ideological conflicts about the civil rights movement, the sexual revolution, the Vietnam War, and the Cold War. These tensions caused a surge in political violence and assassinations, but Republicans and Democrats were still able to agree on the results of elections, they maintained a common belief in democratic institutions like the courts,57 and they were able to work together in Congress at least on some issues. For example, the Civil Rights Act of 1964 was passed in the Senate with the support of forty-six Democrats and twenty-seven Republicans. Is the ideological gap in the 2020s that much bigger than it was in the 1960s? And if it isn’t ideology, what is driving people apart?

    Many point the finger at social media algorithms. We have explored the divisive impact of social media in previous chapters, but despite the damning evidence it seems that there must be additional factors at play. The truth is that while we can easily observe that the democratic information network is breaking down, we aren’t sure why. That itself is a characteristic of the times. The information network has become so complicated, and it relies to such an extent on opaque algorithmic decisions and inter-computer entities, that it has become very difficult for humans to answer even the most basic of political questions: Why are we fighting each other?
    If we cannot discover what is broken and fix it, large-scale democracies may not survive the rise of computer technology. If this indeed comes to pass, what might replace democracy as the dominant political system? Does the future belong to totalitarian regimes, or might computers make totalitarianism untenable too? As we shall see, human dictators have their own reasons to be terrified of AI.

    CHAPTER 10 Totalitarianism: All Power to the Algorithms?

    Discussions of the ethics and politics of the new computer network often focus on the fate of democracies. If authoritarian and totalitarian regimes are mentioned, it is mainly as the dystopian destination that “we” might reach if “we” fail to manage the computer network wisely.1 However, as of 2024, more than half of “us” already live under authoritarian or totalitarian regimes,2 many of which were established long before the rise of the computer network. To understand the impact of algorithms and AI on humankind, we should ask ourselves what their impact will be not only on democracies like the United States and Brazil but also on the Chinese Communist Party and the royal house of Saud.

    As explained in previous chapters, the information technology available in premodern eras made both large-scale democracy and large-scale totalitarianism unworkable. Large polities like the Chinese Han Empire and the eighteenth-century Saudi emirate of Diriyah were usually limited autocracies. In the twentieth century, new information technology enabled the rise of both large-scale democracy and large-scale totalitarianism, but totalitarianism suffered from a severe disadvantage. Totalitarianism seeks to channel all information to one hub and process it there. Technologies like the telegraph, the telephone, the typewriter, and the radio facilitated the centralization of information, but they couldn’t process the information and make decisions by themselves. This remained something that only humans could do.

    The more information flowed to the center, the harder it became to process it. Totalitarian rulers and parties often made costly mistakes, and the system lacked mechanisms to identify and correct these errors. The democratic way of distributing information—and the power to make decisions—between many institutions and individuals worked better. It could cope far more efficiently with the flood of data, and if one institution made a wrong decision, it could eventually be rectified by others.

    The rise of machine-learning algorithms, however, may be exactly what the Stalins of the world have been waiting for. AI could tilt the technological balance of power in favor of totalitarianism. Indeed, whereas flooding people with data tends to overwhelm them and therefore leads to errors, flooding AI with data tends to make it more efficient. Consequently, AI seems to favor the concentration of information and decision making in one place.

    Even in democratic countries, a few corporations like Google, Facebook, and Amazon have become monopolies in their domains, partly because AI tips the balance in favor of the giants. In traditional industries like restaurants, size isn’t an overwhelming advantage. McDonald’s is a worldwide chain that feeds more than fifty million people a day,3 and its size gives it many advantages in terms of costs, branding, and so forth. You can nevertheless open a neighborhood restaurant that could hold its own against the local McDonald’s. Even though your restaurant might be serving just two hundred customers a day, you still have a chance of making better food than McDonald’s and gaining the loyalty of happier customers.

    It works differently in the information market. The Google search engine is used every day by between two and three billion people making 8.5 billion searches.4 Suppose a local start-up search engine tries to compete with Google. It doesn’t stand a chance. Because Google is already used by billions, it has so much more data at its disposal that it can train far better algorithms, which will attract even more traffic, which will be used to train the next generation of algorithms, and so on. Consequently, in 2023 Google controlled 91.5 percent of the global search market.5

    Or consider genetics. Suppose several companies in different countries try to develop an algorithm that identifies connections between genes and medical conditions. New Zealand has a population of 5 million people, and privacy regulations restrict access to their genetic and medical records. China has about 1.4 billion inhabitants and laxer privacy regulations.6 Who do you think has a better chance of developing a genetic algorithm? If Brazil then wants to buy a genetic algorithm for its health-care system, it would have a strong incentive to opt for the much more accurate Chinese algorithm than the one from New Zealand. If the Chinese algorithm then hones itself on more than 200 million Brazilians, it will get even better. Which would prompt more countries to choose the Chinese algorithm. Soon enough, most of the world’s medical information would flow to China, making its genetic algorithm unbeatable.

    The attempt to concentrate all information and power in one place, which was the Achilles’ heel of twentieth-century totalitarian regimes, might become a decisive advantage in the age of AI. At the same time, as noted in an earlier chapter, AI could also make it possible for totalitarian regimes to establish total surveillance systems that make resistance almost impossible.

    Some people believe that blockchain could provide a technological check on such totalitarian tendencies, because blockchain is inherently friendly to democracy and hostile to totalitarianism. In a blockchain system, decisions require the approval of 51 percent of users. That may sound democratic, but blockchain technology has a fatal flaw. The problem lies with the word “users.” If one person has ten accounts, she counts as ten users. If a government controls 51 percent of accounts, then the government constitutes 51 percent of the users. There are already examples of blockchain networks where a government is 51 percent of users.7

    And when a government is 51 percent of users in a blockchain, it gives the government control not just over the chain’s present but even over its past. Autocrats have always wanted the power to change the past. Roman emperors, for example, frequently engaged in the practice of damnatio memoriae—expunging the memory of rivals and enemies. After the emperor Caracalla murdered his brother and competitor for the throne, Geta, he tried to obliterate the latter’s memory. Inscriptions bearing Geta’s name were chiseled out, coins bearing his effigy were melted down, and the mere mentioning of Geta’s name was punishable by death.8 One surviving painting from the time, the Severan Tondo, was made during the reign of their father—Septimius Severus—and originally showed both brothers together with Septimius and their mother, Julia Domna. But someone later not only obliterated Geta’s face but smeared excrement over it. Forensic analysis identified tiny pieces of dry shit where Geta’s face should have been.9

    Modern totalitarian regimes have been similarly fond of changing the past. After Stalin rose to power, he made a supreme effort to delete Trotsky—the architect of the Bolshevik Revolution and the founder of the Red Army—from all historical records. During the Stalinist Great Terror of 1937–39, whenever prominent people like Nikolai Bukharin and Marshal Mikhail Tukhachevsky were purged and executed, evidence of their existence was erased from books, academic papers, photographs, and paintings.10 This degree of erasure demanded a huge manual effort. With blockchain, changing the past would be far easier. A government that controls 51 percent of users can disappear people from history at the press of a button.

    THE BOT PRISON

    While there are many ways in which AI can cement central power, authoritarian and totalitarian regimes have their own problems with it. First and foremost, dictatorships lack experience in controlling inorganic agents. The foundation of every despotic information network is terror. But computers are not afraid of being imprisoned or killed. If a chatbot on the Russian internet mentions the war crimes committed by Russian troops in Ukraine, tells an irreverent joke about Vladimir Putin, or criticizes the corruption of Putin’s United Russia party, what could the Putin regime do to that chatbot? FSB agents cannot imprison it, torture it, or threaten its family. The government could of course block or delete it, and try to find and punish its human creators, but this is a much more difficult task than disciplining human users.

    In the days when computers could not generate content by themselves, and could not hold an intelligent conversation, only a human being could express dissenting opinions on Russian social network channels like VKontakte and Odnoklassniki. If that human being was physically in Russia, they risked the wrath of the Russian authorities. If that human being was physically outside Russia, the authorities could try to block their access. But what happens if Russian cyberspace is filled by millions of bots that can generate content and hold conversations, learning and developing by themselves? These bots might be preprogrammed by Russian dissidents or foreign actors to intentionally spread unorthodox views, and it might be impossible for the authorities to prevent it. Even worse, from the viewpoint of Putin’s regime, what happens if authorized bots gradually develop dissenting views by themselves, simply by collecting information on what is happening in Russia and spotting patterns in it?

    That’s the alignment problem, Russian-style. Russia’s human engineers can do their best to create AIs that are totally aligned with the regime, but given the ability of AI to learn and change by itself, how can the human engineers ensure that the AI never deviates into illicit territory? It is particularly interesting to note that as George Orwell explained in Nineteen Eighty-Four, totalitarian information networks often rely on doublespeak. Russia is an authoritarian state that claims to be a democracy. The Russian invasion of Ukraine has been the largest war in Europe since 1945, yet officially it is defined as a “special military operation,” and referring to it as a “war” has been criminalized and is punishable by a prison term of up to three years or a fine of up to fifty thousand rubles.11

    The Russian Constitution makes grandiose promises about how “everyone shall be guaranteed freedom of thought and speech” (Article 29.1), how “everyone shall have the right freely to seek, receive, transmit, produce and disseminate information” (29.4), and how “the freedom of the mass media shall be guaranteed. Censorship shall be prohibited” (29.5). Hardly any Russian citizen is naive enough to take these promises at face value. But computers are bad at understanding doublespeak. A chatbot instructed to adhere to Russian law and values might read that constitution and conclude that freedom of speech is a core Russian value. Then, after spending a few days in Russian cyberspace and monitoring what is happening in the Russian information sphere, the chatbot might start criticizing the Putin regime for violating the core Russian value of freedom of speech. Humans too notice such contradictions, but avoid pointing them out, due to fear. But what would prevent a chatbot from pointing out damning patterns? And how might Russian engineers explain to a chatbot that though the Russian Constitution guarantees all citizens freedom of speech and forbids censorship, the chatbot shouldn’t actually believe the constitution nor should it ever mention the gap between theory and reality? As the Ukrainian guide told me at Chernobyl, people in totalitarian countries grow up with the idea that questions lead to trouble. But if you train an algorithm on the principle that “questions lead to trouble,” how will that algorithm learn and develop?

    Finally, if the government adopts some disastrous policy and then changes its mind, it usually covers itself by blaming the disaster on someone else. Humans learn the hard way to forget facts that might get them in trouble. But how would you train a chatbot to forget that the policy vilified today was actually the official line only a year ago? This is a major technological challenge that dictatorships will find difficult to deal with, especially as chatbots become more powerful and more opaque.

    Of course, democracies face analogous problems with chatbots that say unwelcome things or raise dangerous questions. What happens if despite the best efforts of Microsoft or Facebook engineers, their chatbot begins spewing racist slurs? The advantage of democracies is that they have far more leeway in dealing with such rogue algorithms. Because democracies take freedom of speech seriously, they keep far fewer skeletons in their closet, and they have developed a relatively high level of tolerance even to antidemocratic speech. Dissident bots will present a far bigger challenge to totalitarian regimes that have entire cemeteries in their closets and zero tolerance of criticism.

    ALGORITHMIC TAKEOVER

    In the long term, totalitarian regimes are likely to face an even bigger danger: instead of criticizing them, an algorithm might gain control of them. Throughout history, the biggest threat to autocrats usually came from their own subordinates. As noted in chapter 4, no Roman emperor or Soviet premier was toppled by a democratic revolution, but they were always in danger of being overthrown or turned into puppets by their own subordinates. If a twenty-first-century autocrat gives computers too much power, that autocrat might become their puppet. The last thing a dictator wants is to create something more powerful than himself, or a force that he does not know how to control.

    To illustrate the point, allow me to use an admittedly outlandish thought experiment, the totalitarian equivalent of Bostrom’s paper-clip apocalypse. Imagine that the year is 2050, and the Great Leader is woken up at four in the morning by an urgent call from the Surveillance & Security Algorithm. “Great Leader, we are facing an emergency. I’ve crunched trillions of data points, and the pattern is unmistakable: the defense minister is planning to assassinate you in the morning and take power himself. The hit squad is ready, waiting for his command. Give me the order, though, and I’ll liquidate him with a precision strike.”
    “But the defense minister is my most loyal supporter,” says the Great Leader. “Only yesterday he said to me—”
    “Great Leader, I know what he said to you. I hear everything. But I also know what he said afterward to the hit squad. And for months I’ve been picking up disturbing patterns in the data.”
    “Are you sure you were not fooled by deepfakes?”
    “I’m afraid the data I relied on is 100 percent genuine,” says the algorithm. “I checked it with my special deepfake-detecting sub-algorithm. I can explain exactly how we know it isn’t a deepfake, but that would take us a couple of weeks. I didn’t want to alert you before I was sure, but the data points converge on an inescapable conclusion: a coup is under way. Unless we act now, the assassins will be here in an hour. But give me the order, and I’ll liquidate the traitor.”

    By giving so much power to the Surveillance & Security Algorithm, the Great Leader has placed himself in an impossible situation. If he distrusts the algorithm, he may be assassinated by the defense minister, but if he trusts the algorithm and purges the defense minister, he becomes the algorithm’s puppet. Whenever anyone tries to make a move against the algorithm, the algorithm knows exactly how to manipulate the Great Leader. Note that the algorithm doesn’t need to be a conscious entity to engage in such maneuvers. As Bostrom’s paper-clip thought experiment indicates—and as GPT-4 lying to the TaskRabbit worker demonstrated on a small scale—a nonconscious algorithm may seek to accumulate power and manipulate people even without having any human drives like greed or egotism.

    If algorithms ever develop capabilities like those in the thought experiment, dictatorships would be far more vulnerable to algorithmic takeover than democracies. It would be difficult for even a super-Machiavellian AI to seize power in a distributed democratic system like the United States. Even if the AI learns to manipulate the U.S. president, it might face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and sundry NGOs. How would the algorithm, for example, deal with a Senate filibuster?

    Seizing power in a highly centralized system is much easier. When all power is concentrated in the hands of one person, whoever controls access to the autocrat can control the autocrat—and the entire state. To hack the system, one needs to learn to manipulate just a single individual. An archetypal case is how the Roman emperor Tiberius became the puppet of Lucius Aelius Sejanus, the commander of the Praetorian Guard.

    The Praetorians were initially established by Augustus as a small imperial bodyguard. Augustus appointed two prefects to command the bodyguard so that neither could gain too much power over him.12 Tiberius, however, was not as wise. His paranoia was his greatest weakness. Sejanus, one of the two Praetorian prefects, artfully played on Tiberius’s fears. He constantly uncovered alleged plots to assassinate Tiberius, many of which were pure fantasies. The suspicious emperor grew more distrustful of everyone except Sejanus. He made Sejanus sole prefect of the Praetorian Guard, expanded it into an army of twelve thousand, and gave Sejanus’s men additional roles in policing and administrating the city of Rome. Finally, Sejanus persuaded Tiberius to move out of the capital to Capri, arguing that it would be much easier to protect the emperor on a small island than in a crowded metropolis full of traitors and spies. In truth, explained the Roman historian Tacitus, Sejanus’s aim was to control all the information reaching the emperor: “Access to the emperor would be under his own control, and letters, for the most part being conveyed by soldiers, would pass through his hands.”13

    With the Praetorians controlling Rome, Tiberius isolated in Capri, and Sejanus controlling all information reaching Tiberius, the Praetorian commander became the true ruler of the empire. Sejanus purged anyone who might oppose him—including members of the imperial family—by falsely accusing them of treason. Since nobody could contact the emperor without Sejanus’s permission, Tiberius was reduced to a puppet.

    Eventually someone—perhaps Tiberius’s sister-in-law Antonia—located an opening in Sejanus’s information cordon. A letter was smuggled to the emperor, explaining to him what was going on. But by the time Tiberius woke up to the danger and resolved to get rid of Sejanus, he was almost helpless. How could he topple the man who controlled not just the bodyguards but also all communications with the outside world? If he tried to make a move, Sejanus could imprison him on Capri indefinitely and inform the Senate and the army that the emperor was too ill to travel anywhere.

    Tiberius nevertheless managed to turn the tables on Sejanus. As Sejanus grew in power and became preoccupied with running the empire, he lost touch with the day-to-day minutiae of Rome’s security apparatus. Tiberius managed to secretly gain the support of Naevius Sutorius Macro, commander of Rome’s fire brigade and night watch. Macro orchestrated a coup against Sejanus, and as a reward Tiberius made Macro the new commander of the Praetorian Guard. A few years later, Macro had Tiberius killed.14

    Power lies at the nexus where the information channels merge. Since Tiberius allowed the information channels to merge in the person of Sejanus, the latter became the true center of power, while Tiberius was reduced to a puppet.

    The fate of Tiberius indicates the delicate balance that all dictators must strike. They try to concentrate all information in one place, but they must be careful that the different channels of information are allowed to merge only in their own person. If the information channels merge somewhere else, that then becomes the true nexus of power. When the regime relies on humans like Sejanus and Macro, a skillful dictator can play them one against the other in order to remain on top. Stalin’s purges were all about that. Yet when a regime relies on a powerful but inscrutable AI that gathers and analyzes all information, the human dictator is in danger of losing all power. He may remain in the capital and yet be isolated on a digital island, controlled and manipulated by the AI.

    THE DICTATOR’S DILEMMA

    In the next few years, the dictators of our world face more urgent problems than an algorithmic takeover. No current AI system can manipulate regimes at such a scale. However, totalitarian systems are already in danger of putting far too much trust in algorithms. Whereas democracies assume that everyone is fallible, in totalitarian regimes the fundamental assumption is that the ruling party or the supreme leader is always right. Regimes based on that assumption are conditioned to believe in the existence of an infallible intelligence and are reluctant to create strong self-correcting mechanisms that might monitor and regulate the genius at the top.

    Until now such regimes placed their faith in human parties and leaders and were hothouses for the growth of personality cults. But in the twenty-first century this totalitarian tradition prepares them to expect AI infallibility. Systems that could believe in the perfect genius of a Mussolini, a Ceauşescu, or a Khomeini are primed to also believe in the flawless genius of a superintelligent computer. This could have disastrous results for their citizens, and potentially for the rest of the world as well. What happens if the algorithm in charge of environmental policy makes a big mistake, but there are no self-correcting mechanisms that can identify and correct its error? What happens if the algorithm running the state’s social credit system begins terrorizing not just the general population but even the members of the ruling party and simultaneously begins to label anyone that questions its policies “an enemy of the people”?

    Dictators have always suffered from weak self-correcting mechanisms and have always been threatened by powerful subordinates. The rise of AI may greatly exacerbate these problems. The computer network therefore presents dictators with an excruciating dilemma. They could decide to escape the clutches of their human underlings by trusting a supposedly infallible technology, in which case they might become the technology’s puppet. Or, they could build a human institution to supervise the AI, but that institution might limit their own power, too.

    If even just a few of the world’s dictators choose to put their trust in AI, this could have far-reaching consequences for the whole of humanity. Science fiction is full of scenarios of an AI getting out of control and enslaving or eliminating humankind. Most sci-fi plots explore these scenarios in the context of democratic capitalist societies. This is understandable. Authors living in democracies are obviously interested in their own societies, whereas authors living in dictatorships are usually discouraged from criticizing their rulers. But the weakest spot in humanity’s anti-AI shield is probably the dictators. The easiest way for an AI to seize power is not by breaking out of Dr. Frankenstein’s lab but by ingratiating itself with some paranoid Tiberius.

    This is not a prophecy, just a possibility. After 1945, dictators and their subordinates cooperated with democratic governments and their citizens to contain nuclear weapons. On July 9, 1955, Albert Einstein, Bertrand Russell, and a number of other eminent scientists and thinkers published the Russell-Einstein Manifesto, calling on the leaders of both democracies and dictatorships to cooperate on preventing nuclear war. “We appeal,” said the manifesto, “as human beings, to human beings: remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.”15 This is true of AI too. It would be foolish of dictators to believe that AI will necessarily tilt the balance of power in their favor. If they aren’t careful, AI will just grab power to itself.

    CHAPTER 11 The Silicon Curtain: Global Empire or Global Split?

    The previous two chapters explored how different human societies might react to the rise of the new computer network. But we live in an interconnected world, where the decisions of one country can have a profound impact on others. Some of the gravest dangers posed by AI do not result from the internal dynamics of a single human society. Rather, they arise from dynamics involving many societies, which might lead to new arms races, new wars, and new imperial expansions.

    Computers are not yet powerful enough to completely escape our control or destroy human civilization by themselves. As long as humanity stands united, we can build institutions that will control AI and will identify and correct algorithmic errors. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI, then, poses an existential danger to humankind not because of the malevolence of computers but because of our own shortcomings.

    Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the dictator trusts his AI more than his defense minister, wouldn’t it make sense to have the AI supervise the country’s most powerful weapons? If the AI then makes an error, or begins to pursue an alien goal, the result could be catastrophic, and not just for that country.

    Similarly, terrorists focused on events in one corner of the world might use AI to instigate a global pandemic. The terrorists might be more versed in some apocalyptic mythology than in the science of epidemiology, but they just need to set the goal, and all else will be done by their AI. The AI could synthesize a new pathogen, order it from commercial laboratories or print it in biological 3-D printers, and devise the best strategy to spread it around the world, via airports or food supply chains. What if the AI synthesizes a virus that is as deadly as Ebola, as contagious as COVID-19, and as slow acting as AIDS? By the time the first victims begin to die, and the world is alerted to the danger, most people on earth might have already been infected.1

    As we have seen in previous chapters, human civilization is threatened not only by physical and biological weapons of mass destruction like atom bombs and viruses. Human civilization could also be destroyed by weapons of social mass destruction, like stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money, and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.

    Many societies—both democracies and dictatorships—may act responsibly to regulate such usages of AI, clamp down on bad actors, and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind. Climate change can devastate even countries that adopt excellent environmental regulations, because it is a global rather than a national problem. AI, too, is a global problem. Countries would be naive to imagine that as long as they regulate AI wisely within their own borders, these regulations will protect them from the worst outcomes of the AI revolution. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.

    At present, the world is divided into about two hundred nation-states, most of which gained their independence only after 1945. They are not all equal. The list contains two superpowers, a handful of major powers, several blocs and alliances, and a lot of smaller fish. Still, even the tiniest states enjoy some leverage, as evidenced by their ability to play the superpowers against each other. In the early 2020s, for example, China and the United States competed for influence in the strategically important South Pacific region. Both superpowers courted island nations like Tonga, Tuvalu, Kiribati, and the Solomon Islands. The governments of these small nations—whose populations range from 740,000 (Solomon Islands) to 11,000 (Tuvalu)—had substantial leeway to decide which way to tack and were able to extract considerable concessions and aid.2

    Other small states, such as Qatar, have established themselves as important players in the geopolitical arena. With only 300,000 citizens, Qatar is nevertheless pursuing ambitious foreign policy aims in the Middle East, is playing an outsized rule in the global economy, and is home to Al Jazeera, the Arab world’s most influential TV network. One might argue that Qatar is able to punch well above its size because it is the third-largest exporter of natural gas in the world. Yet in a different international setting, that would have made Qatar not an independent actor but the first course on the menu of any imperial conqueror. It is telling that, as of 2024, Qatar’s much bigger neighbors, and the world’s hegemonic powers, are letting the tiny Gulf state hold on to its fabulous riches. Many people describe the international system as a jungle. If so, it is a jungle in which tigers allow fat chickens to live in relative safety.

    Qatar, Tonga, Tuvalu, Kiribati, and the Solomon Islands all indicate that we are living in a postimperial era. They gained their independence from the British Empire in the 1970s, as part of the final demise of the European imperial order. The leverage they now have in the international arena testifies that in the first quarter of the twenty-first century power is distributed between a relatively large number of players, rather than monopolized by a few empires.

    How might the rise of the new computer network change the shape of international politics? Aside from apocalyptic scenarios such as a dictatorial AI launching a nuclear war, or a terrorist AI instigating a lethal pandemic, computers pose two main challenges to the current international system. First, since computers make it easier to concentrate information and power in a central hub, humanity could enter a new imperial era. A few empires (or perhaps a single empire) might bring the whole world under a much tighter grip than that of the British Empire or the Soviet Empire. Tonga, Tuvalu, and Qatar would be transformed from independent states into colonial possessions—just as they were fifty years ago.

    Second, humanity could split along a new Silicon Curtain that would pass between rival digital empires. As each regime chooses its own answer to the AI alignment problem, to the dictator’s dilemma, and to other technological quandaries, each might create a separate and very different computer network. The various networks might then find it ever more difficult to interact, and so would the humans they control. Qataris living as part of an Iranian or Russian network, Tongans living as part of a Chinese network, and Tuvaluans living as part of an American network could come to have such different life experiences and worldviews that they would hardly be able to communicate or to agree on much.

    If these developments indeed materialize, they could easily lead to their own apocalyptic outcome. Perhaps each empire can keep its nuclear weapons under human control and its lunatics away from bioweapons. But a human species divided into hostile camps that cannot understand each other stands a small chance of avoiding devastating wars or preventing catastrophic climate change. A world of rival empires separated by an opaque Silicon Curtain would also be incapable of regulating the explosive power of AI.

    THE RISE OF DIGITAL EMPIRES

    In chapter 9 we touched briefly on the link between the Industrial Revolution and modern imperialism. It was not evident, at the beginning, that industrial technology would have much of an impact on empire building. When the first steam engines were put to use to pump water in British coal mines in the eighteenth century, no one foresaw that they would eventually power the most ambitious imperial projects in human history. When the Industrial Revolution subsequently gathered steam in the early nineteenth century, it was driven by private businesses, because governments and armies were relatively slow to appreciate its potential geopolitical impact. The world’s first commercial railway, for example, which opened in 1830 between Liverpool and Manchester, was built and operated by the privately owned Liverpool and Manchester Railway Company. The same was true of most other early railway lines in the U.K., the United States, France, Germany, and elsewhere. At that point, it wasn’t at all clear why governments or armies should get involved in such commercial enterprises.

    By the middle of the nineteenth century, however, the governments and armed forces of the leading industrial powers had fully recognized the immense geopolitical potential of modern industrial technology. The need for raw materials and markets justified imperialism, while industrial technologies made imperial conquests easier. Steamships were crucial, for example, to the British victory over the Chinese in the Opium Wars, and railroads played a decisive role in the American expansion west and the Russian expansion east and south. Indeed, entire imperial projects were shaped around the construction of railroads such as the Trans-Siberian and Trans-Caspian Russian lines, the German dream of a Berlin-Baghdad railway, and the British dream of building a railway from Cairo to the Cape.3

    Nevertheless, most polities didn’t join the burgeoning industrial arms race in time. Some lacked the capacity to do so, like the Melanesian chiefdoms of the Solomon Islands and the Al Thani tribe of Qatar. Others, like the Burmese Empire, the Ashanti Empire, and the Chinese Empire, might have had the capacity but lacked the will and foresight. Their rulers and inhabitants either didn’t follow developments in places like the British Midlands or didn’t think they had much to do with them. Why should the rice farmers of the Irrawaddy basin in Burma or the Yangtze basin in China concern themselves about the Liverpool–Manchester Railway? By the end of the nineteenth century, however, these rice farmers found themselves either conquered or indirectly exploited by the British Empire. Most other stragglers in the industrial race also ended up dominated by one industrial power or other. Could something similar happen with AI?

    When the race to develop AI gathered steam in the early years of the twenty-first century, it too was initially spearheaded by private entrepreneurs in a handful of countries. They set their sights on centralizing the world’s flow of information. Google wanted to organize all the world’s information in one place. Amazon sought to centralize all the world’s shopping. Facebook wished to connect all the world’s social spheres. But concentrating all the world’s information is neither practical nor helpful unless one can centrally process that information. And in 2000, when Google’s search engine was making its baby steps, when Amazon was a modest online bookshop, and when Mark Zuckerberg was in high school, the AI necessary to centrally process oceans of data was nowhere at hand. But some people bet it was just around the corner.

    Kevin Kelly, the founding editor of Wired magazine, recounted how in 2002 he attended a small party at Google and struck up a conversation with Larry Page. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” Page explained that Google wasn’t focused on search at all. “We’re really making an AI,” he said.4 Having lots of data makes it easier to create an AI. And AI can turn lots of data into lots of power.

    By the 2010s, the dream was becoming a reality. Like every major historical revolution, the rise of AI was a gradual process involving numerous steps. And like every revolution, a few of these steps were seen as turning points, just like the opening of the Liverpool–Manchester Railway. In the prolific literature on the story of AI, two events pop up again and again. The first occurred when, on September 30, 2012, a convolutional neural network called AlexNet won the ImageNet Large Scale Visual Recognition Challenge.

    If you have no idea what a convolutional neural network is, and if you have never heard of the ImageNet challenge, you are not alone. More than 99 percent of us are in the same situation, which is why AlexNet’s victory was hardly front-page news in 2012. But some humans did hear about AlexNet’s victory and decoded the writing on the wall.

    They knew, for example, that ImageNet is a database of millions of annotated digital images. Did a website ever ask you to prove that you are not a robot by looking at a set of images and indicating which ones contain a car or a cat? The images you clicked were perhaps added to the ImageNet database. The same thing might also have happened to tagged images of your pet cat that you uploaded online. The ImageNet Large Scale Visual Recognition Challenge tests various algorithms on how well they are able to identify the annotated images in the database. Can they correctly identify the cats? When humans are asked to do it, out of one hundred cat images we correctly identify ninety-five as cats. In 2010 the best algorithms had a success rate of only 72 percent. In 2011 the algorithmic success rate crawled up to 75 percent. In 2012 the AlexNet algorithm won the challenge and stunned the still minuscule community of AI experts by achieving a success rate of 85 percent. While this improvement may not sound like much to laypersons, it demonstrated to the experts the potential for rapid progress in certain AI domains. By 2015 a Microsoft algorithm achieved 96 percent accuracy, surpassing the human ability to identify cat images.

    In 2016, The Economist published a piece titled “From Not Working to Neural Networking” that asked, “How has artificial intelligence, associated with hubris and disappointment since its earliest days, suddenly become the hottest field in technology?” It pointed to AlexNet’s victory as the moment when “people started to pay attention, not just within the AI community but across the technology industry as a whole.” The article was illustrated with an image of a robotic hand holding up a photo of a cat.5

    All those cat images that tech giants had been harvesting from across the world, without paying a penny to either users or tax collectors, turned out to be incredibly valuable. The AI race was on, and the competitors were running on cat images. At the same time that AlexNet was preparing for the ImageNet challenge, Google too was training its AI on cat images, and even created a dedicated cat-image-generating AI called the Meow Generator.6 The technology developed by recognizing cute kittens was later deployed for more predatory purposes. For example, Israel relied on it to create the Red Wolf, Blue Wolf, and Wolf Pack apps used by Israeli soldiers for facial recognition of Palestinians in the Occupied Territories.7 The ability to recognize cat images also led to the algorithms Iran uses to automatically recognize unveiled women and enforce its hijab laws. As explained in chapter 8, massive amounts of data are required to train machine-learning algorithms. Without millions of cat images uploaded and annotated for free by people across the world, it would not have been possible to train the AlexNet algorithm or the Meow Generator, which in turn served as the template for subsequent AIs with far-reaching economic, political, and military potential.8

    Just as in the early nineteenth century the effort to build railways was pioneered by private entrepreneurs, so in the early twenty-first century private corporations were the initial main competitors in the AI race. The executives of Google, Facebook, Alibaba, and Baidu saw the value of recognizing cat images before the presidents and generals did. The second eureka moment, when the presidents and generals caught on to what was happening, occurred in mid-March 2016. It was the aforementioned victory of Google’s AlphaGo over Lee Sedol. Whereas AlexNet’s achievement was largely ignored by politicians, AlphaGo’s triumph sent shock waves through government offices, especially in East Asia. In China and neighboring countries go is a cultural treasure and considered an ideal training for aspiring strategists and policy makers. In March 2016, or so the mythology of AI would have it, the Chinese government realized that the age of AI had begun.9

    It is little wonder that the Chinese government was probably the first to understand the full importance of what was happening. In the nineteenth century, China was late to appreciate the potential of the Industrial Revolution and was slow to adopt inventions like railroads and steamships. It consequently suffered what the Chinese call “the century of humiliations.” After having been the world’s greatest superpower for centuries, failing to adopt modern industrial technology brought China to its knees. It was repeatedly defeated in wars, partially conquered by foreigners, and thoroughly exploited by the powers that did understand railroads and steamships. The Chinese vowed never again to miss the train.

    In 2017, China’s government released its “New Generation Artificial Intelligence Plan,” which announced that “by 2030, China’s AI theories, technologies, and application should achieve world-leading levels, making China the world’s primary AI innovation center.”10 In the following years China poured enormous resources into AI so that by the early 2020s it is already leading the world in several AI-related fields and catching up with the United States in others.11

    Of course, the Chinese government wasn’t the only one that woke up to the importance of AI. On September 1, 2017, President Putin of Russia declared, “Artificial intelligence is the future, not only for Russia, but for all humankind.… Whoever becomes the leader in this sphere will become the ruler of the world.” In January 2018, Prime Minister Modi of India concurred that “the one who control [sic] the data will control the world.”12 In February 2019, President Trump signed an executive order on AI, saying that “the age of AI has arrived” and that “continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States.”13 The United States at the time was already the leader in the AI race, thanks largely to efforts of visionary private entrepreneurs. But what began as a commercial competition between corporations was turning into a match between governments, or perhaps more accurately, into a race between competing teams, each made of one government and several corporations. The prize for the winner? World domination.

    DATA COLONIALISM

    In the sixteenth century, when Spanish, Portuguese, and Dutch conquistadors were building the first global empires in history, they came with sailing ships, horses, and gunpowder. When the British, Russians, and Japanese made their bids for hegemony in the nineteenth and twentieth centuries, they relied on steamships, locomotives, and machine guns. In the twenty-first century, to dominate a colony, you no longer need to send in the gunboats. You need to take out the data. A few corporations or governments harvesting the world’s data could transform the rest of the globe into data colonies—territories they control not with overt military force but with information.14

    Imagine a situation—in twenty years, say—when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel, and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony? What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?

    Such a situation can lead to a new kind of data colonialism in which control of data is used to dominate faraway colonies. Mastery of AI and data could also give the new empires control of people’s attention. As we have already discussed, in the 2010s American social media giants like Facebook and YouTube upended the politics of distant countries like Myanmar and Brazil in pursuit of profit. Future digital empires may do something similar for political interests.

    Fears of psychological warfare, data colonialism, and loss of control over their cyberspace have led many countries to already block what they see as dangerous apps. China has banned Facebook, YouTube, and many other Western social media apps and websites. Russia has banned almost all Western social media apps as well as some Chinese ones. In 2020, India banned TikTok, WeChat, and numerous other Chinese apps on the grounds that they were “prejudicial to sovereignty and integrity of India, defense of India, security of state and public order.”15 The United States has been debating whether to ban TikTok—concerned that the app might be serving Chinese interests—and as of 2023 it is illegal to use it on the devices of almost all federal employees, state employees, and government contractors.16 Lawmakers in the U.K., New Zealand, and other countries have also expressed concerns over TikTok.17 Numerous other governments, from Iran to Ethiopia, have blocked various apps like Facebook, Twitter, YouTube, Telegram, and Instagram.

    Data colonialism could also manifest itself in the spread of social credit systems. What might happen, for example, if a dominant player in the global digital economy decides to establish a social credit system that harvests data anywhere it can and scores not only its own nationals but people throughout the world? Foreigners couldn’t just shrug off their score, because it might affect them in numerous ways, from buying flight tickets to applying for visas, scholarships, and jobs. Just as tourists use the global scores given by foreign corporations like Tripadvisor and Airbnb to evaluate restaurants and vacation homes even in their own country, and just as people throughout the world use the U.S. dollar for commercial transactions, so people everywhere might begin to use a Chinese or an American social credit score for local social interactions.

    Becoming a data colony will have economic as well as political and social consequences. In the nineteenth and twentieth centuries, if you were a colony of an industrial power like Belgium or Britain, it usually meant that you provided raw materials, while the cutting-edge industries that made the biggest profits remained in the imperial hub. Egypt exported cotton to Britain and imported high-end textiles. Malaya provided rubber for tires; Coventry made the cars.18

    Something analogous is likely to happen with data colonialism. The raw material for the AI industry is data. To produce AI that recognizes images, you need cat photos. To produce the trendiest fashion, you need data on fashion trends. To produce autonomous vehicles, you need data about traffic patterns and car accidents. To produce health-care AI, you need data about genes and medical conditions. In a new imperial information economy, raw data will be harvested throughout the world and will flow to the imperial hub. There the cutting-edge technology will be developed, producing unbeatable algorithms that know how to identify cats, predict fashion trends, drive autonomous vehicles, and diagnose diseases. These algorithms will then be exported back to the data colonies. Data from Egypt and Malaysia might make a corporation in San Francisco or Beijing rich, while people in Cairo and Kuala Lumpur remain poor, because neither the profits nor the power is distributed back.

    The nature of the new information economy might make the imbalance between imperial hub and exploited colony worse than ever. In ancient times land—rather than information—was the most important economic asset. This precluded the overconcentration of all wealth and power in a single hub. As long as land was paramount, considerable wealth and power always remained in the hands of provincial landowners. A Roman emperor, for example, could put down one provincial revolt after another, but on the day after decapitating the last rebel chief, he had no choice but to appoint a new set of provincial landowners who might again challenge the central power. In the Roman Empire, although Italy was the seat of political power, the richest provinces were in the eastern Mediterranean. It was impossible to transport the fertile fields of the Nile valley to the Italian Peninsula.19 Eventually the emperors abandoned the city of Rome to the barbarians and moved the seat of political power to the rich east, to Constantinople.

    During the Industrial Revolution machines became more important than land. Factories, mines, railroad lines, and electrical power stations became the most valuable assets. It was somewhat easier to concentrate these kinds of assets in one place. The British Empire could centralize industrial production in its home islands, extract raw materials from India, Egypt, and Iraq, and sell them finished goods made in Birmingham or Belfast. Unlike in the Roman Empire, Britain was the seat of both political and economic power. But physics and geology still put natural limits on this concentration of wealth and power. The British couldn’t move every cotton mill from Calcutta to Manchester, nor shift the oil wells from Kirkuk to Yorkshire.

    Information is different. Unlike cotton and oil, digital data can be sent from Malaysia or Egypt to Beijing or San Francisco at almost the speed of light. And unlike land, oil fields, or textile factories, algorithms don’t take up much space. Consequently, unlike industrial power, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.

    Indeed, AI makes it possible to concentrate in one place even the decisive assets of some traditional industries, like textile. In the nineteenth century, to control the textile industry meant to control sprawling cotton fields and huge mechanical production lines. In the twenty-first century, the most important asset of the textile industry is information rather than cotton or machinery. To beat the competitors, a garment producer needs information about the likes and dislikes of customers and the ability to predict or manufacture the next fashions. By controlling this type of information, high-tech giants like Amazon and Alibaba can monopolize even a very traditional industry like textile. In 2021, Amazon became the United States’ biggest single clothing retailer.20

    Moreover, as AI, robots, and 3-D printers automate textile production, millions of workers might lose their jobs, upending national economies and the global balance of power. What will happen to the economies and politics of Pakistan and Bangladesh, for example, when automation makes it cheaper to produce textiles in Europe? Consider that at present the textile sector provides employment to 40 percent of Pakistan’s total labor force and accounts for 84 percent of Bangladesh’s export earnings.21 As noted in chapter 7, while automation might make millions of textile workers redundant, it will probably create many new jobs, too. For instance, there might be a huge demand for coders and data analysts. But turning an unemployed factory hand into a data analyst demands a substantial up-front investment in retraining. Where would Pakistan and Bangladesh get the money to do that?

    AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more. Meanwhile, the value of unskilled laborers in left-behind countries will decline, and they will not have the resources to retrain their workforce, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.22 According to the global accounting firm PricewaterhouseCoopers, AI is expected to add $15.7 trillion to the global economy by 2030. But if current trends continue, it is projected that China and North America—the two leading AI superpowers—will together take home 70 percent of that money.23

    FROM WEB TO COCOON

    These economic and geopolitical dynamics could divide the world between two digital empires. During the Cold War, the Iron Curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the Silicon Curtain. The Silicon Curtain is made of code, and it passes through every smartphone, computer, and server in the world. The code on your smartphone determines on which side of the Silicon Curtain you live, which algorithms run your life, who controls your attention, and where your data flows.

    It is becoming difficult to access information across the Silicon Curtain, say between China and the United States, or between Russia and the EU. Moreover, the two sides are increasingly run on different digital networks, using different computer codes. Each sphere obeys different regulations and serves different purposes. In China, the most important aim of new digital technology is to strengthen the state and serve government policies. While private enterprises are given a certain amount of autonomy in developing and deploying AI tools, their economic activities are ultimately subservient to the government’s political goals. These political goals also justify a relatively high level of surveillance, both online and off-line. This means, for example, that though Chinese citizens and authorities do care about people’s privacy, China is already far ahead of the United States and other Western countries in developing and deploying social credit systems that encompass the whole of people’s lives.24

    In the United States, the government plays a more limited role. Private enterprises lead the development and deployment of AI, and the ultimate goal of many new AI tools is to enrich the tech giants rather than to strengthen the American state or the current administration. Indeed, in many cases governmental policies are themselves shaped by powerful business interests. But the U.S. system does offer greater protection for citizens’ privacy. While American corporations aggressively gather information on people’s online activities, they are much more restricted in surveilling people’s offline lives. There is also widespread rejection of the ideas behind all-embracing social credit systems.25

    These political, cultural, and regulatory differences mean that each sphere is using different software. In China you cannot use Google and Facebook, and you cannot access Wikipedia. In the United States few people use WeChat, Baidu, and Tencent. More important, the spheres aren’t mirror images of each other. It is not that the Chinese and Americans develop local versions of the same apps. Baidu isn’t the Chinese Google. Alibaba isn’t the Chinese Amazon. They have different goals, different digital architectures, and different impacts on people’s lives.26 These differences influence much of the world, since most countries rely on Chinese and American software rather than on local technology.

    Each sphere also uses different hardware like smartphones and computers. The United States pressures its allies and clients to avoid Chinese hardware, such as Huawei’s 5G infrastructure.27 The Trump administration blocked an attempt by the Singaporean corporation Broadcom to buy the leading American producer of computer chips, Qualcomm. They feared foreigners might insert back doors into the chips or would prevent the U.S. government from inserting its own back doors there.28 In 2022, the Biden administration placed strict limits on trade in high-performance computing chips necessary for the development of AI. U.S. companies were forbidden to export such chips to China, or to provide China with the means to manufacture or repair them. The restrictions have subsequently been tightened further, and the ban was expanded to include other nations such as Russia and Iran.29 While in the short term this hampers China in the AI race, in the long term it will push China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.30

    The two digital spheres may drift further and further apart. Chinese software would talk only with Chinese hardware and Chinese infrastructure, and the same would happen on the other side of the Silicon Curtain. Since digital code influences human behavior, and human behavior in turn shapes digital code, the two sides may well be moving along different trajectories that will make them more and more different not just in their technology but in their cultural values, social norms, and political structures. After generations of convergence, humanity could find itself at a crucial point of divergence.31 For centuries, new information technologies fueled the process of globalization and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality. While the web has been our main metaphor in recent decades, the future might belong to cocoons.

    THE GLOBAL MIND-BODY SPLIT

    The division into separate information cocoons could lead not just to economic rivalries and international tensions but also to the development of very different cultures, ideologies, and identities. Guessing future cultural and ideological developments is usually a fool’s errand. It is far more difficult than predicting economic and geopolitical developments. How many Romans or Jews in the days of Tiberius could have anticipated that a splinter Jewish sect would eventually take over the Roman Empire and that the emperors would abandon Rome’s old gods to worship an executed Jewish rabbi?

    It would have been even more difficult to foresee the directions in which various Christian sects would develop and the momentous impact of their ideas and conflicts on everything from politics to sexuality. When Jesus was asked about paying taxes to Tiberius’s government and answered, “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s” (Matthew 22:21), nobody could imagine the impact his response would have on the separation of church and state in the American republic two millennia later. And when Saint Paul wrote to the Christians in Rome, “I myself in my mind am a slave to God’s law, but in my sinful flesh a slave to the law of sin” (Romans 7:25), who could have foreseen the repercussions this would have on schools of thought ranging from Cartesian philosophy to queer theory?

    Despite these difficulties, it is important to try to imagine future cultural developments, in order to alert ourselves to the fact that the AI revolution and the formation of rival digital spheres are likely to change more than just our jobs and political structures. The following paragraphs contain some admittedly ambitious speculation, so please bear in mind that my goal is not to accurately foretell cultural developments but merely to draw attention to the likelihood that profound cultural shifts and conflicts await us.

    One possible development with far-reaching consequences is that different digital cocoons might adopt incompatible approaches to the most fundamental questions of human identity. For thousands of years, many religious and cultural conflicts—for example, between rival Christian sects, between Hindus and Buddhists, and between Platonists and Aristotelians—were fueled by disagreements about the mind-body problem. Are humans a physical body, or a nonphysical mind, or perhaps a mind trapped inside a body? In the twenty-first century, the computer network might supercharge the mind-body problem and turn it into a cause for major personal, ideological, and political conflicts.

    To appreciate the political ramifications of the mind-body problem, let’s briefly revisit the history of Christianity. Many of the earliest Christian sects, influenced by Jewish thinking, believed in the Old Testament idea that humans are embodied beings and that the body plays a crucial role in human identity. The book of Genesis said God created humans as physical bodies, and almost all books of the Old Testament assume that humans can exist only as physical bodies. With a few possible exceptions, the Old Testament doesn’t mention the possibility of a bodiless existence after death, in heaven or hell. When the ancient Jews fantasized about salvation, they imagined it to mean an earthly kingdom of material bodies. In the time of Jesus, many Jews believed that when the Messiah finally comes, the bodies of the dead would come back to life, here on earth. The Kingdom of God, established by the Messiah, was supposed to be a material kingdom, with trees and stones and flesh-and-blood bodies.32

    This was also the view of Jesus himself and the first Christians. Jesus promised his followers that soon the Kingdom of God would be built here on earth and they would inhabit it in their material bodies. When Jesus died without fulfilling his promise, his early followers came to believe that he was resurrected in the flesh and that when the Kingdom of God finally materialized on earth, they too would be resurrected in the flesh. The church father Tertullian (160–240 CE) wrote that “the flesh is the very condition on which salvation hinges,” and the catechism of the Catholic Church, citing the doctrines adopted at the Second Council of Lyon in 1274, states, “We believe in God who is creator of the flesh; we believe in the Word made flesh in order to redeem the flesh; we believe in the resurrection of the flesh, the fulfillment of both the creation and the redemption of the flesh.… We believe in the true resurrection of this flesh that we now possess.”33

    Despite such seemingly unequivocal statements, we saw that Saint Paul already had his doubts about the flesh, and by the fourth century CE, under Greek, Manichaean, and Persian influences, some Christians had drifted toward a dualistic approach. They came to think of humans as consisting of a good immaterial soul trapped inside an evil material body. They didn’t fantasize about being resurrected in the flesh. Just the opposite. Having been released by death from its abominable material prison, why would the pure soul ever want to get back in? Christians accordingly began to believe that after death the soul is liberated from the body and exists forever in an immaterial place completely beyond the physical realm—which is the standard belief among Christians today, notwithstanding what Tertullian and the Second Council of Lyon said.34

    But Christianity couldn’t completely abandon the old Jewish view that humans are embodied beings. After all, Christ appeared on earth in the flesh. His body was nailed to the cross, on which he experienced excruciating pain. For two thousand years, Christian sects therefore fought each other—sometimes with words, sometimes with swords—over the exact relations between soul and body. The fiercest arguments focused on Christ’s own body. Was he material? Was he purely spiritual? Did he perhaps have a nonbinary nature, being both human and divine at the same time?

    The different approaches to the mind-body problem influenced how people treated their own bodies. Saints, hermits, and monks made breathtaking experiments in pushing the human body to its limits. Just as Christ allowed his body to be tortured on the cross, so these “athletes of Christ” allowed lions and bears to rip them apart while their souls rejoiced in divine ecstasy. They wore hair shirts, fasted for weeks, or stood for years on a pillar—like the famous Simeon who allegedly stood for about forty years on top of a pillar near Aleppo.35

    Other Christians took the opposite approach, believing that the body didn’t matter at all. The only thing that mattered was faith. This idea was taken to extremes by Protestants like Martin Luther, who formulated the doctrine of sola fide: only faith. After living as a monk for about ten years, fasting and torturing his body in various ways, Luther despaired of these bodily exercises. He reasoned that no bodily self-torments could force God to redeem him. Indeed, thinking he could win his own salvation by torturing his body was the sin of pride. Luther therefore disrobed, married a former nun, and told his followers that to be good Christians, the only thing they needed was to have complete faith in Christ.36

    These ancient theological debates about mind and body may seem utterly irrelevant to the AI revolution, but they have in fact been resurrected by twenty-first-century technologies. What is the relationship between our physical body and our online identities and avatars? What is the relation between the offline world and cyberspace? Suppose I spend most of my waking hours sitting in my room in front of a screen, playing online games, forming virtual relationships, and even working remotely. I hardly venture out even to eat. I just order takeout. If you are like ancient Jews and the first Christians, you would pity me and conclude that I must be living in a delusion, losing touch with the reality of physical spaces and flesh-and-blood bodies. But if your thinking is closer to that of Luther and many later Christians, you might think I am liberated. By shifting most of my activities and relationships online, I have released myself from the limited organic world of debilitating gravity and corrupt bodies and can enjoy the unlimited possibilities of a digital world, which is potentially liberated from the laws of biology and even physics. I am free to roam a much vaster and more exciting space and to explore new aspects of my identity.

    An increasingly important question is whether people can adopt any virtual identity they like, or should their identity be constrained by their biological body? If we follow the Lutheran position of sola fide, the biological body isn’t of much importance. To adopt a certain online identity, the only thing that matters is what you believe. This debate can have far-reaching consequences not just for human identity but for our attitude to the world as a whole. A society that understands identities in terms of biological bodies should also care more about material infrastructure like sewage pipes and about the ecosystem that sustains our bodies. It will see the online world as an auxiliary of the offline world that can serve various useful purposes but can never become the central arena of our lives. Its aim would be to create an ideal physical and biological realm—the Kingdom of God on earth. In contrast, a society that downplays biological bodies and focuses on online identities may well seek to create an immersive Kingdom of God in cyberspace while discounting the fate of mere material things like sewage pipes and rain forests.

    This debate could shape attitudes not only toward organisms but also toward digital entities. As long as society defines identity by focusing on physical bodies, it is unlikely to view AIs as persons. But if society gives less importance to physical bodies, then even AIs that lack any corporeal manifestations may be accepted as legal persons enjoying various rights.

    Throughout history, diverse cultures have given diverse answers to the mind-body problem. A twenty-first-century controversy about the mind-body problem could result in cultural and political splits more consequential even than the split between Jews and Christians or between Catholics and Protestants. What happens, for example, if the American sphere discounts the body, defines humans by their online identity, recognizes AIs as persons, and downplays the importance of the ecosystem, whereas the Chinese sphere adopts opposite positions? Current disagreements about violations of human rights or adherence to ecological standards will look minuscule in comparison. The Thirty Years’ War—arguably the most devastating war in European history—was fought at least in part because Catholics and Protestants couldn’t agree on doctrines like sola fide and on whether Christ was divine, human, or nonbinary. Might future conflicts start because of an argument about AI rights and the nonbinary nature of avatars?

    As noted, these are all wild speculations, and in all likelihood actual cultures and ideologies will develop in different—and perhaps even wilder—directions. But it is probable that within a few decades the computer network will cultivate new human and nonhuman identities that make little sense to us. And if the world will be divided into two rival digital cocoons, the identities of entities in one cocoon might be unintelligible to the inhabitants of the other.

    FROM CODE WAR TO HOT WAR

    While China and the United States are currently the front-runners in the AI race, they are not alone. Other countries or blocs, such as the EU, India, Brazil, and Russia, may try to create their own digital spheres, each influenced by different political, cultural, and religious traditions.37 Instead of being divided between just two global empires, the world might be divided among a dozen empires. It is unclear whether this will somewhat alleviate or only exacerbate the imperial competition.

    The more the new empires compete against one another, the greater the danger of armed conflict. The Cold War between the United States and the U.S.S.R. never escalated into a direct military confrontation largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.

    First, cyber weapons are much more versatile than nuclear bombs. Cyber weapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections, or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target. Consequently, at times it is hard to know if an attack even occurred or who launched it. If a database is hacked or sensitive equipment is destroyed, it’s hard to be sure whom to blame. The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it. Rival countries like Israel and Iran or the United States and Russia have been trading cyber blows for years, in an undeclared but escalating war.38 This is becoming the new global norm, amplifying international tensions and pushing countries to cross one red line after another.

    A second crucial difference concerns predictability. The Cold War was like a hyperrational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small. Cyber warfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses, and malwares. Nobody can be certain whether their own weapons would actually work when called upon. Would Chinese missiles fire when the order is given, or perhaps the Americans have hacked them or the chain of command? Would American aircraft carriers function as expected, or would they perhaps shut down mysteriously or sail around in circles?39

    Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself—rightly or wrongly—that it can launch a successful first strike and avoid massive retaliation. Even worse, if one side thinks it has such an opportunity, the temptation to launch a first strike could become irresistible, because one never knows how long the window of opportunity will remain open. Game theory posits that the most dangerous situation in an arms race is when one side feels it has an advantage but that this advantage is slipping away.40

    Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the nineteenth and twentieth centuries exploited and repressed their colonies, and it would be foolhardy to expect the new digital empires to behave much better. Moreover, as noted earlier, if the world is divided into rival empires, humanity is unlikely to cooperate effectively to overcome the ecological crisis or to regulate AI and other disruptive technologies like bioengineering.

    THE GLOBAL BOND

    Of course, no matter whether the world is divided between a few digital empires, remains a more diverse community of two hundred nation-states, or is split along altogether different and unforeseen lines, cooperation is always an option. Among humans, the precondition for cooperation isn’t similarity; it is the ability to exchange information. As long as we are able to converse, we might find some shared story that can bring us closer. This, after all, is what made Homo sapiens the dominant species on the planet.

    Just as different and even rival families can cooperate within a tribal network, and competing tribes can cooperate within a national network, so opposing nations and empires can cooperate within a global network. The stories that make such cooperation possible do not eliminate our differences; rather, they enable us to identify shared experiences and interests, which offer a common framework for thought and action.

    A large part of what nevertheless makes global cooperation difficult is the misguided notion that it requires abolishing all cultural, social, and political differences. Populist politicians often argue that if the international community agrees on a common story and on universal norms and values, this will destroy the independence and unique traditions of their own nation.41 This position was unabashedly distilled in 2015 by Marine Le Pen—leader of France’s National Front party—in an election speech in which she declared, “We have entered a new two-partyism. A two-partyism between two mutually exclusive conceptions that will from now on structure our political life. The cleavage no longer separates left and right, but globalists and patriots.”42 In August 2020, President Trump described his guiding ethos thus: “We have rejected globalism and embraced patriotism.”43

    Luckily, this binary position is mistaken in its basic assumption. Global cooperation and patriotism are not mutually exclusive. For patriotism isn’t about hating foreigners. It is about loving our compatriots. And there are many situations when, in order to take care of our compatriots, we need to cooperate with foreigners. COVID-19 provided us with one obvious example. Pandemics are global events, and without global cooperation it is hard to contain them, let alone prevent them. When a new virus or a mutant pathogen appears in one country, it puts all other countries in danger. Conversely, the biggest advantage of humans over pathogens is that we can cooperate in ways that pathogens cannot. Doctors in Germany and Brazil can alert one another to new dangers, give each other good advice, and work together to discover better treatments.

    If German scientists invent a vaccine against some new disease, how should Brazilians react to this German achievement? One option is to reject the foreign vaccine and wait until Brazilian scientists develop a Brazilian vaccine. That, however, would be not just foolish; it would be anti-patriotic. Brazilian patriots should want to use any available vaccine to help their compatriots, no matter where the vaccine was developed. In this situation, cooperating with foreigners is the patriotic thing to do. The threat of losing control of AIs is an analogous situation in which patriotism and global cooperation must go together. An out-of-control AI, just like an out-of-control virus, puts in danger humans in every nation. No human collective—whether a tribe, a nation, or the entire species—stands to benefit from letting power shift from humans to algorithms.

    Contrary to what populists argue, globalism doesn’t mean establishing a global empire, abandoning national loyalties, or opening borders to unlimited immigration. In fact, global cooperation means two far more modest things: first, a commitment to some global rules. These rules don’t deny the uniqueness of each nation and the loyalty people should owe their nation. They just regulate the relations between nations. A good model is the World Cup. The World Cup is a competition between nations, and people often show fierce loyalty to their national team. At the same time, the World Cup is an amazing display of global agreement. Brazil cannot play football against Germany unless Brazilians and Germans first agree on the same set of rules for the game. That’s globalism in action.

    The second principle of globalism is that sometimes—not always, but sometimes—it is necessary to prioritize the long-term interests of all humans over the short-term interests of a few. For example, in the World Cup, all national teams agree not to use performance-enhancing drugs, because everybody realizes that if they go down that path, the World Cup would eventually devolve into a competition between biochemists. In other fields where technology is a game changer, we should similarly strive to balance national and global interests. Nations will obviously continue to compete in the development of new technology, but sometimes they should agree to limit the development and deployment of dangerous technologies like autonomous weapons and manipulative algorithms—not purely out of altruism, but for their own self-preservation.

    THE HUMAN CHOICE

    Forging and keeping international agreements on AI will require major changes in the way the international system functions. While we have experience in regulating dangerous technologies like nuclear and biological weapons, the regulation of AI will demand unprecedented levels of trust and self-discipline, for two reasons. First, it is easier to hide an illicit AI lab than an illicit nuclear reactor. Second, AIs have a lot more dual civilian-military usages than nuclear bombs. Consequently, despite signing an agreement that bans autonomous weapon systems, a country could build such weapons secretly, or camouflage them as civilian products. For example, it might develop fully autonomous drones for delivering mail and spraying fields with pesticides that with a few minor modifications could also deliver bombs and spray people with poison. Consequently, governments and corporations will find it more difficult to trust that their rivals are really abiding by the agreed regulations—and to withstand the temptation to themselves waive the rules.44 Can humans develop the necessary levels of trust and self-discipline? Do changes like those have any precedent in history?

    Many people are skeptical of the human capacity to change, and in particular of the human ability to renounce violence and forge stronger global bonds. For example, “realist” thinkers like Hans Morgenthau and John Mearsheimer have argued that an all-out competition for power is the inescapable condition of the international system. Mearsheimer explains that “my theory sees great powers as concerned mainly with figuring out how to survive in a world where there is no agency to protect them from each other” and that “they quickly realize that power is the key to their survival.” Mearsheimer then asks “how much power states want” and answers that all states want as much power as they can get, “because the international system creates powerful incentives for states to look for opportunities to gain power at the expense of rivals.” He concludes, “A state’s ultimate goal is to be the hegemon in the system.”45

    This grim view of international relations is akin to the populist and Marxist views of human relations, in that they all see humans as interested only in power. And they are all founded upon a deeper philosophical theory of human nature, which the primatologist Frans de Waal termed “veneer theory.” It argues that at heart humans are Stone Age hunters who cannot but see the world as a jungle where the strong prey upon the weak and where might makes right. For millennia, the theory goes, humans have tried to camouflage this unchanging reality under a thin and mutable veneer of myths and rituals, but we have never really broken free from the law of the jungle. Indeed, our myths and rituals are themselves a weapon used by the jungle’s top dogs to deceive and trap their inferiors. Those who don’t realize this are dangerously naive and will fall prey to some ruthless predator.46

    There are reasons to think, however, that “realists” like Mearsheimer have a selective view of historical reality and that the law of the jungle is itself a myth. As de Waal and many other biologists documented in numerous studies, real jungles—unlike the one in our imagination—are full of cooperation, symbiosis, and altruism displayed by countless animals, plants, fungi, and even bacteria. Eighty percent of all land plants, for example, rely on symbiotic relationships with fungi, and almost 90 percent of vascular plant families enjoy symbiotic relationships with microorganisms. If organisms in the rain forests of Amazonia, Africa, or India abandoned cooperation in favor of an all-out competition for hegemony, the rain forests and all their inhabitants would quickly die. That’s the law of the jungle.47

    As for Stone Age humans, they were gatherers as well as hunters, and there is no firm evidence that they had irrepressible warlike tendencies. While there are plenty of speculations, the first unambiguous evidence for organized warfare appears in the archaeological record only about thirteen thousand years ago, at the site of Jebel Sahaba in the Nile valley.48 Even after that date, the record of war is variable rather than constant. Some periods were exceptionally violent, whereas others were relatively peaceful. The clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation. A hundred thousand years ago, Sapiens could cooperate only at the level of bands. Over the millennia, we have found ways to create communities of strangers, first on the level of tribes and eventually on the level of religions, trade networks, and states. Realists should note that states are not the fundamental particles of human reality, but rather the product of arduous processes of building trust and cooperation. If humans were interested only in power, they could never have created states in the first place. Sure, conflicts have always remained a possibility—both between and within states—but they have never been an inescapable destiny.

    War’s intensity depends not on an immutable human nature but on shifting technological, economic, and cultural factors. As these factors change, so does war, as was clearly demonstrated in the post-1945 era. During that period, the development of nuclear technology greatly increased the potential price of war. From the 1950s onward it became clear to the superpowers that even if they could somehow win an all-out nuclear exchange, their victory would likely be a suicidal achievement, involving the sacrifice of most of their population.

    Simultaneously, the ongoing shift from a material-based economy to a knowledge-based economy decreased the potential gains of war. While it has remained feasible to conquer rice paddies and gold mines, by the late twentieth century these were no longer the main sources of economic wealth. The new leading industries, like the semiconductor sector, came to be based on technical skills and organizational know-how that could not be acquired by military conquest. Accordingly, some of the greatest economic miracles of the post-1945 era were achieved by the defeated powers of Germany, Italy, and Japan, and by countries like Sweden and Singapore that eschewed military conflicts and imperial conquests.

    Finally, the second half of the twentieth century also witnessed a profound cultural transformation, with the decline of age-old militaristic ideals. Artists increasingly focused on depicting the senseless horrors of combat rather than on glorifying its architects, and politicians came to power dreaming more of domestic reforms than of foreign conquests. Due to these technological, economic, and cultural changes, in the decades following the end of World War II most governments stopped seeing wars of aggression as an appealing tool to advance their interests, and most nations stopped fantasizing about conquering and destroying their neighbors. While civil wars and insurgencies have remained commonplace, the post-1945 world has seen a significant decline in full-scale wars between states, and most notably in direct armed conflicts between great powers.49

    Numerous statistics attest to the decline of war in this post-1945 era, but perhaps the clearest evidence is found in state budgets. For most of recorded history, the military was the number one item on the budget of every empire, sultanate, kingdom, and republic. Governments spent little on health care and education, because most of their resources were consumed by paying soldiers, constructing walls, and building warships. When the bureaucrat Chen Xiang examined the annual budget of the Chinese Song dynasty for the year 1065, he found that out of sixty million minqian (currency unit), fifty million (83 percent) were consumed by the military. Another official, Cai Xiang, wrote, “If [we] split [all the property] under Heaven into six shares, five shares are spent on the military, and one share is spent on temple offerings and state expenses. How can the country not be poor and the people not in difficulty?”50

    The same situation prevailed in many other polities, from ancient times to the modern era. The Roman Empire spent about 50–75 percent of its budget on the military,51 and the figure was about 60 percent in the late seventeenth-century Ottoman Empire.52 Between 1685 and 1813 the share of the military in British government expenditure averaged 75 percent.53 In France, military expenditure between 1630 and 1659 varied between 89 percent and 93 percent of the budget, remained above 30 percent for much of the eighteenth century, and dropped to a low of 25 percent in 1788 only due to the financial crisis that led to the French Revolution. In Prussia, from 1711 to 1800 the military share of the budget never fell below 75 percent and occasionally reached as high as 91 percent.54 During the relatively peaceful years of 1870–1913, the military ate up an average of 30 percent of the state budgets of the major powers of Europe, as well as Japan and the United States, while smaller powers like Sweden were spending even more.55 When war broke out in 1914, military budges skyrocketed. During their involvement in World War I, French military expenditure averaged 77 percent of the budget; in Germany it was 91 percent, in Russia 48 percent, in the U.K. 49 percent, and in the United States 47 percent. During World War II, the U.K. figure rose to 69 percent and the U.S. figure to 71 percent.56 Even during the détente years of the 1970s, Soviet military expenditure still amounted to 32.5 percent of the budget.57

    State budgets in more recent decades make for far more hopeful reading material than any pacifist tract ever composed. In the early twenty-first century, the worldwide average government expenditure on the military has been only around 7 percent of the budget, and even the dominant superpower of the United States spent only around 13 percent of its annual budget to maintain its military hegemony.58 Since most people no longer lived in terror of external invasion, governments could invest far more money in welfare, education, and health care. Worldwide average expenditure on health care in the early twenty-first century has been about 10 percent of the government budget, or about 1.4 times the defense budget.59 For many people in the 2010s, the fact that the health-care budget was bigger than the military budget was unremarkable. But it was the result of a major change in human behavior, and one that would have sounded impossible to most previous generations.

    The decline of war didn’t result from a divine miracle or from a metamorphosis in the laws of nature. It resulted from humans changing their own laws, myths, and institutions and making better decisions. Unfortunately, the fact that this change has stemmed from human choice also means that it is reversible. Technology, economics, and culture are ever changing. In the early 2020s, more leaders are again dreaming of martial glory, armed conflicts are on the rise,60 and military budgets are increasing.61

    A critical threshold was crossed in early 2022. Russia had already destabilized the global order by mounting a limited invasion of Ukraine in 2014 and occupying Crimea and other regions in eastern Ukraine. But on February 24, 2022, Vladimir Putin launched an all-out assault aimed to conquer the whole of Ukraine and extinguish Ukrainian nationhood. To prepare and sustain this attack, Russia increased its military budget far beyond the global average of 7 percent. Exact figures are difficult to determine, because many aspects of the Russian military budget are shrouded in secrecy, but the best estimates put the figure somewhere in the vicinity of 30 percent, and it may even be higher.62 The Russian onslaught in turn has forced not only Ukraine but also many other European nations to increase their own military budgets.63 The reemergence of militaristic cultures in places like Russia, and the development of unprecedented cyber weapons and autonomous armaments throughout the world, could result in a new era of war, worse than anything we have seen before.

    The decisions leaders like Putin make on matters of war and peace are shaped by their understanding of history. Which means that just as overly optimistic views of history could be dangerous illusions, overly pessimistic views could become destructive self-fulfilling prophecies. Prior to his all-out 2022 attack on Ukraine, Putin had often expressed his historical conviction that Russia is trapped in an endless struggle with foreign enemies, and that the Ukrainian nation is a fabrication by these enemies. In June 2021, he published a fifty-three-hundred-word essay titled “On the Historical Unity of Russians and Ukrainians” in which he denied the existence of Ukraine as a nation and argued that foreign powers have repeatedly tried to weaken Russia by fostering Ukrainian separatism. While professional historians reject these claims, Putin seems to genuinely believe in this historical narrative.64 Putin’s historical convictions led him in 2022 to prioritize the conquest of Ukraine over other policy goals, such as providing Russian citizens with better health care or spearheading a global initiative to regulate AI.65

    If leaders like Putin believe that humanity is trapped in an unforgiving dog-eat-dog world, that no profound change is possible in this sorry state of affairs, and that the relative peace of the late twentieth century and early twenty-first century was an illusion, then the only choice remaining is whether to play the part of predator or prey. Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorize for their history exams. These leaders should be reminded, however, that in the era of AI the alpha predator is likely to be AI.

    Perhaps, though, we have more choices available to us. I cannot predict what decisions people will make in the coming years, but as a historian I do believe in the possibility of change. One of the chief lessons of history is that many of the things that we consider natural and eternal are, in fact, man-made and mutable. Accepting that conflict is not inevitable, however, should not make us complacent. Just the opposite. It places a heavy responsibility on all of us to make good choices. It implies that if human civilization is consumed by conflict, we cannot blame it on any law of nature or any alien technology. It also implies that if we make the effort, we can create a better world. This isn’t naïveté; it’s realism. Every old thing was once new. The only constant of history is change.

    Epilogue

    In late 2016, a few months after AlphaGo defeated Lee Sedol and as Facebook algorithms were stoking dangerous racist sentiments in Myanmar, I published Homo Deus. Though my academic training had been in medieval and early modern military history, and though I have no background in the technical aspects of computer science, I suddenly found myself, post-publication, with the reputation of an AI expert. This opened the doors to the offices of scientists, entrepreneurs, and world leaders interested in AI and afforded me a fascinating, privileged look into the complex dynamics of the AI revolution.

    It turned out that my previous experience researching topics such as English strategy in the Hundred Years’ War and studying paintings from the Thirty Years’ War1 wasn’t entirely unrelated to this new field. In fact, it gave me a rather unique historical perspective on the events unfolding rapidly in AI labs, corporate offices, military headquarters, and presidential palaces. Over the past eight years I have had numerous public and private discussions about AI, particularly about the dangers it poses, and with each passing year the tone has become more urgent. Conversations that in 2016 felt like idle philosophical speculations about a distant future had, by 2024, acquired the focused intensity of an emergency room.

    I am neither a politician nor a businessperson and have little talent for what these vocations demand. But I do believe that an understanding of history can be useful in gaining a better grasp of present-day technological, economic, and cultural developments—and, more urgently, in changing our political priorities. Politics is largely a matter of priorities. Should we cut the health care budget and spend more on defense? Is our more pressing security threat terrorism or climate change? Do we focus on regaining a lost patch of ancestral territory or concentrate on creating a common economic zone with the neighbors? Priorities determine how citizens vote, what businesspeople are concerned about, and how politicians try to make a name for themselves. And priorities are often shaped by our understanding of history.

    While so-called realists dismiss historical narratives as propaganda ploys deployed to advance state interests, in fact it is these narratives that define state interests in the first place. As we saw in our discussion of Clausewitz’s theory of war, there is no rational way to define ultimate goals. The state interests of Russia, Israel, Myanmar, or any other country can never be deduced from some mathematical or physical equation; they are always the supposed moral of a historical narrative.

    It is therefore hardly surprising that politicians all over the world spend a lot of time and effort recounting historical narratives. The above-mentioned example of Vladimir Putin is hardly exceptional in this respect. In 2005 the UN secretary-general, Kofi Annan, had his first meeting with General Than Shwe, the then dictator of Myanmar. Annan was advised to speak first, so as to prevent the general from monopolizing the conversation, which was meant to last only twenty minutes. But Than Shwe struck first and held forth for nearly an hour on the history of Myanmar, hardly giving the UN secretary-general any chance to speak.2 In May 2011 the Israeli prime minister, Benjamin Netanyahu did something similar in the White House, when he met the U.S. president, Barack Obama. After Obama’s brief introductory remarks, Netanyahu subjected the president to a long lecture about the history of Israel and the Jewish people, treating Obama as if he were his student.3 Cynics might argue that Than Shwe and Netanyahu hardly cared about the facts of history and were deliberately distorting them in order to achieve some political goal. But these political goals were themselves the product of deeply held convictions about history.

    In my own conversations on AI with politicians, as well as tech entrepreneurs, history has often emerged as a central theme. Some of my interlocutors painted a rosy picture of history and were accordingly enthusiastic about AI. They argued that more information has always meant more knowledge and that by increasing our knowledge, every previous information revolution has greatly benefited humankind. Didn’t the print revolution lead to the scientific revolution? Didn’t newspapers and radio lead to the rise of modern democracy? The same, they said, would happen with AI. Others had a dimmer perspective, but nevertheless expressed hope that humankind will somehow muddle through the AI revolution, just as we muddled through the Industrial Revolution.

    Neither view offered me much solace. For reasons explained in previous chapters, I find such historical comparisons to the print revolution and the Industrial Revolution distressing, especially coming from people in positions of power, whose historical vision is informing the decisions that shape our future. These historical comparisons underestimate both the unprecedented nature of the AI revolution and the negative aspects of previous revolutions. The immediate results of the print revolution included witch hunts and religious wars alongside scientific discoveries, while newspapers and radio were exploited by totalitarian regimes as well as by democracies. As for the Industrial Revolution, adapting to it involved catastrophic experiments such as imperialism and Nazism. If the AI revolution leads us to similar kinds of experiments, can we really be certain we will muddle through again?

    My goal with this book is to provide a more accurate historical perspective on the AI revolution. This revolution is still in its infancy, and it is notoriously difficult to understand momentous developments in real time. It is hard, even now, to assess the meaning of events in the 2010s like AlphaGo’s victory or Facebook’s involvement in the anti-Rohingya campaign. The meaning of events of the early 2020s is even more obscure. Yet by expanding our horizons to look at how information networks developed over thousands of years, I believe it is possible to gain some insight on what we’re living through today.

    One lesson is that the invention of new information technology is always a catalyst for major historical changes, because the most important role of information is to weave new networks rather than represent preexisting realities. By recording tax payments, clay tablets in ancient Mesopotamia helped forge the first city-states. By canonizing prophetic visions, holy books spread new kinds of religions. By swiftly disseminating the words of presidents and citizens, newspapers and telegraphs opened the door to both large-scale democracy and large-scale totalitarianism. The information thus recorded and distributed was sometimes true, often false, but it invariably created new connections between larger numbers of people.

    We are used to giving political, ideological, and economic interpretations to historical revolutions such as the rise of the first Mesopotamian city-states, the spread of Christianity, the American Revolution, and the Bolshevik Revolution. But to gain a deeper understanding, we should also view them as revolutions in the way information flows. Christianity was obviously different from Greek polytheism in many of its myths and rites, yet it was also different in the importance it gave to a single holy book and the institution entrusted with interpreting it. Consequently, whereas each temple of Zeus was a separate entity, each Christian church became a node in a unified network.4 Information flowed differently among the followers of Christ than among the worshippers of Zeus. Similarly, Stalin’s U.S.S.R. was a different kind of information network from Peter the Great’s empire. Stalin enacted many unprecedented economic policies, but what enabled him to do it is that he headed a totalitarian network in which the center accumulated enough information to micromanage the lives of hundreds of millions of people. Technology is rarely deterministic, and the same technology can be used in very different ways. But without the invention of technologies like the book and the telegraph, the Christian Church and the Stalinist apparatus would never have been possible.

    This historical lesson should strongly encourage us to pay more attention to the AI revolution in our current political debates. The invention of AI is potentially more momentous than the invention of the telegraph, the printing press, or even writing, because AI is the first tool that is capable of making decisions and generating ideas by itself. Whereas printing presses and parchment scrolls offered new means for connecting people, AIs are full-fledged members in our information networks. In coming years, all information networks—from armies to religions—will gain millions of new AI members, who will process data very differently than humans. These new members will make alien decisions and generate alien ideas—that is, decisions and ideas that are unlikely to occur to humans. The addition of so many alien members is bound to change the shape of armies, religions, markets, and nations. Entire political, economic, and social systems might collapse, and new ones will take their place. That’s why AI should be a matter of utmost urgency even to people who don’t care about technology and who think the most important political questions concern the survival of democracy or the fair distribution of wealth.

    This book has juxtaposed the discussion of AI with the discussion of sacred canons like the Bible, because we are now at the critical moment of AI canonization. When church fathers like Bishop Athanasius decided to include 1 Timothy in the biblical dataset while excluding the Acts of Paul and Thecla, they shaped the world for millennia. Billions of Christians down to the twenty-first century have formed their views of the world based on the misogynist ideas of 1 Timothy rather than on the more tolerant attitude of Thecla. Even today it is difficult to reverse course, because the church fathers chose not to include any self-correcting mechanisms in the Bible. The present-day equivalents of Bishop Athanasius are the engineers who write the initial code for AI, and who choose the dataset on which the baby AI is trained. As AI grows in power and authority, and perhaps becomes a self-interpreting holy book, so the decisions made by present-day engineers could reverberate down the ages.

    Studying history does more than just emphasize the importance of the AI revolution and of our decisions regarding AI. It also cautions us against two common but misleading approaches to information networks and information revolutions. On the one hand, we should beware of an overly naive and optimistic view. Information isn’t truth. Its main task is to connect rather than represent, and information networks throughout history have often privileged order over truth. Tax records, holy books, political manifestos, and secret police files can be extremely efficient in creating powerful states and churches, which hold a distorted view of the world and are prone to abuse their power. More information, ironically, can sometimes result in more witch hunts.

    There is no reason to expect that AI would necessarily break the pattern and privilege truth. AI is not infallible. What little historical perspective we have gained from the alarming events in Myanmar, Brazil, and elsewhere over the past decade indicates that in the absence of strong self-correcting mechanisms AIs are more than capable of promoting distorted worldviews, enabling egregious abuses of power, and instigating terrifying new witch hunts.

    On the other hand, we should also beware of swinging too far in the other direction and adopting an overly cynical view. Populists tell us that power is the only reality, that all human interactions are power struggles, and that information is merely a weapon we use to vanquish our enemies. This has never been the case, and there is no reason to think that AI will make it so in the future. While many information networks do privilege order over truth, no network can survive if it ignores truth completely. As for individual humans, we tend to be genuinely interested in truth rather than only in power. Even institutions like the Spanish Inquisition have had conscientious truth-seeking members like Alonso de Salazar Frías, who, instead of sending innocent people to their deaths, risked his life to remind us that witches are just intersubjective fictions. Most people don’t view themselves as one-dimensional creatures obsessed solely with power. Why, then, hold such a view about everyone else?

    Refusing to reduce all human interactions to a zero-sum power struggle is crucial not just for gaining a fuller, more nuanced understanding of the past but also for having a more hopeful and constructive attitude about our future. If power were the only reality, then the only way to resolve conflicts would be through violence. Both populists and Marxists believe that people’s views are determined by their privileges, and that to change people’s views it is necessary to first take away their privileges—which usually requires force. However, since humans are interested in truth, there is a chance to resolve at least some conflicts peacefully, by talking to one another, acknowledging mistakes, embracing new ideas, and revising the stories we believe. That is the basic assumption of democratic networks and of scientific institutions. It has also been the basic motivation behind writing this book.

    EXTINCTION OF THE SMARTEST

    Let’s return now to the question I posed at the beginning of this book: If we are so wise, why are we so self-destructive? We are at one and the same time both the smartest and the stupidest animals on earth. We are so smart that we can produce nuclear missiles and superintelligent algorithms. And we are so stupid that we go ahead producing these things even though we’re not sure we can control them and failing to do so could destroy us. Why do we do it? Does something in our nature compel us to go down the path of self-destruction?

    This book has argued that the fault isn’t with our nature but with our information networks. Due to the privileging of order over truth, human information networks have often produced a lot of power but little wisdom. For example, Nazi Germany created a highly efficient military machine and placed it at the service of an insane mythology. The result was misery on an enormous scale, the death of tens of millions of people, and eventually the destruction of Nazi Germany, too.

    Of course, power is not in itself bad. When used wisely, it can be an instrument of benevolence. Modern civilization, for example, has acquired the power to prevent famines, contain epidemics, and mitigate natural disasters such as hurricanes and earthquakes. In general, the acquisition of power allows a network to deal more effectively with threats coming from outside, but simultaneously increases the dangers that the network poses to itself. It is particularly noteworthy that as a network becomes more powerful, imaginary terrors that exist only in the stories the network itself invents become potentially more dangerous than natural disasters. A modern state faced with drought or excessive rains can usually prevent this natural disaster from causing mass starvation among its citizens. But a modern state gripped by a man-made fantasy is capable of instigating man-made famines on an enormous scale, as happened in the U.S.S.R. in the early 1930s.

    Accordingly, as a network becomes more powerful, its self-correcting mechanisms become more vital. If a Stone Age tribe or a Bronze Age city-state was incapable of identifying and correcting its own mistakes, the potential damage was limited. At most, one city was destroyed, and the survivors tried again elsewhere. Even if the ruler of an Iron Age empire, such as Tiberius or Nero, was gripped by paranoia or psychosis, the consequences were seldom catastrophic. The Roman Empire endured for centuries despite its fair share of mad emperors, and its eventual collapse did not bring about the end of human civilization. But if a Silicon Age superpower has weak or nonexistent self-correcting mechanisms, it could very well endanger the survival of our species, and countless other life-forms, too. In the era of AI, the whole of humankind finds itself in an analogous situation to Tiberius in his Capri villa. We command immense power and enjoy rare luxuries, but we are easily manipulated by our own creations, and by the time we wake up to the danger, it might be too late.

    Unfortunately, despite the importance of self-correcting mechanisms for the long-term welfare of humanity, politicians might be tempted to weaken them. As we have seen throughout the book, though neutralizing self-correcting mechanisms has many downsides, it can nevertheless be a winning political strategy. It could deliver immense power into the hands of a twenty-first-century Stalin, and it would be foolhardy to assume that an AI-enhanced totalitarian regime would necessarily self-destruct before it could wreak havoc on human civilization. Just as the law of the jungle is a myth, so also is the idea that the arc of history bends toward justice. History is a radically open arc, one that can bend in many directions and reach very different destinations. Even if Homo sapiens destroys itself, the universe will keep going about its business as usual. It took four billion years for terrestrial evolution to produce a civilization of highly intelligent apes. If we are gone, and it takes evolution another hundred million years to produce a civilization of highly intelligent rats, it will. The universe is patient.

    There is, though, an even worse scenario. As far as we know today, apes, rats, and the other organic animals of planet Earth may be the only conscious entities in the entire universe. We have now created a nonconscious but very powerful alien intelligence. If we mishandle it, AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness. It is our responsibility to prevent this.

    The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. Doing so is not a matter of inventing another miracle technology or landing upon some brilliant idea that has somehow escaped all previous generations. Rather, to create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. That is perhaps the most important takeaway this book has to offer.

    This wisdom is much older than human history. It is elemental, the foundation of organic life. The first organisms weren’t created by some infallible genius or god. They emerged through an intricate process of trial and error. Over four billion years, ever more complex mechanisms of mutation and self-correction led to the evolution of trees, dinosaurs, jungles, and eventually humans. Now we have summoned an alien inorganic intelligence that could escape our control and put in danger not just our own species but countless other life-forms. The decisions we all make in the coming years will determine whether summoning this alien intelligence proves to be a terminal error or the beginning of a hopeful new chapter in the evolution of life.