Author: 溯鹰

An earthquake is an abrupt release of energy stored in the crust. When one strikes, the energy radiated outward as seismic waves poses a deadly threat to human lives and property. Beyond its enormous destructive power, its unpredictability raises our fear by another order of magnitude. Unlike other geological hazards, an earthquake has no directly observable trigger. To avoid volcanic hazards you can stay away from volcanoes; to avoid debris flows you can leave steep mountain country; to avoid tsunamis you simply keep away from the coast. But earthquakes? Leafing through the records humanity has kept since antiquity, cities, coasts, mountains, plains, every place people might live seems to bear the marks of this underground demon. Earthquakes are like dice in Satan's hand: random disasters that might land anywhere.

Faced with such lethal destructiveness and uncertainty, people's fear of earthquakes has hardened into acute vigilance and sensitivity, and phenomena that in fact have little to do with earthquakes draw public attention precisely because of that fear. But is it really true that where earthquakes strike follows no pattern at all? That claim is just as extreme. We cannot stop Satan from throwing his dice, nor can we know where they will land or what number they will show, but at the very least, thanks to the unremitting efforts of geologists, we know the dice must be thrown onto a board. And where that board lies is, in the light of plate tectonics, the science of the Earth's patterns of motion and their driving mechanisms, a settled question.

The board at plate boundaries

Plate tectonics is a thoroughly modern Earth-science theory, younger than quantum mechanics by several decades. Although its roots reach back to the continental-drift hypothesis of the early twentieth century, its core theoretical framework was completed only in the middle of the last century, with the recognition of spreading marine magnetic anomaly stripes and the discovery of mid-ocean ridges and hotspots. The framework rests on three established facts and two basic assumptions. The three facts, that the asthenosphere exists, that planet-scale plates exist, and that the lithosphere can undergo large-scale horizontal motion, provide the basic picture; the two assumptions, that the Earth's total surface area is constant and that force is transmitted rigidly through plates, supply the premise for everything plate tectonics explains.

To picture the scene built on the three facts: heated by the furnace of the Earth's core, the ductile asthenosphere churns in convection cells, and the plates at the very top float on this "sea of asthenosphere" like tangram pieces, carried along at the Earth's surface by the convection beneath them.

With the picture drawn, we need the two assumptions to set it in motion. Although the plates ride on convection cells, the Earth's surface is completely covered by plates, with no unfilled space into which they could move freely. Under the first assumption, constant total surface area, this is like standing in a packed bus whose interior volume is fixed: if you want to shift position when no extra room can open up, your movement must act on the people you touch, pushing the person in front, sliding past the person beside you, pulling away from the person behind. Plates behave the same way, except that their squeezing, sliding, and stretching happen at planetary scale. Large-scale relative motion between bodies in direct contact inevitably releases and transfers huge amounts of energy. Then, by the second assumption of rigid force transmission, because plates are rigid bodies their vast interiors feel relatively little force; the energy release and mechanical effects of relative motion are concentrated along the edges where plates confront one another.

So the most violent mechanical action is concentrated at plate margins: compressional margins are rammed into towering mountain belts, subducting margins plunge deep underground to form abyssal trenches, transform margins are sheared into long, straight strike-slip fault zones, and extensional margins rupture into deep rifts or mid-ocean ridges. It is from these geomorphological signatures that the plate boundaries were traced out, piece by piece, into the global plate map shown below.
[Global plate distribution as interpreted by plate tectonics. Image source: learner.org]

Back to today's topic. We know that an earthquake is a release of crustal energy, and plate tectonics tells us that the energy exchanged between plates is spent at the margins of these rigid plates. In other words, doesn't plate tectonics confine the places where earthquakes can occur to the plate boundaries? So let us project the coordinates of every recorded earthquake onto the Earth's surface. Once the sample is large enough, the answer emerges: the regions of high earthquake frequency form belts that coincide almost perfectly with the plate margins delineated from landforms.
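The projection exercise described above is easy to reproduce for yourself. The sketch below is a minimal example, assuming a hypothetical CSV export of an earthquake catalog with longitude, latitude, and magnitude columns (the file name and column names are placeholders, not from the original article); plotting enough epicenters makes the belt-like pattern along plate boundaries appear on its own.

```python
# Minimal sketch: project recorded earthquake epicenters onto a lon/lat map.
# Assumes a hypothetical catalog file "catalog.csv" with columns
# longitude, latitude, magnitude (e.g., an export from a public catalog).
import csv

import matplotlib.pyplot as plt

lons, lats, mags = [], [], []
with open("catalog.csv", newline="") as f:
    for row in csv.DictReader(f):
        lons.append(float(row["longitude"]))
        lats.append(float(row["latitude"]))
        mags.append(float(row["magnitude"]))

fig, ax = plt.subplots(figsize=(10, 5))
# Scale marker size with magnitude so large events stand out.
ax.scatter(lons, lats, s=[2 ** m / 8 for m in mags], alpha=0.3, lw=0)
ax.set_xlim(-180, 180)
ax.set_ylim(-90, 90)
ax.set_xlabel("Longitude (deg)")
ax.set_ylabel("Latitude (deg)")
ax.set_title("Epicenters cluster into belts along plate boundaries")
plt.show()
```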
[Coordinates of past earthquakes projected onto the Earth's surface. Image source: alieninterview.org]

The plate margins thus acquired the name by which they are far better known: seismic belts. Whether you look at the "empirical" statistics or follow the "logical" deduction from structural geology, earthquakes occur essentially only along the contact zones between plates; within the vast plate interiors, seismicity is negligible. These planet-scale black lines are the board on which Satan throws his dice.

Plates and orogenic belts in China

You may already have picked up the close relationship between plate margins and earthquakes from your general reading, but a question probably remains. Take the region we care most about, China. On the projection map above, apart from the Tibetan Plateau, the contact zone between the Eurasian and Indian plates, China does lie in the interior of the Eurasian plate. Why, then, are so many dots still projected onto the "plate interior", and why do so many earthquakes still occur there?

In fact, the classic plate map describes only the planet-scale plates. The Eurasian, Pacific, and Indian plates and the rest are the Earth's largest, first-order plates, and each is assembled from smaller, next-order blocks. Let us now look at the sub-plates within China: at China's scale, the distribution of earthquakes actually follows the boundaries of these sub-plates.
[Distribution of China's three major sub-plates. Base map from Baidu Maps; boundaries drawn by the author.]

The basic tectonic framework of China is organized around three main sub-plates (also called cratons; since global plates are no longer under discussion, they are simply called plates below): the North China plate, the South China plate, and the Tarim plate. Their distribution is shown in the figure. Roughly speaking, North China sits between the other two, and the three are arranged in a "7" shape. The North China plate is separated from the South China plate on its southern side by the Qinling-Dabie orogenic belt, and adjoins the Tarim plate on its northwestern side along the Altyn Tagh orogenic belt. These two orogens are the products of collision and compression between the plates they separate.

Three plates obviously cannot cover all of China. Filling the rest of the map, besides the orogens produced by collisions among the three plates themselves, are the small blocks and island arcs that were accreted onto their margins over the long span of deep time. For example, the more northerly parts of the country (northern Xinjiang, Inner Mongolia, and part of the Northeast) belong to the famous Central Asian Orogenic Belt, which is built from a jumble of small blocks and island arcs squeezed between the Siberian plate and the Tarim plate. This broad orogen is bounded on the west by the Tian Shan against the Tarim plate, runs across Mongolia to shape the vast Mongolian Plateau, and ends in the east at the Yinshan and Greater Khingan ranges, where it adjoins the North China plate.

The Tibetan Plateau as a whole, as noted above, stems directly from the collision of the Indian plate with Eurasia. The plateau itself is a collage of blocks pushed together; on the north it faces the Tarim plate across the Qilian Mountains, and on the east it crosses the Longmen and Hengduan ranges into the upper Yangtze region of the South China plate.

Taiwan is more direct still: it is simply part of the currently very active western Pacific island arcs, running parallel to the Japanese islands.
[Distribution of orogenic belts in China]

Having sketched where China's plates and orogens lie, let us replay the basic logic of plate tectonics within China. "When stable rigid plates collide, large-scale compression occurs along the edges in most direct contact, building lofty mountain belts parallel to the plate boundary and accompanied by enormous releases of energy such as earthquakes..." Stop! Now look at the frequency map of earthquakes inside China. Does it not once again coincide with the country's great mountain ranges, tracing out the faint outlines of the sub-plates?
[Distribution of seismic belts in China]

The answer is self-evident. Apart from a few long-inactive orogens (the Qinling-Dabie belt, the eastern part of the Central Asian Orogenic Belt), the other orogenic belts, mechanically still unsettled, have spread out their own game boards under the push of the Indian and Pacific plates, and there is nothing strange in that. But a closer look at China's earthquake distribution shows that the question raised at the end of the first section has not been solved by shrinking the scale; like a fractal, it reappears at the next scale down. The effect is most obvious inside the North China plate: why does the interior of a sub-plate still carry so many earthquake epicenters?

Shall we keep answering that "below the second-order plates there are third-order plates"? No, that no longer works. A large body of research shows that once its constituent blocks were welded into a unified North China plate in the Archean, billions of years ago, the Phanerozoic North China plate has been a single whole that cannot be subdivided further.

Still, we might try to excuse it: the forces between plates are concentrated mainly at their edges, so an occasional small adjustment of the stress field in the interior is forgivable... Wait. Forgivable?

In 1668, the Tancheng earthquake: magnitude 8.5, roughly 50,000 dead, mountains split, springs bursting from the ground...
In 1975, the Haicheng earthquake: magnitude 7.3, an official casualty count of 18,308...
In 1976, the Tangshan earthquake: magnitude 7.8, officially 242,000 dead and 164,000 severely injured...

In the face of such numbers, must the modest hope that plate tectonicists spent half a century building, the hope of bounding the regions where earthquakes cluster, simply collapse like the buildings at an epicenter while the demon of earthquakes dances across the North China plain? Before nature's ubiquitous randomness, are we really left to wave everything away with the "unknowable" of chaos? Can the ever-shifting workings of the Earth not accommodate even one definite answer?

No. The world that geologists study is not the quantum world.

North China awakes!

Put it this way: nothing in the river of time stays the same. Geology's account of seas turning into mulberry fields has made the idea of change through Earth history second nature. So turn the question around: setting aside the action of neighboring plates, can a stable plate gradually become active on its own and turn into a region of frequent tectonic activity? The answer is yes.

The North China plate, which took shape in the Precambrian and had quietly weathered billions of years, is now shedding its old solidity, opening its long-closed eyes, stirring awake, and breeding frequent geological activity in its interior. This process is called reactivation, and research shows that reactivation of a plate is usually linked to its thinning. Petrological and geochemical evidence, seismic-wave studies, and geothermal-gradient data all confirm that the North China craton has indeed been thinned. Whether for the peculiarity of its mechanism within plate-tectonic theory or for applications from earthquake forecasting to mineral exploration, the thinning of the North China plate was bound to draw the attention of the geoscience community. In 2007 the National Natural Science Foundation of China launched an eight-year major research program on the North China craton, strong evidence that research related to its thinning has become a hot topic in the Earth sciences.

According to this research, although the intense seismicity inside the North China plate does not come directly from the push of the neighboring Pacific plate, the Pacific plate is hardly blameless. The compressive stress field created by Pacific subduction acts directly on the Japanese island arcs, but toward the continent, as the subducting slab descends, the surface farther back commonly comes under extensional stress. Everyday experience with a lump of plasticine or dough tells us that stretching it inevitably thins it. Plates are no different: under an extensional stress field the lithosphere is likewise thinned. The difference is that a rigid plate cannot be stretched smoothly like dough. Accompanied by swarms of normal faults and the related magmatism, the sudden release of extensional stress at the surface becomes the prime mover of one earthquake after another. That is the origin of the Taihang-Yanshan seismic belt.

Across the vast North China plain there is also a great north-south strike-slip fault, the well-known Tan-Lu fault zone. It runs from Liaoning, Jilin, and Heilongjiang in the north down to eastern Hubei, touching Hebei, Shandong, and Jiangsu, and it "resonates" with the Yanshan seismic belt around Beijing and Tianjin, like a sharp graver scoring a million-year scar across the North China plain. The Tan-Lu fault zone originally formed in the Mesozoic from the collision of the North China and South China plates; later, under Pacific subduction and the large strike-slip component it brings, this ancient fault zone was reactivated and became a blade of the Earth thrust into the plate interior.

Of course, the Taihang normal-fault system and the Tan-Lu fault zone are only the two most conspicuous seismic belts inside the North China plate. During Pacific subduction, the slab sinking into the mantle is inevitably heated and dehydrates; the buoyant rise and volume increase of the newly formed material both contribute to the thinning of the North China craton, and for people at the surface they add still more seismic hazards, most directly the tens of thousands of buried faults and fractures scattered between the two great seismic belts. I recall some geologist saying, "The structural style of the North China plate is like smashing a dish on the floor," he put it vividly, "and then giving it a kick."

Protection against earthquake hazards must be built on acknowledging the geological facts. We cannot change the fact that the geological setting inside the North China plate is exceptionally complex, but that does not mean people must live on a hair trigger, reading an omen into every slightly unusual natural phenomenon. Explanations of geological phenomena should rest mainly on the established theories of modern geoscience. Besides, there is still plenty of room to improve earthquake-resistant engineering and to strengthen the institutions of earthquake preparedness. Relying steadily on progress in the professional science and working concretely to improve seismic engineering is the most confident, and the only, answer that humanity, a species that lives by reason and intelligence, can give in the face of geological disasters that will never cease.

References
[1] 雪歌, 2011. 地震云,事后之明?. 科学松鼠会/果壳网.
[2] 溯鹰, 2012. 热点漫谈——接入"地球的内网". 科学松鼠会.
[3] Wikipedia (Chinese): 四川构造期, 华北构造期, 喜马拉雅构造期, 郯庐断裂带, 郯城大地震, 海城大地震, 唐山大地震.
[4] 嵇少丞 等, 2008. 华北克拉通破坏与岩石圈减薄. 地质学报, 82(2): 174-187.
[5] 刘本培 等, 2006. 地史学教程. 北京: 地质出版社.
[6] 吴泰然 等, 2003. 普通地质学. 北京: 北京大学出版社.
Exploring the Discipline of Geography as a Science

Many secondary education institutions, particularly in the United States, include very little study of geography, opting instead to separate out and focus on individual cultural and physical sciences, such as history and anthropology, that span both the physical and the cultural realms.

History of Geography

The trend to ignore geography in classrooms does seem to be reversing, though. Universities are starting to recognize the value of geographic study and training and thus provide more classes and degree opportunities. However, there is still a long way to go before geography is widely recognized as a true, individual, and progressive science. This article will briefly cover parts of the history of geography, important discoveries, uses of the discipline today, and the methods, models, and technologies that geography uses, providing evidence that geography qualifies as a valuable science. The discipline of geography is among the most ancient of all sciences, possibly even the oldest, because it seeks to answer some of man's most primitive questions. Geography was recognized in antiquity as a scholarly subject and can be traced back to Eratosthenes, a Greek scholar who lived around 276-196 B.C.E. and who is often called "the father of geography." Eratosthenes was able to estimate the circumference of the earth with relative accuracy, using the angles of shadows, the distance between two cities, and a mathematical formula (see the worked example below).

Claudius Ptolemaeus

Another important ancient geographer was Claudius Ptolemaeus (Ptolemy), a Roman scholar who lived from about 90-170 C.E. Ptolemy is best known for his writings: the Almagest (about astronomy and geometry), the Tetrabiblos (about astrology), and the Geography, which significantly advanced geographic understanding at that time. The Geography used the first recorded grid coordinates, discussed the important notion that a three-dimensional shape such as the earth cannot be perfectly represented on a two-dimensional plane, and provided a large array of maps and pictures. Ptolemy's work was not as accurate as today's calculations, mostly because of inaccurate distances from place to place, but it influenced many cartographers and geographers after it was rediscovered during the Renaissance.

Alexander von Humboldt

Alexander von Humboldt, a German traveler, scientist, and geographer, is commonly known as the "father of modern geography." Von Humboldt contributed discoveries such as magnetic declination, permafrost, and continentality, and created hundreds of detailed maps from his extensive traveling, including his own invention, isotherm maps (maps with
lines representing points of equal temperature). His greatest work, Kosmos, is a compilation of his knowledge about the earth and its relationship with humans and the universe, and it remains one of the most important geographical works in the history of the discipline. Without Eratosthenes, Ptolemy, von Humboldt, and many other important geographers, essential discoveries, world exploration and expansion, and advancing technologies would not have taken place. Through their use of mathematics, observation, exploration, and research, mankind has been able to progress and see the world in ways unimaginable to early man.
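As a worked illustration of Eratosthenes' method mentioned above: if the Sun is overhead at one city while casting a shadow at a known angle in a second city due north of it, that angle is the fraction of a full circle separating the two cities, so the Earth's circumference is the inter-city distance scaled by 360 degrees over that angle. The numbers below are the traditionally quoted ones (about 7.2 degrees, and roughly 800 km between Syene and Alexandria), used here only to show the arithmetic.

```python
# Eratosthenes-style estimate: circumference = distance * (360 / shadow angle).
shadow_angle_deg = 7.2   # angle of the noon shadow in the northern city
distance_km = 800.0      # approximate surface distance between the two cities

circumference_km = distance_km * 360.0 / shadow_angle_deg
print(f"Estimated circumference: {circumference_km:,.0f} km")  # ~40,000 km
```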
Science in Geography

Modern geography, like many of the great early geographers, adheres to the scientific method and pursues scientific principles and logic. Many important geographic discoveries and inventions were brought forth through a complex understanding of the earth, its shape, size, and rotation, and the mathematical equations that use that understanding. Discoveries like the compass, the north and south poles, the earth's magnetism, latitude and longitude, rotation and revolution, projections and maps, globes, and, more recently, geographic information systems (GIS), global positioning systems (GPS), and remote sensing all come from rigorous study and a complex understanding of the earth, its resources, and mathematics. Today we use and teach geography much as we have for centuries. We often use simple maps, compasses, and globes, and learn about the physical and cultural geography of different regions of the world. But today we also use and teach geography in very different ways. We are a world that is increasingly digital and computerized, and geography is not unlike other sciences that have moved into that realm to advance our understanding of the world. We not only possess digital maps and compasses; GIS and remote sensing allow for understanding of the earth, its atmosphere, its regions, its different elements and processes, and how it all relates to humans.
Jerome E. Dobson, president of the American Geographical Society, writes that these modern geographic tools "constitute a macroscope that allows scientists, practitioners, and the public alike to view the earth as never before." Dobson argues that geographic tools allow for scientific advancement, and that geography therefore deserves a place among the fundamental sciences; more importantly, it deserves a larger role in education. Recognizing geography as a valuable science and studying and utilizing these progressive geographical tools will allow for many more scientific discoveries in our world.
By 李世春

Ge Ge (戈革) was absolutely singular in China, one of a kind. Scholars who are genuinely of use to their nation are almost always one of a kind: Li Bai, Du Fu, Cao Xueqin, Pu Songling, and so on. Ours is an age in which scholars are mass-produced, "industrialized and mold-cast"; without their name tags you could hardly tell who is who. Scholars of that sort will sooner or later be swept into the dustbin of history. Ge Ge was utterly at odds with the "mass-produced" scholar.

Ge Ge and I were once in the same teaching and research section (physics), though we did not work together much. I first met him in February 1982, when I had just arrived at the physics section of the East China Petroleum Institute in Dongying. In 1982 we worked a six-day week, and Saturday afternoon was political study. On my first Saturday after reporting for duty, the afternoon was, as usual, political study. I arrived at the physics meeting room early, with the other newly retained teaching assistants (a dozen or so, one of whom was Ge Ge's prospective graduate student, due to start in 1983), and took a seat in a corner. After a while Ge Ge arrived, and everyone stood to greet him. To my surprise, guided by his prospective student he walked straight over to me and asked, "You are the one from Jilin University?" Ge Ge had taken his undergraduate degree in physics at Peking University and his graduate degree at the physics research institute of Tsinghua University, as a student of Yu Ruihuang (余瑞璜). On learning that I had graduated from the physics department of Jilin University, he made an exception and sat through a session of political study. Only later did I learn that Ge Ge, too, had spent more than a decade labeled a "rightist". He had long suffered from a serious stomach ailment; during his "reform through labor" he had to pull a cart every day, and because he was so very "right" they put him between the shafts. After a dozen years of that, hauling the cart had actually cured his stomach trouble, root and all.

When Ge Ge asked after Yu Ruihuang, I did not yet know that he had been a "rightist" himself, so I spoke without reserve. I said that Yu rarely lectured and that I had heard him speak only once (see my earlier post); he gave the talk still wearing his padded coat and padded hat. I added, with particular emphasis, that Yu Ruihuang had been the last "rightist" at Jilin University to be rehabilitated. The moment I finished that sentence, Ge Ge picked up the thread and began, without the slightest politeness, to criticize the people who had persecuted him back then, people who were sitting in that very room. His outburst left me acutely embarrassed; clearly I had lit the fuse. Fortunately it was my very first political study session, and everyone understood that I knew nothing of the history.

Ge Ge was the most accomplished professor in the physics section and had rendered great service in training the physics teachers' class of 1977; given his particular background, the section did not require him to attend its various activities, so unless you went to his home it was hard to see him at all. According to colleagues who graduated from that class of 1977, he was not only the best professor but also the fiercest. Once, lecturing on capacitance and capacitors, he finished everything he thought worth teaching and dismissed the class thirty minutes early. When the academic affairs office heard of it, they asked the section to tell him that ending class early was a "teaching incident" subject to disciplinary action. Ge Ge retorted that they understood nothing. What counts as a teaching incident? "I taught everything that needed teaching and left the rest for the students to read on their own. Is that not pedagogy?" He added some harsher words besides. From then on no one ever again kept track of when Ge Ge ended his classes, and the quality and seriousness of his teaching never slipped.

In the summer of 1983 I attended a two-week workshop on solid-state physics at the Chengde summer resort, while Ge Ge was there for a workshop on the history of physics; we stayed close by but had only brief conversations. Later he transferred from Dongying to the petroleum compound on Xueyuan Road in Beijing, living in a single room in Building 56. When business trips put me up in the compound, I would sometimes drop in on him.

In short, Ge Ge was Yu Ruihuang's student, an outstanding graduate of Tsinghua (and Peking) University, and the most accomplished professor at the Petroleum University, and yet across the university's three campuses few people even knew that such a man existed. We call every day for great masters, but it is Lord Ye's love of dragons: if a real master actually appears, everyone is afraid of him. Ge Ge was exactly such a scholar, singular and irreplaceable, and one who made officials deeply uneasy. Behind every section chief, bureau chief, and minister stretches a long queue of would-be successors; we need not worry about them. Behind Ge Ge there is no queue. Once Ge Ge is gone, he is truly gone.

Appendix: a short biography of Ge Ge (excerpted from "A Modern Scholar, a Literatus of the Old Style: Remembering the Historian of Science and Translator Mr. Ge Ge", by Tian Song). On 5 June 2001, Ge Ge received the Order of the Dannebrog from Queen Margrethe II of Denmark. It was the due reward for twenty years spent single-handedly translating the eleven volumes of the Collected Works of Niels Bohr, and Denmark's sincere thanks to a foreigner who had spread the thought of one of its finest minds. Before him, the translator Ye Junjian had received the same honor for translating Hans Christian Andersen. Andersen and Bohr are two cultural giants that the small nation of Denmark has given humanity; Ye Junjian and Ge Ge are two great translators who brought humanity's best ideas to China. Worldwide, of course, far more people know Andersen than know Bohr, and in China Ye Junjian is far better known than Ge Ge; that is a consequence of the cultural fields in which each worked. If anything, Ge Ge's task was the harder, because far fewer people are capable of translating Bohr than of translating Andersen. But the contributions of Andersen and Bohr, of Ye Junjian and Ge Ge, to humanity and to China are not to be ranked casually against one another.

In a letter, Ge Ge wrote: "To exaggerate and fabricate, without analysis, the 'greatness' of ancient Chinese culture and its supposed 'influence' on Western culture can in no way be called 'patriotism'. Such absurd practice only earns the contempt and ridicule of honest scholars and other discerning people abroad. It amounts to slapping one's own face; it is a kind of 'harm-the-country-ism' and has nothing to do with loving one's country. Only when, through unremitting hard work, you produce work more solid than theirs will others acknowledge and respect you; only that truly wins honor for the country, and only that is true patriotism." By his own work, Ge Ge proved himself a patriot of the noblest kind.
Science 27 May 2011:
DOI: 10.1126/science.1207050
Explaining Human Behavioral Diversity
Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.
People have been captivated and puzzled by human diversity since ancient times. In today's globalized world, many of the key challenges facing humanity, such as reversing climate change, coordinating economic policies, and averting war, entail unprecedented cooperation between cultural groups on a global scale. Success depends on bridging cultural divides over social norms, habits of thinking, deeply held beliefs, and values deemed sacred. If we ignore, underestimate, or misunderstand behavioral differences, we do so at everyone's peril.
When it comes to understanding these differences, getting the science right is more important than ever. Ironically, one reason that the scientific study of human thought and behavior is so daunting, fascinating, and often controversial is precisely because, more than any other species, so much of human behavior is subject to considerable population variability. To better understand both this variability and humanity's shared characteristics, in recent years researchers in the social, behavioral, cognitive, and biological sciences have been using a variety of methods (including ethnographic and historical studies, experiments, and surveys) to deepen and extend our knowledge of cultural differences. These research programs are producing quantifiable, falsifiable, and replicable results. On page
1100 of this issue, for example, Gelfand et al. (1) report on an ambitious 33-nation study that compares the degree to which societies regulate social behavior and sanction deviant behavior. It highlights differences between “tight” cultures with strong norms and high sanctioning, and “loose” cultures with weak norms and low sanctioning.
Gelfand et al. surveyed 6823 people in the 33 nations, asking them to rate the appropriateness of 12 behaviors (such as eating or crying) in 15 situations (such as being in a bank or at a party). Then, they compared the responses to an array of ecological and historical factors. Overall, they found that societies exposed to contemporary or historical threats, such as territorial conflict, resource scarcity, or exposure to high levels of pathogens, more strictly regulate social behavior and punish deviance. These societies are also more likely to have evolved institutions that strictly regulate social norms. At the psychological level, individuals in tightly regulated societies report higher levels of self-monitoring, more intolerant attitudes toward outsiders, and paying stricter attention to time. In this multilevel analysis, ecological, historical, institutional, and psychological variables comprise a loosely integrated system that defines a culture.
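A toy version of this kind of analysis can be sketched in a few lines. The scoring below is only illustrative and is not Gelfand et al.'s actual procedure: it treats a nation as "tighter" when its respondents rate behavior-situation pairs as less appropriate, and then correlates that score with a made-up composite index of historical and ecological threat; the data frames, values, and column names are hypothetical.

```python
# Illustrative sketch (not the authors' method): derive a per-nation
# "constraint" score from appropriateness ratings and correlate it with
# a hypothetical threat index.
import numpy as np
import pandas as pd

# ratings: one row per respondent x behavior x situation, appropriateness on 1-6.
ratings = pd.DataFrame({
    "nation": ["A", "A", "B", "B", "C", "C"],
    "appropriateness": [2, 1, 5, 4, 3, 2],
})
# threat: one hypothetical composite index per nation (resource scarcity,
# territorial conflict, pathogen load, ...); higher = more threatened.
threat = pd.Series({"A": 0.9, "B": 0.2, "C": 0.6}, name="threat")

# Lower mean appropriateness -> behavior more constrained -> "tighter".
tightness = 7 - ratings.groupby("nation")["appropriateness"].mean()

r = np.corrcoef(tightness.loc[threat.index], threat)[0, 1]
print(f"correlation(tightness, threat) = {r:+.2f}")
```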
These findings complement a growing literature that reveals the power of the comparative approach in explaining critically important features of human behavior. For example, research suggests that the substantial variation in religious involvement among nations can be explained, in large part, by perceived levels of security. Religion thrives when existential threats to human security, such as war or natural disaster, are rampant, and declines considerably in societies with high levels of economic development, low income inequality and infant mortality, and greater access to social safety nets (). Another recent investigation () suggested that past agricultural practices—specifically the adoption of the plow or the hoe by farmers—can have long-term effects on contemporary gender-related social norms and behaviors. It found that, all else being equal, societies that adopted the plow at an earlier historical period tended to have greater contemporary gender inequality (such as lower levels of women's participation in the labor market and lower percentages of women in government). In contrast, societies that adopted the hoe tend to have greater gender equality today. Gelfand et al.'s findings are consistent with other research suggesting that population variability seeps deep into the workings of human minds, affecting, for example, seemingly basic processes such as perception, reasoning, self-concept, distinct motivation, and cooperative strategies in economic games ().
As more investigations enrich the cross-cultural database, two complex but critical questions open up for investigation. The first is: What are the causal pathways between variables (such as ecological, historical, and psychological variables), and how do they interact? Typically, for instance, researchers give causal precedence to chronologically earlier events and ecological factors, such as resource scarcity or pathogen levels, because they predate institutions, social practices, and individuals. In most cases, however, we know relatively little about the direction of causality. Do institutional structures socialize individuals to have certain values and preferences? Or do values and preferences lead to the creation of certain types of institutions? Or both? Knowledge of these pathways could shed light on a related question: How do sociocultural systems stabilize or change over time ()?
Differences. Researchers are exploring the origins of the vast behavioral diversity across human populations.
The second question, which researchers are just beginning to tackle, involves the precise origins of the underlying population variation in thought and behavior, such as the differences in conformity, prosocial emotions, and intolerance of outsiders measured by Gelfand et al. Current evolutionary models suggest at least three distinct but compatible possibilities. The first posits that the human species is a cultural species, whose behavioral repertoire depends not only on genetic transmission but also on a sophisticated cultural inheritance system (, ). This system causes rapid, cumulative, and divergent cultural evolution, the result of which is persistent intergroup variation in behavior, even when populations live in similar environments (, ). The second model holds that many population differences are likely the result of different environments and represent noncultural phenotypic plasticity (). Such a process could be reflected in the relationship between pathogen levels and stricter social norms reported by Gelfand et al. It remains to be seen, however, whether this plasticity is developmental, triggering locally adapted behavioral patterns early in an individual's life that then persist (an ontogenetic trajectory), or whether it is “facultative,” triggering locally adapted behaviors that are more flexible and shift over an individual's lifetime in response to variation in ecological conditions. A third possibility is that some population variability originates from a process known as gene-culture coevolution (). Although challenging to demonstrate conclusively, a growing research field is showing that human cultural practices can directly alter parts of the human genome: the domestication of large milk-producing mammals, for instance, appears to have led to changes in gene frequencies coding for adult lactose absorption. A similar coevolutionary process may lurk behind some psychological differences, and this is an intriguing possible subject for future research. The relative contribution of these and other possible mechanisms, such as epigenetic (nongenetic) inheritance, to behavioral diversity is being actively debated ().
Progress on these questions will be easier if researchers overcome two immediate obstacles facing the behavioral sciences. One is the extremely narrow cultural database that characterizes the experimental branches of psychology, economics, and the cognitive sciences, including cognitive neuroscience. Recent surveys indicate that the overwhelming majority of research participants are convenience samples selected from Western, educated, industrialized, rich, democratic (sometimes known as WEIRD) societies that often occupy one end of the broad spectrum of human behavior (). Second, traditional disciplinary approaches typically focus on one level of analysis, ignoring others. As Gelfand et al.'s efforts illustrate, broad sampling and multiple approaches and methods are needed to investigate these different levels and their interrelations. Diverse samples, and collaborative teams that cross disciplinary boundaries (), will open up new horizons in the behavioral sciences.
References and Notes
M. J. Gelfand et al., Science 332, 1100 (2011).
P. Norris, R. Inglehart, Sacred and Secular: Religion and Politics Worldwide (Cambridge Univ. Press, Cambridge, 2004).
A. Alesina et al., NBER Working Paper.
J. Henrich, S. J. Heine, A. Norenzayan, Behav. Brain Sci. 33, 61, discussion 83 (2010).
D. Cohen, Psychol. Bull. 127, 451 (2001).
P. J. Richerson, R. Boyd, Not by Genes Alone: How Culture Transformed Human Evolution (Univ. of Chicago Press, Chicago, 2005).
D. Sperber, Explaining Culture: A Naturalistic Approach (Blackwell, Cambridge, MA, 1996).
S. W. Gangestad et al., Psychol. Inq. 17, 75 (2006).
K. N. Laland et al., Nat. Rev. Genet. 11, 137 (2010).
G. R. Brown, T. E. Dickins, R. Sear, K. N. Laland, Phil. Trans. R. Soc. B 366, 313 (2011).
M. Bang, D. L. Medin, S. Atran, Proc. Natl. Acad. Sci. U.S.A. 104, 13868 (2007).
I acknowledge support by a Social Sciences and Humanities Research Council grant (410-).
Published online 1 June 2011 | Nature 474, 15 (2011)
Seismologists charged for giving apparent reassurances on Italian earthquake risks.
Nicola Nosengo
The perils of communicating scientific uncertainty when under the media spotlight are set to be probed in an Italian court later this year. The case, which was given the go-ahead by a judge last week, involves six Italian seismologists and one government official. They will be tried this autumn for the manslaughter of some of the 309 people who died in the earthquake that struck the city of L'Aquila on 6 April 2009. If convicted, they could face jail sentences of up to 12 years. The seven were on a committee tasked with assessing the risks of increased seismic activity in the area. At a press conference following a committee meeting a week before the earthquake, some members assured the public that they were in no danger. After the quake, many of the victims' relatives said that because of these reassurances they did not take precautionary measures, such as leaving their homes. L'Aquila's public prosecutor, Fabio Picuti, argued last week that although the committee members could not have predicted the earthquake, they had translated their scientific uncertainty into an overly optimistic message. The prosecution has focused on a statement made at the press conference by accused committee member Bernardo De Bernardinis, who was then deputy technical head of Italy's Civil Protection Agency. "The scientific community tells me there is no danger," he said at the time, "because there is an ongoing discharge of energy. The situation looks favourable." Many seismologists — including one of the accused, Enzo Boschi, president of the National Institute of Geophysics and Vulcanology in Rome — have since criticized the statement as scientifically unfounded. The statement does not appear in the minutes of the committee meeting itself, and the accused seismologists say they cannot be blamed for it. De Bernardinis's advocate insists that his client merely summarized what the scientists had told him. The prosecutor claims that because none of the other committee members immediately corrected De Bernardinis, they are all equally culpable.
Boschi says that he is "devastated" by the ruling. He notes that there are hundreds of seismic shocks every year in Italy: "If we were to alert the population every time, we would probably be indicted for unjustified alarm," he said, adding that poor building standards were the main cause of the tragedy. Vincenzo Vittorini, a physician in L'Aquila whose wife and daughter were killed in the earthquake and who is president of the local victims' association, hopes the trial will lead to a thorough investigation into what went wrong. "Nobody here wants to put science in the dock," he says. "All we wanted was clearer information on risks in order to make our choices."

Related material: a protest against the charges of manslaughter brought against the Italian seismologists.
By 周春银

Structure and Composition of the Earth's Interior

Many specialized books cover the basic structure and composition of the Earth; this article is only a popular introduction. It is concerned with the interior of the solid Earth, leaving the atmosphere, hydrosphere, and biosphere aside.

The Earth is about 4.6 billion years old (an age inferred from lunar samples), and continental crust may already have existed 4.0 billion years ago (inferred from zircons older than 4 billion years found on the Australian continent). The composition, structure, and accompanying dynamics of the Earth's interior have kept changing throughout geological history; in other words, the Earth behaves like a living body whose interior is not dead and silent but continually evolving.

Basic parameters of the Earth, such as its radius, equatorial circumference, volume, mass, density, and gravity, are described in many places and are not repeated here. Our focus is the composition and structure of the interior. As is well known, the interior is divided into the crust, the mantle, and the core, arranged in concentric shells.
Figure 1. Schematic cross-section of the Earth's interior (from Wikipedia).

The crust is the part of the solid Earth above the Mohorovičić discontinuity (Moho). The Moho, the boundary between crust and mantle, was discovered by the Yugoslav seismologist Mohorovičić and is named after him. Material properties above and below it differ sharply: seismic velocity and density jump across it, making it a discontinuity. Its average depth is about 33 km (the corresponding crustal thickness), but this varies greatly from region to region. Continental crust is much thicker than oceanic crust: oceanic crust is generally 5-10 km thick, whereas continental crust reaches several tens of kilometres, perhaps about 80 km beneath the Tibetan Plateau.

Continental and oceanic crust also differ in composition and structure. Continental crust is traditionally divided into an upper crust of sialic (Si-Al) composition and a lower crust of simatic (Si-Mg) composition. Oceanic crust lacks the sialic layer and has only the simatic one; from top to bottom it consists of sediments, a basalt layer, and sheeted dikes of diabase and gabbro.

The mantle is the part of the solid Earth between the Moho (~33 km) and the Gutenberg discontinuity (~2900 km). Because we cannot yet sample the deep mantle directly, current estimates of its composition are models built on high-temperature, high-pressure experimental petrology and geophysical observations. This article does not review those models one by one; it adopts the pyrolite model of A. E. Ringwood, which is widely used in the geoscience community.
Figure 2. Simplified diagram of the Earth's internal shells (after Hirose and Lay, 2008).

The internal structure of the mantle is rather complex; the mantle transition zone (MTZ) in particular still needs much more study. The mantle is usually divided, from top to bottom, into the upper mantle, the transition zone, and the lower mantle (some authors include the transition zone in the upper mantle; because of its special character it is treated here as a separate unit). Specifically, the upper mantle extends from the Moho down to the 410 km discontinuity; the transition zone lies between the 410 km and 660 km discontinuities; and the lower mantle lies below the 660 km discontinuity. The 410 km and 660 km discontinuities are of great significance in mantle research.

The composition of the upper mantle is now fairly well established: its main minerals are olivine, orthopyroxene (opx), clinopyroxene (cpx), and garnet, with minor ilmenite and chromite (Anderson, 1989). Upper-mantle rocks crop out over large areas at the surface, mainly as peridotites, including lherzolite, harzburgite, and dunite.

The 410 km discontinuity at the top of the transition zone is generally attributed to the transformation of olivine to its high-pressure polymorph wadsleyite. At the secondary discontinuity near 520 km within the transition zone, wadsleyite transforms to ringwoodite (some attribute this discontinuity instead to the appearance of Ca-perovskite). At the 660 km discontinuity at the base of the transition zone, ringwoodite breaks down into silicate perovskite plus magnesiowüstite (also called ferropericlase), a reaction that marks the top of the lower mantle.
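The depths quoted above can be collected into a simple lookup. The sketch below assigns a sampling depth to the shell it falls in, using the rounded boundary depths given in this article (average Moho at ~33 km, the 410 km and 660 km discontinuities, the Gutenberg discontinuity at ~2900 km, and the inner-core boundary at ~5100 km). Real boundary depths vary regionally, so the numbers are only indicative.

```python
# Rough classification of a depth (km) into the Earth's shells, using the
# rounded boundary depths quoted in the text; actual depths vary regionally.
BOUNDARIES_KM = [
    (33,   "crust (above the Moho)"),
    (410,  "upper mantle"),
    (660,  "mantle transition zone"),
    (2900, "lower mantle"),
    (5100, "outer core (liquid iron alloy)"),
    (6371, "inner core (solid iron)"),   # 6371 km = mean radius
]

def shell_at(depth_km: float) -> str:
    for boundary, name in BOUNDARIES_KM:
        if depth_km <= boundary:
            return name
    raise ValueError("depth exceeds the Earth's mean radius")

for d in (20, 300, 500, 1000, 4000, 6000):
    print(f"{d:>5} km -> {shell_at(d)}")
```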
Figure 3. Phase transitions and mineral composition of the upper mantle (after Akaogi, 2007; data from the results of Ringwood, Irifune, and others. Py, pyroxene; Mj, majorite garnet; α, olivine; β, wadsleyite; γ, ringwoodite; Ca-pv, CaSiO3 perovskite; Mg-pv, (Mg,Fe)SiO3 perovskite; Mw, magnesiowüstite.)

The above describes the high-pressure phase transitions of the olivine system. How, then, do pyroxene and garnet transform with depth in the mantle? Orthopyroxene first converts to high-pressure clinopyroxene in the upper mantle; as pressure (depth) increases further, pyroxene is gradually incorporated into the garnet structure within the transition zone, forming a Si-rich garnet known as majorite. Garnet has a wide stability field in the transition zone and remains stable down to its base and the top of the lower mantle. Ca-rich garnet gradually converts to Ca-perovskite in the middle of the transition zone, while Al- and Si-rich garnet converts to perovskite below the 660 km discontinuity.

The main mineral constituents of the mantle transition zone are therefore wadsleyite, ringwoodite, and garnet, with minor pyroxene and Ca-perovskite.
Figure 4. Phase transitions of pyrolite in the lower mantle (from Irifune & Tsuchiya, 2007; symbols as in Figure 3).

From the transition-zone phase relations it follows that the main minerals of the lower mantle are Ca-perovskite, (Mg,Fe)SiO3 perovskite, and (Mg,Fe)O magnesiowüstite. All three phases are very stable and undergo no further transformation down to the bottom of the lower mantle. With the discovery of post-perovskite in 2004, however, it was recognized that (Mg,Fe)SiO3 perovskite does transform within the D″ layer at the base of the lower mantle. The D″ layer is a distinct layer roughly 200 km thick lying just above the core-mantle boundary (CMB). Because lower-mantle minerals have such wide stability fields, the lower mantle had long been regarded as fairly homogeneous; but the discovery of post-perovskite, together with geophysical observations, shows that it is not as uniform as once imagined and is, at least locally, heterogeneous. The mineral composition of the mantle is summarized in the figure below:
Figure 5. Variation of pyrolite mineralogy with depth (from Ono, 2008).

The composition of the core is inferred mainly from geophysical observations and experiments; experimental work at core conditions remains limited by current techniques. A boundary at a depth of about 5100 km separates the outer core from the inner core. Because S waves vanish in the outer core, it is inferred to be liquid; and because its density is lower than that of pure iron, it probably contains some lighter elements, so the outer core is most likely a liquid iron alloy. The inner core is solid and consists mainly of metallic iron.
Figure 6. Experimental petrology results, geophysical observations, and the structure of the Earth's interior (from Bass and Parise, 2008; symbols as in Figure 3).

References:
D.L. Anderson, Theory of the Earth. Blackwell Scientific, Boston, 1989 (new edition 2007).
K. Hirose, T. Lay, Discovery of post-perovskite and new views on the core-mantle boundary region. Elements 4 (2008).
M. Akaogi, Phase transitions of minerals in the transition zone and upper part of the lower mantle. In: E. Ohtani (Ed.), Advances in High-Pressure Mineralogy, Geological Society of America, 2007, pp. 1-13.
D.J. Frost, The upper mantle and transition zone. Elements 4 (2008).
J.D. Bass, J.B. Parise, Deep Earth and recent developments in mineral physics. Elements 4 (2008).
T. Irifune, T. Tsuchiya, Mineralogy of the Earth: phase transitions and mineralogy of the lower mantle. In: Treatise on Geophysics, vol. 2, Mineral Physics, pp. 33-62.
S. Ono, Experimental constraints on the temperature profile in the lower mantle. Physics of the Earth and Planetary Interiors 170 (2008).
周春银, 金振民, 章军锋, 地幔转换带:地球深部研究的重要方向. 地学前缘, 90-113.
Science 4 March 2011:
DOI: 10.1126/science.331.
The debate over that question suggests that the discovery of dark matter—whenever it comes—will be a murky affair.
For decades, astronomers' observations have indicated that some elusive “dark matter” provides most of the gravity needed to keep the stars from flying out of the galaxies. In recent years, cosmologists' studies of the afterglow of the big bang, the cosmic microwave background, have indicated that dark matter makes up 80% of all matter in the universe. Now, many physicists expect that within 5 to 10 years they will finally discover particles of dark matter—that is, if they haven't already done so.
Data from three experiments all suggest that physicists have glimpsed dark matter particles much less massive than they had expected, or so argue Dan Hooper, a theorist at Fermi National Accelerator Laboratory in Batavia, Illinois, and his colleagues. Physicists working on other experiments say their results rule out such particles, but Hooper contends that a realistic look at the data and the uncertainties shows no fatal contradictions.
The case isn't conclusive, Hooper emphasizes. “I think it's fairly compelling,” he says, “but we all agree that we're going to need something else to convince us that what we're seeing is dark matter.” Still, Hooper's work has some physicists nervously tugging at their collars. “When I saw Dan's … analysis I thought, ‘Oh God, I better go back and take a second look [at our data] and make sure I didn't miss anything,’” says Peter Sorensen of Lawrence Livermore National Laboratory in California, who worked on an experiment that didn't quite have the sensitivity to test the idea.
Vindicated? For a decade, physicists with the DAMA/LIBRA detector (above) have claimed an observation. New results may bolster their case, one theorist says.
CREDIT: COURTESY OF R. BERNABEI/UNIVERSITY OF ROME TOR VERGATA
Whether or not Hooper's claim stands up, the debate surrounding it underscores two characteristics of the search for dark matter. First, nailing down the particles' properties will likely require connecting many subtle and ambiguous clues. “To discover the nature of dark matter will take a village,” says Rocky Kolb, a cosmologist at the University of Chicago in Illinois. “I don't expect a eureka moment,” when one decisive observation makes everything clear. Second, it's a particularly contentious field. With several relatively small teams (by particle physics standards) competing for a piece of a huge prize, researchers are cagey about discussing their results. And accusations of spinning the data to bolster one claim or another fly this way and that.
When seeking dark matter particles, physicists have three options. First, they can look for the particles floating by, as our galaxy supposedly lies embedded in a vast dark matter “halo.” Dark matter particles should barely interact with ordinary matter, so such “direct searches” require sensitive detectors housed deep underground, where levels of cosmic rays and ordinary radiation fall but dark matter can penetrate. If a dark matter particle strikes an atomic nucleus, then the recoiling nucleus can produce a tiny pulse of electricity, light, and heat.
Second, scientists can turn their eyes to the skies. When two dark matter particles collide, they may annihilate each other, producing gamma rays or other familiar particles. “Indirect searches” might spot the annihilations by detecting excess gamma rays coming from places such as the center of our galaxy with instruments like the orbiting Fermi Gamma-ray Space Telescope or the ground-based High-Energy Stereoscopic System (HESS) in the Khomas Highland of Namibia. Finally, an atom smasher might blast dark matter particles into existence in what's called an “accelerator-based search.”
All three methods may soon pay off, physicists say. That hope is bolstered by a notion called supersymmetry that solves conceptual problems in particle theorists' “standard model” and posits for every known particle a massive undiscovered partner. Some partners could be weakly interacting massive particles (WIMPs) that would make ideal dark matter particles. And if supersymmetry is going to patch up the standard model, then those WIMPs should be observable, Kolb says: “Within the next 10 years, we'll either have very strong evidence of what WIMPs are or—because you can never kill an idea—we'll have given the idea a near-death experience.”
But if Hooper is right, physicists may have already seen signs of such particles. In the past 6 months, he and his colleagues have laid out their case in four papers posted to the arXiv preprint server (). Their arguments rely on something old, something new, something borrowed, and something circling our blue planet.
The old is a controversial result from the Dark Matter (DAMA) detector in Italy's subterranean Gran Sasso National Laboratory. In 2000, physicists reported signs of dark matter particles striking nuclei in a 100-kilogram array of sodium iodide crystals to produce flashes of light. The rate of flashes varied over the year, peaking in June and bottoming out in December. That's what should happen if the galaxy spins in a dark matter halo so that the solar system faces a “wind” of dark matter particles blowing at 230 kilometers per second. As Earth's orbit carries the planet into that wind, it should appear to blow 30 kilometers per second faster, increasing the rate of particle detections. The rate should fall as Earth swings away from the wind.
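The expected shape of that seasonal signal can be sketched with a deliberately crude model: treat the detection rate as simply proportional to the Earth's speed through the dark-matter "wind", a 230 km/s flow to which the Earth's 30 km/s orbital motion adds in June and from which it subtracts in December. This ignores, among other things, the tilt of the Earth's orbit relative to the Sun's galactic motion and the energy dependence of the rate, so it overstates the modulation; it is meant only to show why the rate peaks in early June.

```python
# Crude sketch of an annual-modulation curve: rate ~ Earth's speed through
# the dark-matter wind.  Not a real detector model.
import math

V_WIND = 230.0   # km/s, solar system's motion through the assumed halo
V_ORBIT = 30.0   # km/s, Earth's orbital speed (projection onto the wind ignored)
PEAK_DAY = 152   # ~June 2, when the orbital velocity adds to the wind

def relative_speed(day_of_year: int) -> float:
    phase = 2.0 * math.pi * (day_of_year - PEAK_DAY) / 365.25
    return V_WIND + V_ORBIT * math.cos(phase)

june = relative_speed(152)
december = relative_speed(335)
print(f"June speed:     {june:.0f} km/s")
print(f"December speed: {december:.0f} km/s")
print(f"Fractional modulation: +/-{V_ORBIT / V_WIND:.0%}")  # ~13% in this toy model
```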
The team has now traced that signal for 13 years, the last few with the enlarged DAMA/LIBRA detector, and nobody doubts it's there. DAMA researchers say they cannot identify anything other than dark matter particles that could produce a signal like the one they observed. “Nothing has been found or suggested by anyone in over a decade,” says the team's leader, Rita Bernabei of the University of Rome Tor Vergata. But other experiments have seen nothing, and other researchers say the DAMA team hasn't always explained its crosschecks. “In the past, they've been pretty tightlipped with the details,” says Livermore's Sorensen. “It's like they're saying, ‘We saw something. Now give us the Nobel Prize and go away.’”
Hot spot. Fermi (inset) may be seeing gamma rays from dark matter in the galaxy's core.
CREDIT: NASA
In a paper posted on 27 October, Hooper and colleagues argue that new data suggest DAMA may be seeing dark matter after all. The data were first presented at a conference in Marina Del Ray, California, in February 2010 by researchers working with the Coherent Germanium Neutrino Technology (CoGeNT) detector in the Soudan Underground Mine in northern Minnesota. The 440-gram cylinder of germanium produces an electrical signal when struck by a particle, and researchers saw a tantalizing excess of very low-energy events.
At first, many physicists thought the DAMA and CoGeNT results might agree. Their enthusiasm waned, however, when on a graph of particle mass versus strength of interactions with ordinary matter, the DAMA and CoGeNT results pointed to different regions, suggesting that they could not be signs of the same particles. However, the results depend on so-called quenching factors to relate a signal's size to the energy of the recoiling nucleus, and Hooper realized that the DAMA team used an average value with a tiny uncertainty. So he took from the literature a less-certain low-energy estimate—that's the something borrowed. “You really should include all the uncertainties,” he says. “When you do, these regions get a lot bigger and overlap.”
“I think [the case is] fairly compelling, but we all agree that we're going to need something else to convince us that what we're seeing is dark matter.” —DAN HOOPER, FERMI NATIONAL ACCELERATOR LABORATORY CREDIT: COURTESY OF DAN HOOPER
Then there are observations from the great blue beyond. In a paper posted on 31 December, Hooper and Lisa Goodenough of New York University examined publicly available data collected by the Fermi satellite, launched in June 2008, as it peered at the galactic center. The pair took into account emissions from the galactic disk and the broad bulge in its middle. They extrapolated from the higher-energy gamma ray spectrum measured by HESS to estimate the spectrum at lower energies expected from the galactic center. But the measured flux of gamma rays exceeded expectations for energies below 7 billion electron-volts (GeV). That excess could signal dark matter annihilations.
Not everyone is convinced. In fact, Juan Collar of the University of Chicago, who leads the CoGeNT team, says the team isn't claiming a signal. “We're calling it a background,” says Collar, a coauthor on Hooper's October paper. Peter Michelson of Stanford University in Palo Alto, California, says Fermi researchers' own analysis of gamma ray emission from the galactic center shows an excess around 3 GeV. But so far it's too small and uncertain to be significant, he says. “The bar must be very high for claimed detections of dark matter,” Michelson says. “We are not there yet.”
The dark matter particles would not be exactly what many expected, either. The data point to particles weighing about 7 GeV, or seven times as much as a proton. Physicists expect WIMPs to weigh 10 times more. The W in WIMP stands for the weak nuclear force through which the things would interact with ordinary matter. So WIMPs should weigh about as much as the particles that convey that force, the W and Z bosons, which weigh 80 and 91 GeV. Still, in papers posted 2 September and 5 November, Hooper and colleagues argue that a 7-GeV particle fits into established theory and supersymmetry. “They don't have to violate anything sacred” to do it, Kolb says.
Then there is the question of whether other experiments already skewer Hooper's claim. In September, results from a detector at Gran Sasso called XENON100, which contains 170 kilograms of liquid xenon, seemed to rule out such particles. But Sorensen, who worked on the earlier XENON10 experiment, and a colleague argue in a 31 January preprint that the XENON100 limits rely on an untenable extrapolation of a quenching factor.
A more serious challenge comes from a preprint posted 10 November by physicists working with the Cryogenic Dark Matter Search II (CDMS II) experiment at Soudan. CDMS II detects pulses of electricity and heat when particles strike ultracold disks of silicon and germanium. Looking at just the heat signals to snare the lowest-energy events and analyzing the data conservatively, CDMS II rules out the particles' existence, says Blas Cabrera, the team leader from Stanford. “We believe it robustly rules it out,” he says. “It's not even close.” Hooper acknowledges that the CDMS II limit imposes a constraint but says “there's still wiggle room” for light dark matter particles.
The debate over Hooper's claim reveals the contentious nature of the field, in which pretty much every team draws fire for overstating what its data show. For example, Collar isn't always as conservative as he professes to be about interpreting CoGeNT's excess, Sorensen says: “He may tell you that it's a background, but let me tell you, when he gives a talk he doesn't try to dissuade theorists from interpreting it as a signal.”
What would clinch the case for a light dark matter particle? Little doubt could remain if CoGeNT saw an annual cycle that matches DAMA's, Hooper says. But Collar won't be convinced of any sighting until the purported particles also emerge in collisions at the world's largest atom smasher, the Large Hadron Collider at the European particle physics laboratory, CERN, near Geneva, Switzerland. “If people don't see it in accelerator experiments and other ways, I'm just not going to believe it,” he says.
Even if his claim falls apart, Hooper says he won't regret having made it. “I enjoy taking something that might be true and exploring it rather than working all the time on something I know to be true,” he says. He's also had the pleasure of stirring up the community, which seems fairly easy to do with expectations running so high.
Science 11 February 2011:
pp. 712-714
DOI: 10.1126/science.1202828
1 Jackson School of Geosciences, The University of Texas, Austin, TX 78712, USA. 2 Department of Radiology, University of California, San Diego, CA 92093, USA. *To whom correspondence should be addressed.
Abstract Three-dimensional computing is driving what many would call a revolution in scientific visualization. However, its power and advancement are held back by the absence of sustainable archives for raw data and derivative visualizations. Funding agencies, professional societies, and publishers each have unfulfilled roles in archive design and data management policy.
Three-dimensional (3D) image acquisition, analysis, and visualization are increasingly important in nearly every field of science, medicine, engineering, and even the fine arts. This reflects rapid growth of 3D scanning instrumentation, visualization and analysis algorithms, graphic displays, and graduate training. These capabilities are recognized as critical for future advances in science, and the U.S. National Science Foundation (NSF) is one of many funding agencies increasing support for 3D imaging and computing. For example, one new initiative aims at digitizing biological research collections, and NSF’s Earth Sciences program will soon announce a new target in cyberinfrastructure, with a spotlight on 3D imaging of natural materials.
Many consider the advent of 3D imaging a “scientific revolution” (), but in many ways the revolution is still nascent and unfulfilled. Given the increasing ease of producing these data and rapidly increased funding for imaging in nonmedical applications, a major unmet challenge to ensure maximal advancement is archiving and managing the science that will be, and even has already been, produced. To illustrate the problems, we focus here on one domain, volume elements or “voxels,” the 3D equivalents of pixels, but the argument applies more broadly across 3D computing. Useful, searchable archiving requires infrastructure and policy to enable disclosure by data producers and to guarantee quality to data consumers (). Voxel data have generally lacked a coherent archiving and dissemination policy, and thus the raw data behind thousands of published reports are not released or available for validation and reuse. A solution requires new infrastructure and new policy to manage ownership and release ().
Technological advancements in both medical and industrial scanners have increased the resolution of 3D volumetric scanners to the point that they can now volumetrically digitize structures from the size of a cell to a blue whale, with exquisite sensitivity toward scientific targets ranging from tissue properties to the composition of meteorites. Voxel data sets are generated by rapidly diversifying instruments that include x-ray computed tomographic (CT) scanners (), magnetic resonance imaging (MRI) scanners (), confocal microscopes, synchrotron light sources, electron-positron scanners, and other innovative tools to digitize entire object volumes.
Fig. 1 (A) Photomicrograph of fossil tooth (Morganucodon sp.). (B) MicroXCT slice, 3.2-μm voxels; arrows show ring artifact, a correctable problem with raw data but an interpretive challenge with compressed data. (C) Digital 3D reconstruction. (D) Slice detail; red arrows show growth bands in dentine, and blue arrows mark the enamel-dentine boundary, both observed in Morganucodon for the first time in these scans.
Fig. 2 Multimodal imaging, segmentation, and registration of Island Kelpfish, Alloclinus holderi. (A) Specimen (94 mm standard length); (B) 7T MRI sagittal slice from DFL (100-μm3 voxel size); (C) CT reconstruction from DigiMorph; (D) combined CT bone and MRI soft-tissue 3D reconstructions from DFL.
CT and MRI have evolved across the widest range of applications. CT is sensitive to density. Its greatest strength is imaging dense materials like rocks, fossils, the bones in living organisms and, to a lesser extent, soft tissue. The first clinical CT scan, made in 1971, used an 80 by 80 matrix of 3 mm by 3 mm by 13 mm voxels, each slice measuring 6.4 Kb and taking up to 20 min to acquire. Each slice image took 7 min to reconstruct on a mainframe computer (). In 1984, the first fossil, a 14-cm-long skull of the extinct mammal Stenopsochoerus, was scanned in its entirety (), signaling CT’s impact beyond the clinical setting and in digitizing entire object volumes. The complete data set measured 3.28 Mb. With a mainframe computer, rock matrix was removed to visualize underlying bone, and surface models were reconstructed in computations taking all night. By 1992, industrial adaptations brought CT an order-of-magnitude higher resolution (high-resolution x-ray computed tomography, HRXCT) to inspect much smaller, denser objects. A fossil skull of the stem-mammal Thrinaxodon (68 mm long) was scanned in 0.2-mm-thick slices measuring 119 Kb each. Scanning the entire volume took 6 hours, and the complete raw data set occupied 18.4 Mb (). The scans revealed all internal details reported earlier, from destructive mechanical serial sectioning of a different specimen, and pushed the older technique toward extinction (). In 2008, the tiny tooth in
Fig. 1 was scanned on a modern nanoXCT scanner, with the use of a cone beam to acquire the entire volume in multiple 20-s views, rather than in individual slices. The scan took 4.6 hours, generating 3.2-μm3 voxels in a data set consuming just over 1 Gb.
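The growth in raw data volume across those milestones follows directly from the voxel counts. The sketch below reproduces the per-slice figure quoted for the 1971 clinical scan under the assumption of roughly one byte per voxel, and shows how slice size and slice count multiply out to a whole-volume size; the byte-per-voxel values are assumptions for illustration, not numbers from the article.

```python
# Back-of-the-envelope data sizes for slice-based scans.
# Assumes ~1 byte per voxel for the 1971 case, purely for illustration.
def slice_bytes(nx: int, ny: int, bytes_per_voxel: int = 1) -> int:
    return nx * ny * bytes_per_voxel

def volume_bytes(nx: int, ny: int, n_slices: int, bytes_per_voxel: int = 1) -> int:
    return slice_bytes(nx, ny, bytes_per_voxel) * n_slices

# 1971 clinical scan: 80 x 80 matrix -> ~6.4 kB per slice, as quoted in the text.
print(f"1971 slice: {slice_bytes(80, 80) / 1e3:.1f} kB")

# A hypothetical 512 x 512 scan with 1000 slices at 2 bytes per voxel:
print(f"Modern volume: {volume_bytes(512, 512, 1000, 2) / 1e9:.2f} GB")
```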
In MRI, accelerated data acquisition with greater resolution has also been achieved, with the development of increasingly sophisticated hardware and higher-field magnets. Standard clinical human scanners (3 T) can acquire full 3D volumes of data (e.g., human brain) at 0.5-mm isotropic resolution in minutes. High-field (7 and 11.7 T) small-bore scanners are currently standard for high-resolution small-animal biomedical imaging (~80 and ~40 μm3 voxels, respectively), but human 7-T systems have now been developed. Unlike other clinical scanners, MRI can discriminate soft tissues because of its sensitivity to the state of tissue water. This sensitivity can take many forms and affords the ability to create contrast based on a wide variety of variations in tissue microstructure and physiology, such as local water content, tissue relaxation times, diffusion, perfusion, and oxygenation state. Therefore, for a single spatial location, there might be associated multiple voxels, and the data associated with each voxel can be of much higher dimensionality. For example, a typical diffusion tensor MRI (DT-MRI) data set might consist of multiple (e.g., 60) uniquely diffusion-sensitized images (60 voxels per spatial location), and each voxel can have associated with it multiple diffusion-related parameters derived from these 60 voxels (local diffusion tensor, mean diffusivity, neural fiber reconstructions, etc.).
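The point about dimensionality is easiest to see as array shapes. The sketch below uses NumPy with made-up volume dimensions (nothing in the article fixes them); only the "60 diffusion-sensitized acquisitions" figure comes from the text. Each spatial location carries 60 measurements, and derived products such as the local diffusion tensor and scalar maps add further values per voxel.

```python
# Illustration of per-voxel dimensionality in a DT-MRI data set.
# Volume dimensions are hypothetical; only the 60-direction figure is from the text.
import numpy as np

nx, ny, nz = 96, 96, 50          # hypothetical spatial grid
n_directions = 60                # diffusion-sensitized acquisitions per location

raw = np.zeros((nx, ny, nz, n_directions), dtype=np.float32)
tensor = np.zeros((nx, ny, nz, 3, 3), dtype=np.float32)     # derived diffusion tensor
mean_diffusivity = np.zeros((nx, ny, nz), dtype=np.float32)  # derived scalar map

print(f"raw acquisition:  {raw.nbytes / 1e6:.0f} MB")
print(f"derived tensor:   {tensor.nbytes / 1e6:.0f} MB")
print(f"scalar map (MD):  {mean_diffusivity.nbytes / 1e6:.1f} MB")
```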
By noninvasively producing digital data to visualize and measure internal architecture and physiology, voxel scanners provide a rich source of new quantifiable scientific discovery for anyone with access to those data. Retrievable archived digital data lends itself to repeated reuse with the latest advances in computational power and sophistication. This has become evident in data visualization, for example, where surface meshes can be extracted, internal parts can be segmented for discrete visualization, and a new frontier of quantitative 3D analysis is opening. Moreover, fully 3D digitized data allow different modalities to be combined to build data sets that are more informative than the individual data sets by themselves (). Scanned objects and segmented parts can be printed as physical replicas, through the use of stereolithography, laser sintering, and other “rapid prototyping” devices. The near future promises holographic display, 3D shape queries, and other potential uses.
As better voxel data have been obtained, their half-life and utility have steadily grown. Early CT data were stored on magnetic tape, whereas Stenopsochoerus was stored on floppy disks, and these data all died as their storage media became obsolete. Thrinaxodon marked a turning point. The emergence of CD-ROM afforded inexpensive and more enduring storage and dissemination of not only the raw data set, but also derivative digital products, including animations, volumetric models, and scientific reports on the data. When published in 1993 on CD-ROM (), the Thrinaxodon archive consumed 623 Mb and demonstrated another trend, that derivative products consume larger volumes than the original data. The aforementioned DT-MRI data set would take ~100 Mb of storage, but its derivative products can easily take up 10 times that space. Data archiving and management thus become increasingly important in providing the access and information to facilitate the continuing use of such complex data, and require the development of a cyberinfrastructure commensurate with the evolving complexity of the raw data and its derivatives.
But how far into the future will today’s voxels survive, and just how much will the immense public investment already made in voxels return? Their potential power in fueling science into the future is well illustrated by genetic sequences, a much younger data “species” than the voxel. The rapid growth of genetics was facilitated by the ability to (i) inexpensively digitize sequences, (ii) analyze them on inexpensive computers, (iii) share data across the Internet, and (iv) reuse individual data sets and derivative products (e.g., alignment data sets) that were assembled into large, publicly accessible collections governed by policy on ownership and release. The last element immensely amplified the value and return on sequence data. For voxels, this key element is missing.
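What a "GenBank for voxels" accession might minimally record can be sketched as a metadata structure. The fields below are assumptions chosen to reflect the policy issues this article raises (vouchered provenance, acquisition parameters, and explicit ownership and release terms); no such schema is defined in the article.

```python
# Hypothetical metadata record for one accession in a voxel archive.
# Field names are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class VoxelAccession:
    accession_id: str            # stable archive identifier
    specimen_voucher: str        # link back to the physical specimen
    modality: str                # e.g. "HRXCT", "MRI", "confocal"
    voxel_size_um: tuple         # (x, y, z) voxel edge lengths in micrometres
    dimensions: tuple            # (nx, ny, nz)
    raw_data_uri: str            # where the full-resolution raw data live
    checksum_sha256: str         # integrity check for the raw volume
    owner: str                   # data producer / rights holder
    release_date: str            # embargo end, ISO date
    license: str                 # reuse terms
    derivatives: list = field(default_factory=list)  # meshes, animations, ...

record = VoxelAccession(
    accession_id="VOX-000123",
    specimen_voucher="Museum:Collection:456",
    modality="HRXCT",
    voxel_size_um=(3.2, 3.2, 3.2),
    dimensions=(1024, 1024, 1024),
    raw_data_uri="https://example.org/archive/VOX-000123/raw",
    checksum_sha256="<sha256 of raw volume>",
    owner="Scanning lab / depositor",
    release_date="2026-01-01",
    license="CC BY 4.0",
)
print(record.accession_id, record.modality, record.dimensions)
```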
Voxel-based data underlie thousands of publications. Yet we have no census of how many data sets survive, and the basic scientific tenet of data disclosure is unfulfilled. Only two small prototype biological voxel collections have appeared. The DigiMorph project () now serves HRXCT data sets from a collection of 1018 vouchered biological and paleontological specimens scanned by the University of Texas since 1992 (), and the Digital Fish Library (DFL) project serves 295 MRI data sets () generated more recently at the University of California, San Diego, from specimens in the Scripps Institution of Oceanography Marine Vertebrates Collection (). Both treat accessioned data sets much like vouchered specimens. The Web site for DigiMorph went live in 2002 and that for DFL in 2005, and together they have reached more than 3 million unique visitors who have downloaded 3 Tb of data. The two most popular specimens at DigiMorph have each been accessed by ~80,000 unique visitors. More than 100 peer-reviewed scientific papers, theses, and dissertations are published on data sets from the two sites. Both collections effectively increased access to expensive instrumentation at reduced costs and delivered high-quality data to a large, diversifying global audience for use in both research and education.
As with genetics, our experience predicts that data disclosure, reuse, query efficiency, outreach, and other advances will be driven by strategically designed voxel archives. As data volumes increase, one strategic element is compression. For example, abstracting data sets into small (~1 to 5 Mb) animations was crucial to Web dissemination by DigiMorph and DFL. But compression can destabilize data as computing environments evolve, and it poses risks to interpretation and validation. A best practice is to work from the original raw data, and DigiMorph and DFL have received increasing numbers of requests for the full-resolution voxel data sets, which are not yet available online. A great measure of return from the huge investment already made in voxels will depend on sustainable archives with life spans and capacities equal to the utility and size of the raw data and its derivatives.
Technology for producing a voxel commons is less challenging than addressing the void in policies for handling voxel data. Views on data ownership, latency, and release, even within the academic community, are diffuse and polarized, which calls for standards set by publishers, societies, and funding agencies. Data quality and formats vary, and different disciplines have much to gain by developing their own standards, architectures, interface designs, and metadata tied to particular species of data. Many professional societies, however, still seem unaware that voxel data fundamental to new discoveries in their own disciplines are not being released, much less validated or reused, and in many cases, are not being saved or curated at all.
Online supplemental data limits are too small for voxels, nor is it the role of publishers to manage primary data collections. Extending the GenBank model to voxels is a solution within reach, if not without obstacles. Sustained funding is paramount. Another need is professional advancement, still bound to metrics of conventional publication, while the more fundamental tasks of data generation and management go unrewarded. Young careers are still best served by publishing words and pixels, and abandoning used voxels to get on to the next project. As funding agencies pour increasing millions into scanners and scanning, only negligible funding and thought have gone to data archiving or leveraging their initial investments. There is urgency to act. As second-generation voxel scientists have now begun to retire, their data are on track to die with them, as it did with the first voxel pioneers, even as we now train a third generation in 3D imaging and computation. Funding agencies can rejoice in the unexpected longevity and growing value in voxels they have already produced. But they must first secure the basic tenet of science by ensuring that researchers have the means to archive, disclose, validate, and repurpose their primary data.
References and Notes
B. A. Price, I. S. Small, R. M. Baecker, in Proceedings of the 25th Hawaii International Conference on System Sciences, 7 to 10 January 1992, vol. 2, p. 597.
J. L. Contreras, Information access. Prepublication data release, latency, and genome commons. Science 329, 393 (2010).
C. Hes
