
Medical Applications: Big Data Is Large Enough to Create Far More Individualized "Care Recipes"
Until 1980, clinicians relied heavily on “experience,” “instinct,” and “intangible clues” to determine whether a child with a fever had a minor disease (such as a cold) or something more serious (such as pneumonia or meningitis). In other words, they used “intuition.” In 1980, a group of researchers studied how experienced pediatricians assess these patients. They identified several findings that expert clinicians use as “inputs” to their intuition, but found they were too subjective to be reliably used by doctors with less experience.
In follow-up studies, the researchers honed their system to be more accurate and objective. Using this system, pediatricians in training were able to assess illness severity for children with fever almost as well as experienced pediatricians! In essence, the building blocks of this intuition had been identified and quantified into a form that new doctors could use despite their lack of experience. Today, nearly every doctor treating a febrile child documents these subtle findings.
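The idea of quantifying intuition into a form new doctors can apply might be sketched as a simple point score. This is a hypothetical illustration of the general approach, not the published instrument: the item names, weights, and cutoff below are invented.

```python
# Hypothetical sketch: qualitative clinical observations quantified into a
# point score. Items, weights, and the risk cutoff are invented for
# illustration and are NOT the instrument from the study described above.

# Each observation is scored 1 (normal), 3 (moderate impairment),
# or 5 (severe impairment), mirroring the shape of common clinical scales.
ITEMS = ["quality_of_cry", "reaction_to_parents", "color",
         "hydration", "response_to_social_overtures"]

def severity_score(observations):
    """Sum per-item scores; reject missing or out-of-range values."""
    total = 0
    for item in ITEMS:
        value = observations[item]
        if value not in (1, 3, 5):
            raise ValueError(f"{item} must be scored 1, 3, or 5")
        total += value
    return total

def risk_band(score, low_cutoff=10):
    """Map a total score to a coarse risk label (cutoff is illustrative)."""
    return "low risk" if score <= low_cutoff else "elevated risk"

well_child = {item: 1 for item in ITEMS}          # all findings normal
ill_child = dict(well_child, color=5, hydration=3)  # two abnormal findings
```

Once the "inputs to intuition" are written down this way, a trainee can apply them as reliably as the checklist allows, which is the essence of the transformation the researchers achieved.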
If we aim for every clinician to provide perfect care every time, then we need more than just intuition and expertise, because nobody’s perfect. Evidence-based Medicine (EBM) helps clinicians provide better care by summarizing clinical studies into care guidelines. However, EBM is generally based on “Small Data” studies – a large EBM study consists of thousands of cases instead of the millions or billions typical of Big Data. With such small sample sizes the data inputs must be well-defined and perfectly formatted, and the resulting all-encompassing guidelines often fail to account for differences between patients. EBM is sometimes derided as “cookbook medicine” where doctors blindly follow “recipes” for care. Chicken and spinach might be a great meal for most people, but what if I’m serving a vegetarian?
Big Data is large enough to create "care recipes" that are far more individualized. With a dataset of 500 million people, you can create one care recipe for overweight, 35-year-old men with high cholesterol on daily aspirin and Lipitor, and another recipe for men identical in every respect except that they are underweight.
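The stratification described above can be sketched in a few lines: define each "recipe" cohort by a tuple of patient traits, then aggregate outcomes within each stratum. The records, field names, and BMI cutoffs below are invented for illustration; in a real Big Data setting there would be millions of rows, so even very narrow strata would remain statistically useful.

```python
from collections import defaultdict

# Hypothetical patient records (real datasets would have millions of rows).
patients = [
    {"age": 35, "sex": "M", "bmi": 31.0, "high_cholesterol": True,
     "meds": {"aspirin", "lipitor"}, "good_outcome": True},
    {"age": 35, "sex": "M", "bmi": 30.2, "high_cholesterol": True,
     "meds": {"aspirin", "lipitor"}, "good_outcome": False},
    {"age": 35, "sex": "M", "bmi": 17.8, "high_cholesterol": True,
     "meds": {"aspirin", "lipitor"}, "good_outcome": True},
]

def bmi_band(bmi):
    """Coarse BMI banding (illustrative cutoffs)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    return "overweight"

def stratum(p):
    """The tuple of traits that defines one 'care recipe' cohort."""
    return (p["age"], p["sex"], bmi_band(p["bmi"]),
            p["high_cholesterol"], frozenset(p["meds"]))

outcomes = defaultdict(list)
for p in patients:
    outcomes[stratum(p)].append(p["good_outcome"])

# One recipe per stratum: here, simply the observed good-outcome rate.
recipes = {s: sum(v) / len(v) for s, v in outcomes.items()}
```

The point of the sketch is the grouping key: the more traits it includes, the more individualized the recipe, and the more rows you need before each stratum is large enough to trust.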
Big Data also enables analytics that “read between the lines” to find subtle but powerful clues embedded in raw, unprocessed data. Small Data often can’t handle raw input because it can’t tell that “MI” and “myocardial infarction” both refer to the same thing, and there aren’t enough cases in Small Data to draw valid conclusions by using just one of those terms. Small Data is also too small to allow the analytics to “figure out” that “MI” and “myocardial infarction” are equivalent terms. Small Data also isn’t big enough to use subtle clues as inputs because they occur too infrequently in the dataset – valid conclusions cannot be drawn from such small sample sizes.
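The "MI" versus "myocardial infarction" problem comes down to folding surface variants onto one canonical label before counting. A minimal sketch, with an invented synonym map (real systems derive such mappings from terminologies like SNOMED CT, or learn them from the data itself):

```python
from collections import Counter

# Hypothetical synonym map; entries are invented for illustration.
CANONICAL = {
    "mi": "myocardial infarction",
    "myocardial infarction": "myocardial infarction",
    "heart attack": "myocardial infarction",
}

def normalize(term):
    """Fold surface variants of a diagnosis onto one canonical label."""
    key = term.strip().lower()
    return CANONICAL.get(key, key)

raw_diagnoses = ["MI", "myocardial infarction", "Heart attack", "MI", "stroke"]

# Without normalization the cases are split across variant spellings;
# with it, they pool into a single, larger count.
raw_counts = Counter(t.strip().lower() for t in raw_diagnoses)
pooled_counts = Counter(normalize(t) for t in raw_diagnoses)
```

In a Small Data study the split counts (2, 1, 1) might each be too sparse to support any conclusion; pooling them, or having enough data to detect the equivalence automatically, is exactly what the passage above argues Big Data makes possible.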
Arguments are raging about whether Big Data is usurping the role of intuition in Medicine. However, Big Data is our best hope for computers to do a better job at mimicking the intuition of human experts so that we no longer need to rely on Small Data EBM. The real problem with Big Data is not its threat to clinical intuition but rather our failure to do it at all. We aren’t using Big Data heavily in Medicine because it requires really big datasets, and medical researchers don’t have really big clinical datasets.
Building, maintaining, de-identifying, and securing clinical datasets has high costs. The penalties for failing to secure the data are severe, while the incentives to create such datasets are nearly non-existent. Even government-supported Health Information Exchanges often don’t actually aggregate data. Rather, they serve as record locators into external systems where data can be retrieved one patient at a time, often only in summary form. Big Data analytics cannot be done with that architecture.
However, the biggest barriers to big medical datasets are the prevailing "best practices" in medical informatics, which continue to lag other industries by ten to twenty years. Medical informaticists still insist upon enforcing the antiquated data curation processes required to support "Small Data" analytics. Only approved, standardized, encoded data can be accepted – no raw data or subtleties allowed here! The resulting dataset is small because the curation process is a resource-intensive bottleneck, and much of the available data is rejected for lack of conformity. Curation creates a homogenized data product devoid of the variety that makes it truly useful, not unlike white bread, an empty carb source stripped of its whole grain nutrient goodness. Google and Amazon do not succeed at Big Data despite a lack of curation; they succeed because of it.
Until every doctor has perfect intuition all the time, computers should help us provide better care. Big Data can play a big role in supporting care, provided that we start creating truly Big Data and stop trying to apply Small Data thinking to the process.
Dr. Jonathan Handler serves as the Chief Medical Information Officer at M*Modal and is a board-certified emergency physician with twenty years of experience in Medical Informatics.