Medical applications: Big Data is large enough to create far more individualized “care recipes”
Until 1980, clinicians relied heavily on “experience,” “instinct,” and “intangible clues” to determine whether a child with a fever had a minor disease (such as a cold) or something more serious (such as pneumonia or meningitis). In other words, they used “intuition.” In 1980, a group of researchers studied how experienced pediatricians assess these patients. They identified several findings that expert clinicians use as “inputs” to their intuition, but found they were too subjective to be reliably used by doctors with less experience.
In follow-up studies, the researchers honed their system to be more accurate and objective. Using this system, pediatricians in training were able to assess illness severity for children with fever almost as well as experienced pediatricians! In essence, the building blocks of this intuition had been identified and quantified into a form that new doctors could use despite their lack of experience. Today, nearly every doctor treating a febrile child documents these subtle findings.
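To make the idea of quantifying intuition concrete, here is a minimal sketch in Python of what a formalized observation score could look like. The findings, weights, and threshold below are invented for illustration; they are not the researchers’ actual instrument.

```python
# Hypothetical sketch of turning clinical "inputs to intuition" into a
# quantified score. Findings, weights, and cutoff are invented; this is
# NOT the actual instrument from the 1980 study.

OBSERVATION_WEIGHTS = {
    "weak_cry": 2,
    "poor_eye_contact": 3,
    "pale_or_mottled_skin": 3,
    "drowsy_or_hard_to_rouse": 4,
}

HIGH_RISK_THRESHOLD = 6  # invented cutoff

def illness_severity_score(findings: set[str]) -> int:
    """Sum the weights of the findings observed in a febrile child."""
    return sum(OBSERVATION_WEIGHTS.get(f, 0) for f in findings)

def needs_close_evaluation(findings: set[str]) -> bool:
    """Flag children whose score reaches the (invented) risk threshold."""
    return illness_severity_score(findings) >= HIGH_RISK_THRESHOLD

print(needs_close_evaluation({"weak_cry", "drowsy_or_hard_to_rouse"}))  # True
```

A checklist like this is exactly the kind of “form” a doctor in training can apply reliably without years of pattern recognition.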
If we aim for every clinician to provide perfect care every time, then we need more than just intuition and expertise, because nobody’s perfect. Evidence-based Medicine (EBM) helps clinicians provide better care by summarizing clinical studies into care guidelines. However, EBM is generally based on “Small Data” studies – a large EBM study consists of thousands of cases instead of the millions or billions typical of Big Data. With such small sample sizes the data inputs must be well-defined and perfectly formatted, and the resulting all-encompassing guidelines often fail to account for differences between patients. EBM is sometimes derided as “cookbook medicine” where doctors blindly follow “recipes” for care. Chicken and spinach might be a great meal for most people, but what if I’m serving a vegetarian?
Big Data is large enough to create “care recipes” that are far more individualized. With a dataset of 500 million people, you can create one care recipe for overweight, 35-year old men with high cholesterol on daily aspirin and Lipitor; and another recipe for the same group of men who differ only by being underweight.
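As a rough sketch of what that stratification might look like in practice, the snippet below groups a tiny, invented patient table into cohorts by the attributes named above; with a 500-million-person dataset, even narrow cohorts like these would still hold enough patients to analyze. The column names and BMI cutoffs are assumptions for illustration, not a real clinical schema.

```python
# Minimal sketch of cohort-based "care recipes". The table, column names,
# and BMI cutoffs are invented; a real dataset would have ~500M rows.
import pandas as pd

patients = pd.DataFrame({
    "age":        [35, 35, 35, 62],
    "sex":        ["M", "M", "M", "F"],
    "bmi":        [31.0, 17.5, 30.2, 24.0],
    "high_chol":  [True, True, True, False],
    "on_aspirin": [True, True, True, False],
    "on_lipitor": [True, True, True, False],
})

def bmi_category(bmi: float) -> str:
    if bmi < 18.5:
        return "underweight"
    if bmi >= 25:
        return "overweight"  # simplified two-way split for the sketch
    return "normal"

patients["bmi_cat"] = patients["bmi"].map(bmi_category)

# One cohort -- and hence one candidate care recipe -- per combination:
cohorts = patients.groupby(
    ["age", "sex", "bmi_cat", "high_chol", "on_aspirin", "on_lipitor"]
)
for key, members in cohorts:
    print(key, "->", len(members), "patient(s)")
```

Each group key (for example, 35-year-old overweight men with high cholesterol on both drugs) would get its own recipe, rather than one guideline covering everyone.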
Big Data also enables analytics that “read between the lines” to find subtle but powerful clues embedded in raw, unprocessed data. Small Data often can’t handle raw input because it can’t tell that “MI” and “myocardial infarction” both refer to the same thing, and there aren’t enough cases in Small Data to draw valid conclusions by using just one of those terms. Small Data is also too small to allow the analytics to “figure out” that “MI” and “myocardial infarction” are equivalent terms. Small Data also isn’t big enough to use subtle clues as inputs because they occur too infrequently in the dataset – valid conclusions cannot be drawn from such small sample sizes.
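One way to see why scale matters here: recognizing that two raw strings name the same concept is itself a statistical inference. In the sketch below (records and terms are invented), each term is represented by the company it keeps across records, and high context overlap suggests that “MI” and “myocardial infarction” are equivalent. With only a handful of records split across the two spellings, the same overlap estimate would be noise.

```python
# Sketch: inferring that two raw terms are synonyms from shared context.
# Records and terms are invented. Each record lists the raw terms charted
# for one patient; terms that keep similar company are synonym candidates.
from collections import defaultdict
from itertools import combinations

records = [
    {"MI", "troponin_elevated", "chest_pain"},
    {"myocardial infarction", "troponin_elevated", "chest_pain"},
    {"MI", "st_elevation", "troponin_elevated"},
    {"myocardial infarction", "st_elevation", "chest_pain"},
    # ...with Big Data there would be millions of such records
]

neighbors: dict[str, set] = defaultdict(set)
for rec in records:
    for a, b in combinations(rec, 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

def jaccard(a: str, b: str) -> float:
    """Context overlap between two terms (1.0 = identical company)."""
    na, nb = neighbors[a] - {b}, neighbors[b] - {a}
    return len(na & nb) / len(na | nb) if na | nb else 0.0

print(jaccard("MI", "myocardial infarction"))  # high overlap -> likely synonyms
```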
Arguments are raging about whether Big Data is usurping the role of intuition in Medicine. However, Big Data is our best hope for computers to do a better job of mimicking the intuition of human experts, so that we no longer need to rely on Small Data EBM. The real problem with Big Data is not its threat to clinical intuition but rather our failure to use it at all. We aren’t using Big Data heavily in Medicine because it requires really big datasets, and medical researchers don’t have really big clinical datasets.
Building, maintaining, de-identifying, and securing clinical datasets has high costs. The penalties for failing to secure the data are severe, while the incentives to create such datasets are nearly non-existent. Even government-supported Health Information Exchanges often don’t actually aggregate data. Rather, they serve as record locators into external systems where data can be retrieved one patient at a time, often only in summary form. Big Data analytics cannot be done with that architecture.
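A toy contrast makes the architectural point concrete. The interfaces and data below are invented: a record-locator service hands back one patient’s summary per request, so any population-scale question degenerates into a loop of per-patient lookups, whereas Big Data analytics needs the whole dataset addressable in one query or scan.

```python
# Toy contrast: record-locator access vs. an aggregated dataset.
# All names and records are invented; this is an architectural sketch.

SUMMARIES = {
    "patient-1": {"age": 35, "dx": ["MI"]},
    "patient-2": {"age": 62, "dx": ["cold"]},
    # ...a real exchange fronts millions of patients
}

def locate_record(patient_id: str) -> dict:
    """Record-locator style: one patient, summary form, per request."""
    return SUMMARIES[patient_id]

# A population-level question now requires knowing every ID up front and
# issuing one request per patient -- unworkable at Big Data scale:
mi_count = sum("MI" in locate_record(pid)["dx"] for pid in SUMMARIES)
print(mi_count, "heart attack(s) found, one lookup at a time")
```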
However, the biggest barriers to big medical datasets are the prevailing “best practices” in medical informatics, which continue to lag other industries by ten to twenty years. Medical informaticists still insist upon enforcing the antiquated data curation processes required to support “Small Data” analytics. Only approved, standardized, encoded data can be accepted – no raw data or subtleties allowed here! The resulting dataset is small because the curation process is a resource-intensive bottleneck, and much of the available data is rejected for lack of conformity. Curation creates a homogenized data product devoid of the variety that makes it truly useful, not unlike white bread, an empty carb source stripped of its whole grain nutrient goodness. Google and Amazon do not succeed at Big Data despite a lack of curation; they succeed because of it.
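Here is a minimal sketch of curation-as-bottleneck, with an invented vocabulary and invented records: a strict gate admits only data coded in the approved scheme and silently discards everything else, which is precisely how rich raw input becomes a small, homogenized dataset.

```python
# Sketch of a curation gate: only records coded with approved terms
# survive. Vocabulary, field names, and records are invented.

APPROVED_CODES = {"I21.9"}  # say, the one sanctioned heart-attack code

incoming = [
    {"patient": "a", "dx": "I21.9"},
    {"patient": "b", "dx": "MI"},                     # rejected: not coded
    {"patient": "c", "dx": "myocardial infarction"},  # rejected: free text
    {"patient": "d", "dx": "crushing chest pain"},    # rejected: subtlety
]

curated = [r for r in incoming if r["dx"] in APPROVED_CODES]
raw = list(incoming)  # the Big Data alternative: keep it all, interpret later

print(f"{len(curated)} of {len(incoming)} records survive curation")
```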
Until every doctor has perfect intuition all the time, computers should help us provide better care. Big Data can play a big role in supporting care, provided that we start creating truly Big Data and stop trying to apply Small Data thinking to the process.
Dr. Jonathan Handler serves as the Chief Medical Information Officer at M*Modal and is a board-certified emergency physician with twenty years of experience in Medical Informatics.