I want to get at least a little involved in data mining, so I have recently been reviewing some algorithms. Since I have also picked up a bit of R, which I find to be an excellent tool for statistical analysis and mining, I decided to implement a few mining algorithms in R.
Naive Bayes is probably the simplest mining algorithm of all. Chapter 4 of 《统计学习方法》 (Statistical Learning Methods) describes it in detail: for an input feature vector x, the learned model is used to compute the posterior probability of each class, and the class with the largest posterior probability is returned as the output.
By Bayes' theorem, the posterior probability P(Y = ck | X = x) = P(X = x | Y = ck) * P(Y = ck) / P(X = x). Since the denominator P(X = x) is the same for every class, we output the class ck that maximizes P(X = x | Y = ck) * P(Y = ck). The "naive" part is the conditional independence assumption: P(X = x | Y = ck) is computed as the product of the per-feature conditional probabilities, which is exactly what the code below does.
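For reference, the decision rule can be written compactly as follows (this is just a restatement of the rule above in the book's notation, where x^(j) denotes the j-th feature of x and the product comes from the conditional independence assumption):

$$\hat{y} = \arg\max_{c_k} \; P(Y = c_k) \prod_{j=1}^{n} P\left(X^{(j)} = x^{(j)} \mid Y = c_k\right)$$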
Below is an example of naive Bayes classification in R on a small data set; the code is as follows:
# Build the training set: 14 examples with features outlook, temperature, humidity, wind and class label playtennis
data <- matrix(c("sunny","hot","high","weak","no",
"sunny","hot","high","strong","no",
"overcast","hot","high","weak","yes",
"rain","mild","high","weak","yes",
"rain","cool","normal","weak","yes",
"rain","cool","normal","strong","no",
"overcast","cool","normal","strong","yes",
"sunny","mild","high","weak","no",
"sunny","cool","normal","weak","yes",
"rain","mild","normal","weak","yes",
"sunny","mild","normal","strong","yes",
"overcast","mild","high","strong","yes",
"overcast","hot","normal","weak","yes",
"rain","mild","high","strong","no"), byrow = TRUE,
dimnames = list(day = c(),
condition = c("outlook","temperature",
"humidity","wind","playtennis")), nrow=14, ncol=5);
# Compute the prior probabilities P(playtennis = yes) and P(playtennis = no)
prior.yes = sum(data[,5] == "yes") / length(data[,5]);
prior.no = sum(data[,5] == "no") / length(data[,5]);
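For reference, with this 14-row training set (9 rows labeled "yes" and 5 labeled "no"), these priors come out to prior.yes = 9/14 ≈ 0.643 and prior.no = 5/14 ≈ 0.357.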
# Model: naive Bayes prediction for a new condition vector (outlook, temperature, humidity, wind)
naive.bayes.prediction <- function(condition.vec) {
# Calculate the unnormalized posterior probability for playtennis = yes.
playtennis.yes <-
sum((data[,1] == condition.vec[1]) & (data[,5] == "yes")) / sum(data[,5] == "yes") * # P(outlook = f_1 | playtennis = yes)
sum((data[,2] == condition.vec[2]) & (data[,5] == "yes")) / sum(data[,5] == "yes") * # P(temperature = f_2 | playtennis = yes)
sum((data[,3] == condition.vec[3]) & (data[,5] == "yes")) / sum(data[,5] == "yes") * # P(humidity = f_3 | playtennis = yes)
sum((data[,4] == condition.vec[4]) & (data[,5] == "yes")) / sum(data[,5] == "yes") * # P(wind = f_4 | playtennis = yes)
prior.yes; # P(playtennis = yes)
# Calculate the unnormalized posterior probability for playtennis = no.
playtennis.no <-
sum((data[,1] == condition.vec[1]) & (data[,5] == "no")) / sum(data[,5] == "no") * # P(outlook = f_1 | playtennis = no)
sum((data[,2] == condition.vec[2]) & (data[,5] == "no")) / sum(data[,5] == "no") * # P(temperature = f_2 | playtennis = no)
sum((data[,3] == condition.vec[3]) & (data[,5] == "no")) / sum(data[,5] == "no") * # P(humidity = f_3 | playtennis = no)
sum((data[,4] == condition.vec[4]) & (data[,5] == "no")) / sum(data[,5] == "no") * # P(wind = f_4 | playtennis = no)
prior.no; # P(playtennis = no)
return(list(post.pr.yes = playtennis.yes,
post.pr.no = playtennis.no,
prediction = ifelse(playtennis.yes >= playtennis.no, "yes", "no")));
}
# Predictions for three new condition vectors
naive.bayes.prediction(c("rain", "hot", "high", "strong"));
naive.bayes.prediction(c("sunny", "mild", "normal", "weak"));
naive.bayes.prediction(c("overcast", "mild", "normal", "weak"));
The prediction result for the last call is as follows:
$post.pr.yes
[1] 0.05643739
$post.pr.no
[1] 0
$prediction
[1] "yes"