1. Prepare the data
> install.packages("tree")
> library(tree)
> library(ISLR)
> attach(Carseats)
> High=as.factor(ifelse(Sales<=8,"No","Yes"))   # derive a binary label from Sales; as.factor() ensures tree() treats it as classification
> Carseats=data.frame(Carseats,High)            # append the High column to the data frame
> fix(Carseats)
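As a quick sanity check before modeling (a minimal sketch; these calls only inspect the objects created above), the class balance and the new column can be examined without the interactive editor:

> table(High)                           # counts of "No" vs "Yes"
> str(Carseats$High)                    # confirm High is stored as a factor
> head(Carseats[,c("Sales","High")])    # spot-check the Sales -> High mapping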
2. Grow the decision tree
> tree.carseats=tree(High~.-Sales,Carseats)
> summary(tree.carseats)
# output: the training misclassification error rate is 9%
Classification tree:
tree(formula = High ~ . - Sales, data = Carseats)
Variables actually used in tree construction:
[1] "ShelveLoc" "Price" "Income" "CompPrice" "Population"
[6] "Advertising" "Age" "US"
Number of terminal nodes: 27
Residual mean deviance: 0.4575 = 170.7 / 373
Misclassification error rate: 0.09 = 36 / 400
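For reference, the residual mean deviance is the total deviance divided by n minus the number of terminal nodes (400 - 27 = 373). A minimal sketch for inspecting the fitted object directly (the frame/"<leaf>" convention is the tree package's internal representation):

> tree.carseats                              # prints each branch: split, n, deviance, yval; "*" marks terminal nodes
> sum(tree.carseats$frame$var == "<leaf>")   # number of terminal nodes, 27 here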
3. Plot the decision tree
> plot(tree.carseats)
> text(tree.carseats, pretty=0)   # pretty=0 displays the category names for qualitative splits
4. Test error
# prepare the training and test data
# We begin by using the sample() function to split the observations into two halves,
# selecting a random subset of 200 of the original 400 observations as the training set.
> set.seed(1)
> train=sample(1:nrow(Carseats),200)
> Carseats.test=Carseats[-train,]
> High.test=High[-train]
# fit the tree on the training data only
> tree.carseats=tree(High~.-Sales, Carseats, subset=train)
# estimate the test error by predicting on the held-out observations
# predict() is a generic function for predictions from fitted model objects;
# type="class" returns the predicted class labels
> tree.pred=predict(tree.carseats, Carseats.test, type="class")
> table(tree.pred, High.test)
         High.test
tree.pred No Yes
      No  86  27
      Yes 30  57
> (86+57)/200
[1] 0.715
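The same accuracy can be computed directly from the predictions instead of summing the diagonal of the confusion matrix by hand (a minimal sketch using the objects defined above):

> mean(tree.pred == High.test)   # test accuracy, 0.715 here
> mean(tree.pred != High.test)   # test error rate, 0.285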
5. Prune the decision tree
# Next, we consider whether pruning the tree might lead to improved results.
# The function cv.tree() performs cross-validation to determine the optimal
# level of tree complexity; cost-complexity pruning is used to select a
# sequence of trees for consideration.
# For regression trees, only the default guide, deviance, is accepted. For
# classification trees, the default is deviance and the alternative is
# misclass (the number of misclassifications, i.e. the total loss).
# We use the argument FUN=prune.misclass so that the classification error
# rate, rather than cv.tree()'s default of deviance, guides the
# cross-validation and pruning process.
# For a regression tree, the corresponding plot would be:
#   plot(cv.boston$size, cv.boston$dev, type="b")
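For the regression-tree case mentioned above, a minimal sketch of how cv.boston could be produced (the Boston data set comes from the MASS package; the variable names and best=5 are illustrative, and pruning uses prune.tree() rather than prune.misclass()):

> library(MASS)
> set.seed(1)
> train.b=sample(1:nrow(Boston), nrow(Boston)/2)
> tree.boston=tree(medv~., Boston, subset=train.b)
> cv.boston=cv.tree(tree.boston)                 # deviance-guided CV for a regression tree
> plot(cv.boston$size, cv.boston$dev, type="b")
> prune.boston=prune.tree(tree.boston, best=5)   # best=5 is illustrative

The classification-tree pruning of the Carseats model continues below.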
> set.seed(3)
> cv.carseats=cv.tree(tree.carseats, FUN=prune.misclass, K=10)
# cv.tree() reports the number of terminal nodes of each tree considered (size),
# the corresponding cross-validation error (dev), and the value of the
# cost-complexity parameter used (k, which corresponds to alpha).
> names(cv.carseats)
[1] "size"   "dev"    "k"      "method"
> cv.carseats
$size    # the number of terminal nodes of each tree considered
[1] 19 17 14 13  9  7  3  2  1

$dev     # the corresponding cross-validation error
[1] 55 55 53 52 50 56 69 65 80

$k       # the value of the cost-complexity parameter (corresponds to alpha)
[1]      -Inf  0.0000000  0.6666667  1.0000000  1.7500000  2.0000000  4.2500000
[8]  5.0000000 23.0000000

$method  # "misclass" because classification error guided the cross-validation
[1] "misclass"

attr(,"class")
[1] "prune"         "tree.sequence"
# plot the CV error against tree size to see which size is best
> plot(cv.carseats$size, cv.carseats$dev, type="b")
# Note that, despite the name, dev corresponds to the cross-validation error
# rate in this instance. The tree with 9 terminal nodes has the lowest
# cross-validation error rate, with 50 cross-validation errors. We plot the
# error rate as a function of both size and k (see the sketch below).
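A minimal sketch of that two-panel plot (the axis labels are my own additions):

> par(mfrow=c(1,2))
> plot(cv.carseats$size, cv.carseats$dev, type="b",
+      xlab="Tree size (terminal nodes)", ylab="CV errors")
> plot(cv.carseats$k, cv.carseats$dev, type="b",
+      xlab="k (cost-complexity parameter)", ylab="CV errors")
> par(mfrow=c(1,1))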
> prune.carseats=prune.misclass(tree.carseats, best=9)
> plot(prune.carseats)
> text(prune.carseats, pretty=0)
# compute the test error again to see how the pruned tree performs on the test set
> tree.pred=predict(prune.carseats, Carseats.test, type="class")
> table(tree.pred, High.test)
         High.test
tree.pred No Yes
      No  94  24
      Yes 22  60
> (94+60)/200
[1] 0.77
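Instead of hard-coding best=9, the size with the smallest cross-validation error can be read off cv.carseats programmatically (a minimal sketch; which.min() returns the first minimum if several sizes tie):

> best.size=cv.carseats$size[which.min(cv.carseats$dev)]
> best.size                                            # 9 for the CV run shown above
> prune.carseats=prune.misclass(tree.carseats, best=best.size)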