1. Prepare the data
> install.packages("tree")
> library(tree)
> library(ISLR)
> attach(Carseats)
> High=ifelse(Sales<=8,"No","Yes")        # derive a binary High label from Sales for classification
> Carseats=data.frame(Carseats,High)      # append the High column to the data frame
> fix(Carseats)
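As a quick sanity check before growing the tree, it can help to confirm that the new High column is present and see its class balance. A minimal sketch using only base R functions; the note about R >= 4.0 is an assumption about your R version, not part of the original transcript:
> table(Carseats$High)                        # class balance of the new label
> table(Carseats$High, Carseats$Sales<=8)     # "No" should line up exactly with Sales<=8
> # on R >= 4.0, data.frame() no longer converts strings to factors by default, so
> # classification with tree() may require: Carseats$High = as.factor(Carseats$High)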
2. Grow the decision tree
> tree.carseats=tree(High~.-Sales,Carseats)
> summary(tree.carseats)
# output: the training misclassification error rate is 9%
Classification tree:
tree(formula = High ~ . - Sales, data = Carseats)
Variables actually used in tree construction:
[1] "ShelveLoc" "Price" "Income" "CompPrice" "Population"
[6] "Advertising" "Age" "US"
Number of terminal nodes: 27
Residual mean deviance: 0.4575 = 170.7 / 373
Misclassification error rate: 0.09 = 36 / 400
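The two headline numbers in this summary can be recomputed by hand: the residual mean deviance divides the total deviance by n minus the number of terminal nodes (400 - 27 = 373), and the misclassification rate divides the misclassified count by n. A minimal arithmetic check using only the values printed above:
> 170.7/(400-27)     # residual mean deviance, ~0.4575
> 36/400             # training misclassification error rate, 0.09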
3. Plot the decision tree
> plot(tree.carseats)
> text(tree.carseats,pretty=0)    # pretty=0 prints full category names at the splits
4. Test error
# prepare the training and test data
# We begin by using the sample() function to split the set of observations into two
# halves, by selecting a random subset of 200 observations out of the original 400.
> set.seed(1)
> train=sample(1:nrow(Carseats),200)
> Carseats.test=Carseats[-train,]
> High.test=High[-train]
# refit the tree model on the training data only
> tree.carseats=tree(High~.-Sales,Carseats,subset=train)
# estimate the test error by predicting on the held-out half
# predict() is a generic function for predictions from the results of various model fitting functions
> tree.pred=predict(tree.carseats,Carseats.test,type="class")
> table(tree.pred,High.test)
         High.test
tree.pred No Yes
      No  86  27
      Yes 30  57
> (86+57)/200
[1] 0.715
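Instead of summing the diagonal of the confusion matrix by hand, the same accuracy can be computed directly from the predictions; a minimal sketch using only base R:
> mean(tree.pred==High.test)       # fraction of correct predictions, 0.715 here
> 1-mean(tree.pred==High.test)     # the corresponding test error rate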
5. Prune the decision tree
# Next, we consider whether pruning the tree might lead to improved results. The function
# cv.tree() performs cross-validation in order to determine the optimal level of tree
# complexity; cost-complexity pruning is used in order to select a sequence of trees for
# consideration.
# For regression trees, only the default, deviance, is accepted. For classification trees,
# the default is deviance and the alternative is misclass (number of misclassifications,
# or total loss).
# We use the argument FUN=prune.misclass in order to indicate that we want the
# classification error rate to guide the cross-validation and pruning process, rather than
# the default for the cv.tree() function, which is deviance.
# If the tree is a regression tree, the analogous plot would be:
#   plot(cv.boston$size, cv.boston$dev, type="b")
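For context, the cv.boston object mentioned in the comment above would come from a regression-tree workflow such as one on the Boston housing data in MASS; the sketch below is illustrative only (the variable names train, tree.boston, cv.boston and the choice best=5 are assumptions, not part of this transcript):
> library(MASS)
> set.seed(2)
> train=sample(1:nrow(Boston),nrow(Boston)/2)      # illustrative train/test split
> tree.boston=tree(medv~.,Boston,subset=train)     # regression tree for median house value
> cv.boston=cv.tree(tree.boston)                   # deviance-guided cross-validation
> plot(cv.boston$size,cv.boston$dev,type="b")
> prune.boston=prune.tree(tree.boston,best=5)      # prune to, e.g., 5 terminal nodes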
> set.seed(3)
> cv.carseats=cv.tree(tree.carseats,FUN=prune.misclass,K=10)
# cv.tree() reports the number of terminal nodes of each tree considered (size), the
# corresponding cross-validation error rate (dev), and the value of the cost-complexity
# parameter used (k, which corresponds to α).
> names(cv.carseats)
[1] "size"   "dev"    "k"      "method"
> cv.carseats
$size    # the number of terminal nodes of each tree considered
[1] 19 17 14 13  9  7  3  2  1
$dev     # the corresponding cross-validation error rate
[1] 55 55 53 52 50 56 69 65 80
$k       # the value of the cost-complexity parameter used
[1]      -Inf  0.0000000  0.6666667  1.0000000  1.7500000  2.0000000  4.2500000
[8]  5.0000000 23.0000000
$method  # "misclass" for a classification tree
[1] "misclass"
attr(,"class")
[1] "prune"         "tree.sequence"
# plot the cross-validation error against tree size to see which size is best
> plot(cv.carseats$size,cv.carseats$dev,type="b")
# Note that, despite the name, dev corresponds to the cross-validation error rate in this
# instance. The tree with 9 terminal nodes results in the lowest cross-validation error
# rate, with 50 cross-validation errors. We plot the error rate as a function of both
# size and k (see the sketch below).
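A minimal sketch of that side-by-side plot; par(mfrow=c(1,2)) is just base R graphics, and nothing is assumed beyond the cv.carseats object created above:
> par(mfrow=c(1,2))
> plot(cv.carseats$size,cv.carseats$dev,type="b")   # CV error vs. number of terminal nodes
> plot(cv.carseats$k,cv.carseats$dev,type="b")      # CV error vs. cost-complexity parameter
> par(mfrow=c(1,1))                                  # reset the plotting layout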
> prune.carseats=prune.misclass(tree.carseats,best=9)
> plot(prune.carseats)
> text(prune.carseats,pretty=0)
# recompute the test error to see how the pruned tree performs on the test data set
> tree.pred=predict(prune.carseats,Carseats.test,type="class")
> table(tree.pred,High.test)
         High.test
tree.pred No Yes
      No  94  24
      Yes 22  60
> (94+60)/200
[1] 0.77
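The best subtree size can also be read off cv.carseats programmatically instead of by eye; a minimal sketch (which.min() is base R, and with the dev values printed above it selects size 9, matching best=9):
> best.size=cv.carseats$size[which.min(cv.carseats$dev)]                 # 9 for the dev values above
> prune.carseats=prune.misclass(tree.carseats,best=best.size)
> mean(predict(prune.carseats,Carseats.test,type="class")==High.test)    # ~0.77, as computed above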