Bouckaert, R. R. 2004. Bayesian network classifiers in Weka. Working Paper 14/2004,
Department of Computer Science, University of Waikato, New Zealand.
Brachman, R. J., and H. J. Levesque, editors. 1985. Readings in knowledge
representation. San Francisco: Morgan Kaufmann.
Brefeld, U., and T. Scheffer. 2004. Co-EM support vector learning. In R. Greiner
and D. Schuurmans, editors, Proceedings of the Twenty-First International
Conference on Machine Learning, Banff, Alberta, Canada. New York: ACM, pp.
121–128.
Breiman, L. 1996a. Stacked regressions. Machine Learning 24(1):49–64.
———. 1996b. Bagging predictors. Machine Learning 24(2):123–140.
———. 1999. Pasting small votes for classification in large databases and online.
Machine Learning 36(1–2):85–103.
———. 2001. Random forests. Machine Learning 45(1):5–32.
Breiman, L., J. H. Friedman, R. A. Olshen, and C. J. Stone. 1984. Classification and
regression trees. Monterey, CA: Wadsworth.
Brin, S., R. Motwani, J. D. Ullman, and S. Tsur. 1997. Dynamic itemset counting
and implication rules for market basket data. ACM SIGMOD Record
26(2):255–264.
Brodley, C. E., and M. A. Friedl. 1996. Identifying and eliminating mislabeled
training instances. In Proceedings of the Thirteenth National Conference on
Artificial Intelligence, Portland, OR. Menlo Park, CA: AAAI Press, pp. 799–805.
Brownston, L., R. Farrell, E. Kant, and N. Martin. 1985. Programming expert
systems in OPS5. Reading, MA: Addison-Wesley.
Buntine, W. 1992. Learning classification trees. Statistics and Computing 2(2):63–73.
Burges, C. J. C. 1998. A tutorial on support vector machines for pattern
recognition. Data Mining and Knowledge Discovery 2(2):121–167.
Cabena, P., P. Hadjinian, R. Stadler, J. Verhees, and A. Zanasi. 1998. Discovering data
mining: From concept to implementation. Upper Saddle River, NJ: Prentice
Hall.
Califf, M. E., and R. J. Mooney. 1999. Relational learning of pattern-match rules for
information extraction. In Proceedings of the Sixteenth National Conference on
Artificial Intelligence, Orlando, FL. Menlo Park, CA: AAAI Press, pp. 328–334.
Cardie, C. 1993. Using decision trees to improve case-based learning. In P. Utgoff,
editor, Proceedings of the Tenth International Conference on Machine Learning,
Amherst, MA. San Francisco: Morgan Kaufmann, pp. 25–32.
REFERENCES
Cavnar, W. B., and J. M. Trenkle. 1994. N-gram-based text categorization. In
Proceedings of the Third Symposium on Document Analysis and Information
Retrieval, Las Vegas, NV. UNLV Publications/Reprographics, pp. 161–175.
Cendrowska, J. 1987. PRISM: An algorithm for inducing modular rules. International
Journal of Man-Machine Studies 27(4):349–370.
Chakrabarti, S. 2003. Mining the web: Discovering knowledge from hypertext data.
San Francisco: Morgan Kaufmann.
Cheeseman, P., and J. Stutz. 1995. Bayesian classification (AutoClass): Theory and
results. In U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy,
editors, Advances in Knowledge Discovery and Data Mining. Menlo Park, CA:
AAAI Press, pp. 153–180.
Chen, M. S., J. Han, and P. S. Yu. 1996. Data mining: An overview from a database
perspective. IEEE Transactions on Knowledge and Data Engineering 8(6):
866–883.
Cherkauer, K. J., and J. W. Shavlik. 1996. Growing simpler decision trees to
facilitate knowledge discovery. In E. Simoudis, J. W. Han, and U. Fayyad, editors,
Proceedings of the Second International Conference on Knowledge Discovery and
Data Mining, Portland, OR. Menlo Park, CA: AAAI Press, pp. 315–318.
Cleary, J. G., and L. E. Trigg. 1995. K*: An instance-based learner using an entropic
distance measure. In A. Prieditis and S. Russell, editors, Proceedings of the
Twelfth International Conference on Machine Learning, Tahoe City, CA. San
Francisco: Morgan Kaufmann, pp. 108–114.
Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and
Psychological Measurement 20:37–46.
Cohen, W. W. 1995. Fast effective rule induction. In A. Prieditis and S. Russell,
editors, Proceedings of the Twelfth International Conference on Machine
Learning, Tahoe City, CA. San Francisco: Morgan Kaufmann, pp. 115–123.
Cooper, G. F., and E. Herskovits. 1992. A Bayesian method for the induction of
probabilistic networks from data. Machine Learning 9(4):309–347.
Cortes, C., and V. Vapnik. 1995. Support-vector networks. Machine Learning
20(3):273–297.
Cover, T. M., and P. E. Hart. 1967. Nearest-neighbor pattern classification. IEEE
Transactions on Information Theory IT-13:21–27.
Cristianini, N., and J. Shawe-Taylor. 2000. An introduction to support vector machines
and other kernel-based learning methods. Cambridge, UK: Cambridge
University Press.
Cypher, A., editor. 1993. Watch what I do: Programming by demonstration.
Cambridge, MA: MIT Press.
Dasgupta, S. 2002. Performance guarantees for hierarchical clustering. In J. Kivinen
and R. H. Sloan, editors, Proceedings of the Fifteenth Annual Conference on
Computational Learning Theory, Sydney, Australia. Berlin: Springer-Verlag, pp.
351–363.
Datta, S., H. Kargupta, and K. Sivakumar. 2003. Homeland defense, privacy-
sensitive data mining, and random value distortion. In Proceedings of the
Workshop on Data Mining for Counter Terrorism and Security, San Francisco.
Philadelphia, PA: Society for Industrial and Applied Mathematics.
Demiroz, G., and A. Guvenir. 1997. Classification by voting feature intervals. In M.
van Someren and G. Widmer, editors, Proceedings of the Ninth European
Conference on Machine Learning, Prague, Czech Republic. Berlin: Springer-
Verlag, pp. 85–92.
Devroye, L., L. Györfi, and G. Lugosi. 1996. A probabilistic theory of pattern
recognition. New York: Springer-Verlag.
Dhar, V., and R. Stein. 1997. Seven methods for transforming corporate data into
business intelligence. Upper Saddle River, NJ: Prentice Hall.
Diederich, J., J. Kindermann, E. Leopold, and G. Paass. 2003. Authorship
attribution with support vector machines. Applied Intelligence 19(1):109–123.
Dietterich, T. G. 2000. An experimental comparison of three methods for
constructing ensembles of decision trees: Bagging, boosting, and randomization.
Machine Learning 40(2):139–158.
Dietterich, T. G., and G. Bakiri. 1995. Solving multiclass learning problems via
error-correcting output codes. Journal of Artificial Intelligence Research 2:263–286.
Domingos, P. 1997. Knowledge acquisition from examples via multiple models. In
D. H. Fisher Jr., editor, Proceedings of the Fourteenth International Conference
on Machine Learning, Nashville, TN. San Francisco: Morgan Kaufmann, pp.
98–106.
———. 1999. MetaCost: A general method for making classifiers cost sensitive. In
U. M. Fayyad, S. Chaudhuri, and D. Madigan, editors, Proceedings of the Fifth
International Conference on Knowledge Discovery and Data Mining, San Diego,
CA. New York: ACM, pp. 155–164.
Dougherty, J., R. Kohavi, and M. Sahami. 1995. Supervised and unsupervised
discretization of continuous features. In A. Prieditis and S. Russell, editors,
Proceedings of the Twelfth International Conference on Machine Learning, Tahoe
City, CA. San Francisco: Morgan Kaufmann, pp. 194–202.