Gennari, J. H., P. Langley, and D. Fisher. 1990. Models of incremental concept for-
mation. Artificial Intelligence 40:11–61.
Ghani, R. 2002. Combining labeled and unlabeled data for multiclass text catego-
rization. In C. Sammut and A. Hoffmann, editors, Proceedings of the Nine-
teenth International Conference on Machine Learning, Sydney, Australia. San
Francisco: Morgan Kaufmann, pp. 187–194.
Gilad-Bachrach, R., A. Navot, and N. Tishby. 2004. Margin based feature selection:
Theory and algorithms. In R. Greiner and D. Schuurmans, editors, Proceed-
ings of the Twenty-First International Conference on Machine Learning, Banff,
Alberta, Canada. New York: ACM, pp. 337–344.
Giraud-Carrier, C. 1996. FLARE: Induction with prior knowledge. In J. Nealon and
J. Hunt, editors, Research and Development in Expert Systems XIII. Cambridge,
UK: SGES Publications, pp. 11–24.
Gluck, M., and J. Corter. 1985. Information, uncertainty, and the utility of cate-
gories. In Proceedings of the Annual Conference of the Cognitive Science Society,
Irvine, CA. Hillsdale, NJ: Lawrence Erlbaum, pp. 283–287.
Goldberg, D. E. 1989. Genetic algorithms in search, optimization, and machine learn-
ing. Reading, MA: Addison-Wesley.
Good, P. 1994. Permutation tests: A practical guide to resampling methods for testing
hypotheses. New York: Springer-Verlag.
Grossman, D., and P. Domingos. 2004. Learning Bayesian network classifiers by
maximizing conditional likelihood. In R. Greiner and D. Schuurmans, editors,
Proceedings of the Twenty-First International Conference on Machine Learning,
Banff, Alberta, Canada. New York: ACM, pp. 361–368.
Groth, R. 1998. Data mining: A hands-on approach for business professionals. Upper
Saddle River, NJ: Prentice Hall.
Guo, Y., and R. Greiner. 2004. Discriminative model selection for belief net struc-
tures. Department of Computing Science, TR04-22, University of Alberta,
Canada.
Guyon, I., J. Weston, S. Barnhill, and V. Vapnik. 2002. Gene selection for cancer
classification using support vector machines. Machine Learning 46(1–3):
389–422.
Hall, M. 2000. Correlation-based feature selection for discrete and numeric class
machine learning. In P. Langley, editor, Proceedings of the Seventeenth Inter-
national Conference on Machine Learning, Stanford, CA. San Francisco:
Morgan Kaufmann, pp. 359–366.
Hall, M., G. Holmes, and E. Frank. 1999. Generating rule sets from model trees. In
N. Y. Foo, editor, Proceedings of the Twelfth Australian Joint Conference on
Artificial Intelligence, Sydney, Australia. Berlin: Springer-Verlag, pp. 1–12.
Han, J., and M. Kamber. 2001. Data mining: Concepts and techniques. San Francisco:
Morgan Kaufmann.
Hand, D. J., H. Mannila, and P. Smyth. 2001. Principles of data mining. Cambridge,
MA: MIT Press.
Hartigan, J. A. 1975. Clustering algorithms. New York: John Wiley.
Hastie, T., and R. Tibshirani. 1998. Classification by pairwise coupling. Annals of
Statistics 26(2):451–471.
Hastie, T., R. Tibshirani, and J. Friedman. 2001. The elements of statistical learning.
New York: Springer-Verlag.
Heckerman, D., D. Geiger, and D. M. Chickering. 1995. Learning Bayesian networks:
The combination of knowledge and statistical data. Machine Learning
20(3):197–243.
Hochbaum, D. S., and D. B. Shmoys. 1985. A best possible heuristic for the k-center
problem. Mathematics of Operations Research 10(2):180–184.
Holmes, G., and C. G. Nevill-Manning. 1995. Feature selection via the discovery of
simple classification rules. In G. E. Lasker and X. Liu, editors, Proceedings of
the International Symposium on Intelligent Data Analysis. Baden-Baden,
Germany: International Institute for Advanced Studies in Systems Research
and Cybernetics, pp. 75–79.
Holmes, G., B. Pfahringer, R. Kirkby, E. Frank, and M. Hall. 2002. Multiclass alter-
nating decision trees. In T. Elomaa, H. Mannila, and H. Toivonen, editors,
Proceedings of the Thirteenth European Conference on Machine Learning,
Helsinki, Finland. Berlin: Springer-Verlag, pp. 161–172.
Holte, R. C. 1993. Very simple classification rules perform well on most commonly
used datasets. Machine Learning 11:63–91.
Huffman, S. B. 1996. Learning information extraction patterns from examples. In
S. Wermter, E. Riloff, and G. Scheler, editors, Connectionist, statistical, and sym-
bolic approaches to learning for natural language processing. Berlin: Springer-
Verlag, pp. 246–260.
Jabbour, K., J. F. V. Riveros, D. Landsbergen, and W. Meyer. 1988. ALFA: Automated
load forecasting assistant. IEEE Transactions on Power Systems 3(3):908–914.
John, G. H. 1995. Robust decision trees: Removing outliers from databases. In U.
M. Fayyad and R. Uthurusamy, editors, Proceedings of the First International
Conference on Knowledge Discovery and Data Mining. Montreal, Canada.
Menlo Park, CA: AAAI Press, pp. 174–179.
———. 1997. Enhancements to the data mining process. PhD Dissertation, Com-
puter Science Department, Stanford University, Stanford, CA.
John, G. H., and P. Langley. 1995. Estimating continuous distributions in Bayesian
classifiers. In P. Besnard and S. Hanks, editors, Proceedings of the Eleventh
Conference on Uncertainty in Artificial Intelligence, Montreal, Canada. San
Francisco: Morgan Kaufmann, pp. 338–345.
John, G. H., R. Kohavi, and P. Pfleger. 1994. Irrelevant features and the subset
selection problem. In H. Hirsh and W. Cohen, editors, Proceedings of the
Eleventh International Conference on Machine Learning, New Brunswick, NJ.
San Francisco: Morgan Kaufmann, pp. 121–129.
Johns, M. V. 1961. An empirical Bayes approach to nonparametric two-way classi-
fication. In H. Solomon, editor, Studies in item analysis and prediction. Palo
Alto, CA: Stanford University Press.
Kass, R., and L. Wasserman. 1995. A reference Bayesian test for nested hypotheses
and its relationship to the Schwarz criterion. Journal of the American
Statistical Association 90:928–934.
Keerthi, S. S., S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. 2001. Improve-
ments to Platt’s SMO algorithm for SVM classifier design. Neural Computa-
tion 13(3):637–649.
Kerber, R. 1992. ChiMerge: Discretization of numeric attributes. In W. Swartout,
editor, Proceedings of the Tenth National Conference on Artificial Intelligence,
San Jose, CA. Menlo Park, CA: AAAI Press, pp. 123–128.
Kibler, D., and D. W. Aha. 1987. Learning representative exemplars of concepts:
An initial case study. In P. Langley, editor, Proceedings of the Fourth
Machine Learning Workshop, Irvine, CA. San Francisco: Morgan Kaufmann,
pp. 24–30.
Kimball, R. 1996. The data warehouse toolkit. New York: John Wiley.
Kira, K., and L. Rendell. 1992. A practical approach to feature selection. In D.
Sleeman and P. Edwards, editors, Proceedings of the Ninth International
Workshop on Machine Learning, Aberdeen, Scotland. San Francisco: Morgan
Kaufmann, pp. 249–258.
Kittler, J. 1978. Feature set search algorithms. In C. H. Chen, editor, Pattern recog-
nition and signal processing. The Netherlands: Sijthoff and Noordhoff.
Koestler, A. 1964. The act of creation. London: Hutchinson.