1.2.3 An Example of Bonferroni's Principle
Suppose there are believed to be some “evil-doers” out there, and we want
to detect them. Suppose further that we have reason to believe that periodi-
cally, evil-doers gather at a hotel to plot their evil. Let us make the following
assumptions about the size of the problem:
1. There are one billion people who might be evil-doers.
2. Everyone goes to a hotel one day in 100.
3. A hotel holds 100 people. Hence, there are 100,000 hotels – enough to
hold the 1% of a billion people who visit a hotel on any given day.
4. We shall examine hotel records for 1000 days.
To find evil-doers in this data, we shall look for people who, on two different
days, were both at the same hotel. Suppose, however, that there really are no
evil-doers. That is, everyone behaves at random, deciding with probability 0.01
to visit a hotel on any given day, and if so, choosing one of the $10^5$ hotels at random. Would we find any pairs of people who appear to be evil-doers?
We can do a simple approximate calculation as follows. The probability of
any two people both deciding to visit a hotel on any given day is $0.01 \times 0.01 = 0.0001$. The chance that they will visit the same hotel is this probability divided by $10^5$, the number of hotels. Thus, the chance that they will visit the same hotel on one given day is $0.0001/10^5 = 10^{-9}$. The chance that they will visit the same hotel on two different given days is the square of this number, $10^{-18}$. Note that the hotels can be different on the two days.
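In a single expression, the per-pair probability just derived is

$$\left(0.01 \times 0.01 \times \frac{1}{10^5}\right)^{2} = \left(10^{-9}\right)^{2} = 10^{-18}$$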
Now, we must consider how many events will indicate evil-doing. An “event”
in this sense is a pair of people and a pair of days, such that the two people
were at the same hotel on each of the two days. To simplify the arithmetic, note
that for large $n$, $\binom{n}{2}$ is about $n^2/2$. We shall use this approximation in what follows. Thus, the number of pairs of people is $\binom{10^9}{2} = 5 \times 10^{17}$. The number of pairs of days is $\binom{1000}{2} = 5 \times 10^5$. The expected number of events that look
like evil-doing is the product of the number of pairs of people, the number of
pairs of days, and the probability that any one pair of people and pair of days
is an instance of the behavior we are looking for. That number is
$$5 \times 10^{17} \times 5 \times 10^{5} \times 10^{-18} = 250{,}000$$
That is, there will be a quarter of a million pairs of people who look like evil-
doers, even though they are not.
Now, suppose there really are 10 pairs of evil-doers out there. The police
will need to investigate a quarter of a million other pairs in order to find the real
evil-doers. In addition to the intrusion on the lives of half a million innocent
people, the work involved is sufficiently great that this approach to finding
evil-doers is probably not feasible.
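The calculation above is easy to check mechanically. Here is a minimal Python sketch that recomputes it; the numbers are exactly the assumptions of this section, and the variable names are our own:

    # Recompute the expected number of coincidental "suspicious" pairs
    # under the assumptions of Section 1.2.3.
    people = 10**9        # potential evil-doers
    hotels = 10**5        # hotels, each holding 100 people
    days = 1000           # days of observation
    p_visit = 0.01        # probability of visiting a hotel on a given day

    # Chance two given people are at the same hotel on one given day.
    p_same_day = p_visit * p_visit / hotels       # 10^-9
    # Chance they coincide on each of two given days (hotels may differ).
    p_two_days = p_same_day ** 2                  # 10^-18

    # Approximation: for large n, C(n, 2) is about n^2 / 2.
    pairs_of_people = people**2 / 2               # 5 * 10^17
    pairs_of_days = days**2 / 2                   # 5 * 10^5

    print(pairs_of_people * pairs_of_days * p_two_days)

Running it prints 250000.0 (up to floating-point rounding), matching the estimate above.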
1.2.4 Exercises for Section 1.2
Exercise 1.2.1: Using the information from Section 1.2.3, what would be the
number of suspected pairs if the following changes were made to the data (and
all other numbers remained as they were in that section)?
(a) The number of days of observation was raised to 2000.
(b) The number of people observed was raised to 2 billion (and there were
therefore 200,000 hotels).
(c) We only reported a pair as suspect if they were at the same hotel at the
same time on three different days.
! Exercise 1.2.2: Suppose we have information about the supermarket purchases of 100 million people. Each person goes to the supermarket 100 times
in a year and buys 10 of the 1000 items that the supermarket sells. We believe
that a pair of terrorists will buy exactly the same set of 10 items (perhaps the
ingredients for a bomb?) at some time during the year. If we search for pairs of
people who have bought the same set of items, would we expect that any such
people found were truly terrorists?³

³That is, assume our hypothesis that terrorists will surely buy a set of 10 items in common at some time during the year. We don’t want to address the matter of whether or not terrorists would necessarily do so.
1.3 Things Useful to Know
In this section, we offer brief introductions to subjects that you may or may
not have seen in your study of other courses. Each will be useful in the study
of data mining. They include:
1. The TF.IDF measure of word importance.
2. Hash functions and their use.
3. Secondary storage (disk) and its effect on running time of algorithms.
4. The base e of natural logarithms and identities involving that constant.
5. Power laws.
1.3.1 Importance of Words in Documents
In several applications of data mining, we shall be faced with the problem of
categorizing documents (sequences of words) by their topic. Typically, topics
are identified by finding the special words that characterize documents about
that topic. For instance, articles about baseball would tend to have many
occurrences of words like “ball,” “bat,” “pitch,” “run,” and so on. Once we
have classified documents to determine they are about baseball, it is not hard
to notice that words such as these appear unusually frequently. However, until
we have made the classification, it is not possible to identify these words as
characteristic.
Thus, classification often starts by looking at documents, and finding the
significant words in those documents. Our first guess might be that the words
appearing most frequently in a document are the most significant. However,
that intuition is exactly opposite of the truth. The most frequent words will
most surely be the common words such as “the” or “and,” which help build
ideas but do not carry any significance themselves. In fact, the several hundred
most common words in English (called stop words) are often removed from
documents before any attempt to classify them.
In fact, the indicators of the topic are relatively rare words. However, not
all rare words are equally useful as indicators. There are certain words, for
example “notwithstanding” or “albeit,” that appear rarely in a collection of
documents, yet do not tell us anything useful. On the other hand, a word like
“chukker” is probably equally rare, but tips us off that the document is about
the sport of polo. The difference between rare words that tell us something and
those that do not has to do with the concentration of the useful words in just a
few documents. That is, the presence of a word like “albeit” in a document does
not make it terribly more likely that it will appear multiple times. However,
if an article mentions “chukker” once, it is likely to tell us what happened in
the “first chukker,” then the “second chukker,” and so on. That is, the word is
likely to be repeated if it appears at all.
The formal measure of how concentrated the occurrences of a given word are in relatively few documents is called TF.IDF (Term Frequency times Inverse Document Frequency). It is normally computed as follows. Suppose we have a collection of $N$ documents. Define $f_{ij}$ to be the frequency (number of occurrences) of term (word) $i$ in document $j$. Then, define the term frequency $\mathrm{TF}_{ij}$ to be:

$$\mathrm{TF}_{ij} = \frac{f_{ij}}{\max_k f_{kj}}$$

That is, the term frequency of term $i$ in document $j$ is $f_{ij}$ normalized by dividing it by the maximum number of occurrences of any term (perhaps excluding stop words) in the same document. Thus, the most frequent term in document $j$ gets a TF of 1, and other terms get fractions as their term frequency for this document.
The IDF for a term is defined as follows. Suppose term $i$ appears in $n_i$ of the $N$ documents in the collection. Then $\mathrm{IDF}_i = \log_2(N/n_i)$. The TF.IDF score for term $i$ in document $j$ is then defined to be $\mathrm{TF}_{ij} \times \mathrm{IDF}_i$. The terms with the highest TF.IDF score are often the terms that best characterize the topic of the document.
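These definitions translate directly into a few lines of Python. The following is a minimal sketch under our own representation choices (each document is a list of words, and the function name tf_idf is ours), not a standard library routine:

    from collections import Counter
    from math import log2

    def tf_idf(docs):
        # docs: a list of documents, each given as a list of words.
        # Returns one dictionary per document, mapping each term i in
        # document j to its TF.IDF score TF_ij * IDF_i.
        N = len(docs)
        n = Counter()                # n_i: number of documents containing term i
        for doc in docs:
            n.update(set(doc))
        scores = []
        for doc in docs:
            f = Counter(doc)         # f_ij: occurrences of term i in document j
            max_f = max(f.values())  # max_k f_kj, the count of the most frequent term
            scores.append({term: (count / max_f) * log2(N / n[term])
                           for term, count in f.items()})
        return scores

As the definitions intend, the most frequent term in a document gets TF of 1, and a term appearing in every document gets IDF of 0, so its score vanishes.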
Example 1.3: Suppose our repository consists of $2^{20} = 1{,}048{,}576$ documents. Suppose word $w$ appears in $2^{10} = 1024$ of these documents. Then $\mathrm{IDF}_w = \log_2(2^{20}/2^{10}) = \log_2(2^{10}) = 10$.