Streaming algorithm

In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes (typically just one). These algorithms have limited memory available to them (much less than the input size) and also limited processing time per item.

These constraints may mean that an algorithm produces an approximate answer based on a summary or "sketch" of the data stream in memory.

History

An early theoretical foundation of streaming algorithms for data mining, pattern discovery, and machine learning was developed in 1990 by a group at Johns Hopkins University. Their model yields trade-offs between the available memory and the number of passes through a stream of data in the form of labelled samples. The paper by Heath, Kasif, Kosaraju, Salzberg, and Sullivan, published at IJCAI 1991,[1] proved lower and upper bounds for segmenting two-class samples in one dimension, with extensions to learning classical "concepts" such as hyper-rectangles (bumps in statistics) and decision trees in high dimensions. The work formed a chapter of David Heath's PhD thesis at Johns Hopkins University. Though streaming algorithms had already been studied by Munro and Paterson[2] as well as Flajolet and Martin,[3] the field of streaming algorithms was first formalized and popularized in a paper by Noga Alon, Yossi Matias, and Mario Szegedy.[4] For this paper, the authors later won the 2005 Gödel Prize "for their foundational contribution to streaming algorithms." There has since been a large body of work on data streaming algorithms spanning a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing.

Semi-streaming algorithms were introduced in 2005 as an extension of streaming algorithms that allows for a constant or logarithmic number of passes over the dataset.[1]

Models

In the data stream model, some or all of the input data that are to be operated on are not available for random access from disk or memory, but rather arrive as one or more continuous data streams.

Streams can be denoted as an ordered sequence of points (or "updates") that must be accessed in order and can be read only once or a small number of times.

Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vector \mathbf{a} = (a_1, \dots, a_n) (initialized to the zero vector \mathbf{0}) that has updates presented to it in a stream. The goal of these algorithms is to compute functions of \mathbf{a} using considerably less space than it would take to represent \mathbf{a} precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models.[5]

In the cash register model each update is of the form \langle i, c\rangle, so that a_i is incremented by some positive integer c. A notable special case is when c = 1 (only unit insertions are permitted).

In the turnstile model each update is of the form \langle i, c\rangle, so that a_i is incremented by some (possibly negative) integer c. In the "strict turnstile" model, no a_i may be less than zero at any time.
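
For illustration, the following Python sketch (hypothetical, not drawn from the literature) maintains the vector \mathbf{a} explicitly under a stream of \langle i, c\rangle updates. A genuine streaming algorithm would replace the explicit vector with a small summary; the point here is only to make the two update models concrete.

    from collections import defaultdict

    def apply_updates(updates, strict=False):
        """Maintain the frequency vector a under a stream of (i, c) updates.

        In the cash register model every c is positive; in the turnstile
        model c may also be negative.  The strict turnstile model further
        requires that no entry of a ever drops below zero.
        """
        a = defaultdict(int)  # a starts as the zero vector
        for i, c in updates:
            a[i] += c
            if strict and a[i] < 0:
                raise ValueError("strict turnstile violated at index %d" % i)
        return a

    # Cash register stream: unit insertions only (c = 1).
    print(apply_updates([(3, 1), (5, 1), (3, 1)]))    # a_3 = 2, a_5 = 1
    # Turnstile stream: insertions mixed with deletions.
    print(apply_updates([(3, 2), (3, -1), (7, 4)]))   # a_3 = 1, a_7 = 4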

Several papers also consider the "sliding window" model. In this model, the function of interest is computed over a fixed-size window in the stream. As the stream progresses, items at the trailing end of the window are removed from consideration while new items from the stream take their place.
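
As a toy illustration (with a hypothetical stream and window size), the following Python sketch maintains a sliding-window count of a target item by storing the window explicitly; genuine sliding-window algorithms, such as exponential histograms, instead use space sublinear in the window size.

    from collections import deque

    def windowed_count(stream, target, w):
        """After each item, yield how often `target` occurred in the last w items."""
        window = deque(maxlen=w)   # the oldest item falls off automatically
        for x in stream:
            window.append(x)
            yield sum(1 for y in window if y == target)

    print(list(windowed_count("aabcaa", "a", w=3)))  # [1, 2, 2, 1, 1, 2]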

Besides the above frequency-based problems, some other types of problems have also been studied. Many graph problems are solved in the setting where the adjacency matrix or the adjacency list of the graph is streamed in some unknown order. There are also some problems that are very dependent on the order of the stream (i.e., asymmetric functions), such as counting the number of inversions in a stream and finding the longest increasing subsequence.

Evaluation

The performance of an algorithm that operates on data streams is measured by three basic factors:

  • The number of passes the algorithm must make over the stream.
  • The available memory.
  • The running time of the algorithm.

These algorithms have many similarities with online algorithms since they both require decisions to be made before all data are available, but they are not identical. Data stream algorithms only have limited memory available but they may be able to defer action until a group of points arrive, while online algorithms are required to take action as soon as each point arrives.

If the algorithm is an approximation algorithm then the accuracy of the answer is another key factor. The accuracy is often stated as an (\epsilon,\delta)-approximation, meaning that the algorithm achieves an error of less than \epsilon with probability 1-\delta.

Applications

Streaming algorithms have several applications in networking such as monitoring network links for elephant flows, counting the number of distinct flows, estimating the distribution of flow sizes, and so on.[6] They also have applications in databases, such as estimating the size of a join[citation needed].

Some streaming problems

Frequency moments

The kth frequency moment of a set of frequencies \mathbf{a} is defined as F_k(\mathbf{a}) = \sum_{i=1}^n a_i^k.

The first moment F_1 is simply the sum of the frequencies (i.e., the total count). The second moment F_2 is useful for computing statistical properties of the data, such as the Gini coefficient of variation. F_{\infty} is defined as the frequency of the most frequent item(s).
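
As a concrete check of these definitions, the following Python snippet computes the moments exactly for a small hypothetical stream. Exact computation requires storing every frequency, which is precisely what the streaming algorithms below avoid.

    from collections import Counter

    stream = ["a", "b", "a", "c", "a", "b"]
    freq = Counter(stream)                       # the frequency vector a

    F0 = sum(1 for c in freq.values() if c > 0)  # distinct items: 3
    F1 = sum(freq.values())                      # stream length m: 6
    F2 = sum(c ** 2 for c in freq.values())      # 3^2 + 2^2 + 1^2 = 14
    F_inf = max(freq.values())                   # most frequent item's count: 3
    print(F0, F1, F2, F_inf)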

The seminal paper of Alon, Matias, and Szegedy dealt with the problem of estimating the frequency moments.

Calculating Frequency Moments

A direct approach to computing the frequency moments requires maintaining a register mi for every distinct element ai ∈ (1,2,3,4,...,N), which requires memory of order at least \Omega(N).[4] Since the available space is limited, an algorithm is needed that computes the moments in much less memory. This can be achieved by using approximations instead of exact values: an algorithm computes an (ε,δ)-approximation of Fk, where F'k is the (ε,δ)-approximated value of Fk,[7] ε is the approximation parameter, and δ is the confidence parameter.[8]

Calculating F0 (Distinct Elements in a Data Stream)
FM-Sketch Algorithm

Flajolet et al. in [3] introduced a probabilistic method of counting inspired by Robert Morris's paper Counting large numbers of events in small registers. Morris showed that, if the requirement of exact accuracy is dropped, a counter n can be replaced by a counter log n, which can be stored in log log n bits.[9] Flajolet et al. in [3] improved this method by using a hash function h which is assumed to distribute the elements uniformly in the hash space (a binary string of length L).

h:[m] \rightarrow [0,2^{L}-1]

Let \mathrm{bit}(y,k) represent the kth bit in the binary representation of y:

y = \sum_{k\geq0} \mathrm{bit}(y,k)*2^{k}

Let \rho(y) represent the position of the least significant 1-bit in the binary representation of y, with a suitable convention for \rho(0):

\rho(y)=\begin{cases}
	\min\{k \geq 0 : \mathrm{bit}(y,k)=1\} &\text{if } y>0\\
	L &\text{if } y=0
\end{cases}

Let A be a data stream of length M whose cardinality needs to be determined. Let BITMAP[0...L − 1] be the hash space where the ρ(hashed values) are recorded. The algorithm below determines the approximate cardinality of A.

Procedure FM-Sketch:

    for i in 0 to L − 1 do
        BITMAP[i] := 0
    end for
    for each x in A do
        Index := ρ(hash(x))
        if BITMAP[Index] = 0 then
            BITMAP[Index] := 1
        end if
    end for
    B := position of the leftmost 0 bit of BITMAP[]
    return 2^B

If there are N distinct elements in the data stream, then:

  • For i \gg \log(N), BITMAP[i] is certainly 0.
  • For i \ll \log(N), BITMAP[i] is certainly 1.
  • For i \approx \log(N), BITMAP[i] is a fringe of 0s and 1s.
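
The following Python transcription of the pseudocode above is a hedged sketch: the SHA-1-based hash is an illustrative stand-in for the uniform hash function h, and, following the pseudocode, the constant by which Flajolet and Martin additionally divide 2^B (φ ≈ 0.77351) is only noted in a comment.

    import hashlib

    L = 32  # length of the binary hash values

    def fm_hash(x):
        """Hash an item to an L-bit integer (an illustrative choice of h)."""
        digest = hashlib.sha1(str(x).encode()).digest()
        return int.from_bytes(digest[:4], "big")

    def rho(y):
        """Position of the least significant 1-bit of y, with rho(0) = L."""
        if y == 0:
            return L
        return (y & -y).bit_length() - 1

    def fm_sketch(stream):
        bitmap = [0] * (L + 1)              # the extra slot absorbs rho(0) = L
        for x in stream:
            bitmap[rho(fm_hash(x))] = 1
        b = bitmap[:L].index(0) if 0 in bitmap[:L] else L  # leftmost 0 bit
        return 2 ** b  # Flajolet and Martin also divide by phi ~ 0.77351

    print(fm_sketch(range(1000)))           # rough estimate of 1000 distinct items
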
K-Minimum Value Algorithm

The previous algorithm describes the first attempt, by Flajolet and Martin, to approximate F0 in a data stream. Their algorithm picks a random hash function which they assume distributes the hash values uniformly in the hash space.

Bar-Yossef et al. in [8] introduced the k-minimum value (KMV) algorithm for determining the number of distinct elements in a data stream. They use a similar hash function h, which can be normalized to [0,1] as h:[m] \rightarrow [0,1], but fix a limit t on the number of values kept from the hash space. The value of t is assumed to be of the order O\left(\dfrac{1}{\varepsilon^{2}}\right) (i.e., a smaller approximation error ε requires a larger t). The KMV algorithm keeps only the t smallest hash values seen. After all m values of the stream have arrived, the largest kept hash value \upsilon = \mathrm{Max}(h(a_{i})) is used to calculate F'_{0}=\dfrac{t}{\upsilon}. That is, in a close-to-uniform hash space, they expect at least t elements to have hash values less than O\left(\dfrac{t}{F_{0}}\right).

Procedure 2 K-Minimum Value

    Initialize KMV with the first t distinct hash values
    for each a in the stream do
        if h(a) < Max(KMV) then
            Remove Max(KMV) from the KMV set
            Insert h(a) into KMV
        end if
    end for
    return t/Max(KMV)
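
A Python sketch of KMV under the assumptions above: the SHA-1-based hash normalized to [0,1) is an illustrative stand-in for h, and a max-heap holds the t smallest hash values seen so far.

    import hashlib
    import heapq

    def norm_hash(x):
        """Hash an item to a float in [0, 1) (an illustrative choice of h)."""
        digest = hashlib.sha1(str(x).encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2 ** 64

    def kmv(stream, t):
        """Estimate the number of distinct elements from the t smallest hashes."""
        heap = []      # max-heap via negation (heapq is a min-heap)
        kept = set()   # hash values currently kept, to skip duplicates
        for x in stream:
            h = norm_hash(x)
            if h in kept:
                continue
            if len(heap) < t:
                heapq.heappush(heap, -h)
                kept.add(h)
            elif h < -heap[0]:                 # smaller than the current max
                kept.discard(-heapq.heappushpop(heap, -h))
                kept.add(h)
        if len(heap) < t:                      # fewer than t distinct items,
            return len(heap)                   # so the count is exact
        return t / -heap[0]                    # F'_0 = t / upsilon

    print(kmv(range(100000), t=1024))          # rough estimate of 100000
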
Complexity analysis of KMV

The KMV algorithm can be implemented in O\left(\dfrac{1}{\varepsilon^{2}}\cdot\log(m)\right) bits of memory: each hash value requires O(\log(m)) bits, and there are O\left(\dfrac{1}{\varepsilon^{2}}\right) hash values. The access time can be reduced by storing the t hash values in a binary tree, which reduces the update time to O\left(\log\left(\dfrac{1}{\varepsilon}\right)\cdot\log(m)\right).

Calculating Fk

Alon et al. in [4] estimate Fk by defining random variables that can be computed within the available space and time. The expected value of the random variable gives the approximate value of Fk.

Assume the length m of the sequence is known in advance.

Construct a random variable X as follows:

  • Select a_p, a random member of the sequence A at a uniformly chosen index p, and let l = a_p \in(1,2,3,\ldots,n).
  • Let r=|\{q:q\geq p,a_q=l\}| be the number of occurrences of l among the members of the sequence A at or after position p.
  • Define the random variable X=m(r^k-(r-1)^k).

Assume S_1 is of the order O(n^{1-1/k}/\lambda^{2}) and S_2 is of the order O(\log(1/\varepsilon)). The algorithm takes S_2 random variables Y_1,Y_2,\ldots,Y_{S_2} and outputs their median Y, where Y_i is the average of X_{ij} for 1 ≤ j ≤ S_1.
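
A hedged Python sketch of this construction: each estimator samples the index p in a single pass by reservoir-style replacement (the current position is chosen with probability 1/m) and counts later occurrences of a_p. For clarity the driver below materializes the stream and re-reads it S_1 \cdot S_2 times; the actual algorithm runs all estimators in parallel over one pass.

    import random
    import statistics

    def ams_single_estimate(stream, k):
        """One copy of the AMS variable X = m * (r^k - (r-1)^k)."""
        tracked, r, m = None, 0, 0
        for x in stream:
            m += 1
            if random.randrange(m) == 0:  # position m becomes p with prob. 1/m
                tracked, r = x, 1         # restart the occurrence count
            elif x == tracked:
                r += 1                    # an occurrence of a_p after position p
        return m * (r ** k - (r - 1) ** k)

    def ams_fk(stream, k, s1=16, s2=9):
        """Median of s2 means of s1 independent estimates."""
        data = list(stream)               # re-read only for illustration
        groups = [statistics.mean(ams_single_estimate(data, k)
                                  for _ in range(s1)) for _ in range(s2)]
        return statistics.median(groups)

    print(ams_fk(["a", "b", "a", "c", "a", "b"], k=2))  # true F_2 is 14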

Now calculate the expected value of the random variable, E(X).

\begin{array}{lll}
E(X) &=& \sum_{i=1}^{n} \sum_{j=1}^{m_i} (j^k-(j-1)^k) \\
&=& \frac{m}{m} [(1^k+(2^k-1^k)+\ldots+ (m_{1}^{k} - (m_{1}-1)^{k})) \\
&&\;+\; (1^k+(2^k-1^k)+\ldots+ (m_{2}^{k} - (m_{2}-1)^{k}))+\ldots \\
&&\;+\; (1^k+(2^k-1^k)+\ldots+ (m_{n}^{k} - (m_{n}-1)^{k}))] \\
&=& \sum_{i=1}^{n} m_{i}^{k} = F_{k}
\end{array}

Complexity of Fk

From the algorithm to calculate Fk discussed above, we can see that each random variable X stores the value of a_p and r. To compute X, we need to maintain only \log(n) bits for storing a_p and \log(m) bits for storing r. The total number of random variables X is S_1 \cdot S_2.

Hence the total space complexity of the algorithm is of the order O\left(\dfrac{k\log\left(\dfrac{1}{\varepsilon}\right)}{\lambda^{2}}n^{1-\dfrac{1}{k}}\left(\log n + \log m\right)\right).

Simpler approach to calculate F2

The previous algorithm calculates F_2 in order of O( \sqrt{n}(\log m + \log n)) memory bits. Alon et al. in [4] simplified this algorithm using four-wise independent random variables with values mapped to \{-1,1\}.

This further reduces the complexity of calculating F_2 to O\left(\dfrac{\log\left(\dfrac{1}{\varepsilon}\right)}{\lambda^{2}}\left(\log n + \log m\right)\right).
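
A sketch of this "tug of war" estimator in Python; for brevity, a salted SHA-1 bit stands in for a genuinely four-wise independent family of \{-1,1\} variables, so this illustrates the estimator's structure rather than a faithful implementation.

    import hashlib
    import random
    import statistics

    def sign(item, salt):
        """Map an item to +1 or -1 (a stand-in for four-wise independence)."""
        digest = hashlib.sha1(("%s:%s" % (salt, item)).encode()).digest()
        return 1 if digest[0] & 1 else -1

    def f2_single_estimate(stream, salt):
        """Tug of war: Z = sum_i eps_i * a_i satisfies E[Z^2] = F_2."""
        z = 0
        for x in stream:
            z += sign(x, salt)   # each occurrence of item i contributes eps_i
        return z * z

    def f2_estimate(stream, s1=32, s2=9):
        data = list(stream)      # materialized only for illustration
        groups = [statistics.mean(f2_single_estimate(data, random.random())
                                  for _ in range(s1)) for _ in range(s2)]
        return statistics.median(groups)

    print(f2_estimate(["a", "b", "a", "c", "a", "b"]))  # true F_2 is 14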

Heavy hitters

Find the most frequent (popular) elements in a data stream. Notable algorithms for this problem include the Boyer–Moore majority vote algorithm, the Misra–Gries summary, Lossy counting, and the Count-Min sketch.
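
As one example from this family, here is a hedged Python sketch of the Misra–Gries summary, which uses at most k − 1 counters and guarantees that every item occurring more than m/k times in a stream of length m survives as a candidate:

    def misra_gries(stream, k):
        """Return candidates containing every item with frequency > m/k."""
        counters = {}
        for x in stream:
            if x in counters:
                counters[x] += 1
            elif len(counters) < k - 1:
                counters[x] = 1
            else:
                # Decrement every counter; drop counters that reach zero.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    # 'a' occurs 8 times in 11 items, so it must survive for k = 2.
    print(misra_gries(list("aabaacaadaa"), k=2))  # {'a': 5}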

Event detection

Detecting events in data streams is often done using a heavy-hitters algorithm such as those listed above: the most frequent items and their frequencies are determined using one of these algorithms, and the largest increase over the previous time point is then reported as a trend. This approach can be refined by using exponentially weighted moving averages and variance for normalization.[10]

Counting distinct elements

Counting the number of distinct elements in a stream (sometimes called the F0 moment) is another problem that has been well studied. The first algorithm for it was proposed by Flajolet and Martin. In 2010, D. Kane, J. Nelson and D. Woodruff found an asymptotically optimal algorithm for this problem.[11] It uses O(ε^{-2} + log d) space, with O(1) worst-case update and reporting times, as well as universal hash functions and an r-wise independent hash family where r = Ω(log(1/ε) / log log(1/ε)).

Entropy

The (empirical) entropy of a set of frequencies \mathbf{a} is defined as H(\mathbf{a}) = -\sum_{i=1}^n \frac{a_i}{m}\log{\frac{a_i}{m}}, where m = \sum_{i=1}^n a_i.
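
As a small worked example (using the base-2 logarithm, an arbitrary choice), the empirical entropy of the frequency vector (3, 2, 1) can be computed exactly:

    import math

    def empirical_entropy(freqs):
        """H(a) = -sum (a_i/m) log2(a_i/m), with the convention 0 log 0 = 0."""
        m = sum(freqs)
        return -sum((a / m) * math.log2(a / m) for a in freqs if a > 0)

    print(empirical_entropy([3, 2, 1]))  # about 1.459 bits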

Estimation of this quantity in a stream has been done by:

  • McGregor et al.
  • Do Ba et al.
  • Lall et al.
  • Chakrabarti et al.

Online learning

Learn a model (e.g. a classifier) by a single pass over a training set.
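
A minimal single-pass learner, sketched here as a perceptron over a hypothetical stream of labelled examples; the data and update rule are illustrative rather than a specific method from the literature.

    def perceptron_single_pass(stream, dim, lr=1.0):
        """Learn a linear classifier in one pass over (features, label) pairs."""
        w = [0.0] * dim
        for x, y in stream:                    # label y is +1 or -1
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:                 # mistake: nudge w toward y * x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        return w

    # Hypothetical linearly separable stream: the label is the sign of x[0].
    stream = [((1.0, 0.2), 1), ((-1.0, 0.5), -1), ((0.8, -0.3), 1),
              ((-0.9, -0.1), -1), ((1.2, 0.4), 1)]
    print(perceptron_single_pass(stream, dim=2))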


Lower bounds

Lower bounds have been computed for many of the data streaming problems that have been studied. By far, the most common technique for computing these lower bounds has been using communication complexity.

Notes

  1. Heath, Kasif, Kosaraju, Salzberg & Sullivan (1991)
  2. Munro & Paterson (1980)
  3. Flajolet & Martin (1985)
  4. Alon, Matias & Szegedy (1996)
  5. Gilbert et al. (2001)
  6. Xu (2007)
  7. (citation not recoverable from the source)
  8. Bar-Yossef, Ziv; Jayram, T. S.; Kumar, Ravi; Sivakumar, D.; Trevisan, Luca (2002), "Counting distinct elements in a data stream", RANDOM 2002.
  9. Morris, Robert (1978), "Counting large numbers of events in small registers", Communications of the ACM, 21 (10): 840–842.
  10. Schubert, E.; Weiler, M.; Kriegel, H.-P. (2014), "SigniTrend: scalable detection of emerging topics in textual streams by hashed significance thresholds", Proceedings of KDD '14.
  11. Kane, Nelson & Woodruff (2010)

References

  • Alon, Noga; Matias, Yossi; Szegedy, Mario (1999), "The space complexity of approximating the frequency moments", Journal of Computer and System Sciences, 58 (1): 137–147. First published as Alon, Noga; Matias, Yossi; Szegedy, Mario (1996), "The space complexity of approximating the frequency moments", Proceedings of the 28th ACM Symposium on Theory of Computing (STOC 1996), pp. 20–29.
  • Babcock, Brian; Babu, Shivnath; Datar, Mayur; Motwani, Rajeev; Widom, Jennifer (2002), "Models and issues in data stream systems", Proceedings of the 21st ACM Symposium on Principles of Database Systems (PODS 2002).
  • Flajolet, Philippe; Martin, G. Nigel (1985), "Probabilistic counting algorithms for data base applications", Journal of Computer and System Sciences, 31 (2): 182–209.
  • Gilbert, Anna C.; Kotidis, Yannis; Muthukrishnan, S.; Strauss, Martin J. (2001), "Surfing Wavelets on Streams: One-Pass Summaries for Approximate Aggregate Queries", Proceedings of the 27th International Conference on Very Large Data Bases (VLDB 2001).
  • Kane, Daniel M.; Nelson, Jelani; Woodruff, David P. (2010), "An optimal algorithm for the distinct elements problem", Proceedings of the 29th ACM Symposium on Principles of Database Systems (PODS 2010).
  • Munro, J. Ian; Paterson, Mike S. (1980), "Selection and sorting with limited storage", Theoretical Computer Science, 12 (3): 315–323.
  • Xu, Jun (2007), A Tutorial on Network Data Streaming, SIGMETRICS 2007.
  • Heath, D.; Kasif, S.; Kosaraju, R.; Salzberg, S.; Sullivan, G. (1991), "Learning Nested Concepts with Limited Storage", Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI'91), Vol. 2, pp. 777–782, Morgan Kaufmann, San Francisco, CA.
