**1. Introduction**

Consider a set of random variables $x_1, x_2, \ldots, x_n$ with support $\{0,1\}$ such that $x_1, \ldots, x_n$ are uniformly distributed (and independent) over $\{0,1\}$; realizing them exactly requires a sample space $S$ of size $2^n$. What happens if we relax the notion of uniformity a little bit? How large does $S$ have to be if we need $x_1, \ldots, x_n$ to be only approximately uniformly distributed?

There are quite a few well-studied notions of what it means to approximate some distribution. One such idea is the statistical distance:

Definition 1 (Statistical Distance). If $\mathcal{D}_1$ and $\mathcal{D}_2$ are two probability distributions on some (finite) space $\Omega$, then the statistical distance between $\mathcal{D}_1$ and $\mathcal{D}_2$ is defined as

$$\Delta(\mathcal{D}_1, \mathcal{D}_2) \;=\; \max_{T \subseteq \Omega} \left|\, \Pr_{\mathcal{D}_1}[T] - \Pr_{\mathcal{D}_2}[T] \,\right|. \qquad (1)$$

Suppose, given a set $T \subseteq \Omega$, one could efficiently determine what $\Pr_{\mathcal{D}_1}[T]$ and $\Pr_{\mathcal{D}_2}[T]$ were. Then this definition inspires a statistical test to identify if $\mathcal{D}_1$ and $\mathcal{D}_2$ are far apart: find $\Pr_{\mathcal{D}_1}[T]$ and $\Pr_{\mathcal{D}_2}[T]$ for a fixed $T$ and see if they differ by a lot. If we were trying to prove that $\mathcal{D}_1$ and $\mathcal{D}_2$ are $\epsilon$-close, clearly, the worst input we need to consider to confirm this is the set $T$ that satisfies equation (1) with equality. In other words, low statistical distance guarantees that $\mathcal{D}_1$ and $\mathcal{D}_2$ can pass every such *closeness* test.
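As a quick sanity check, the max-over-subsets definition can be computed by brute force on a tiny space, and it coincides with half the $\ell_1$ distance between the two distributions. The sketch below is illustrative only; the two distributions are invented for the example:

```python
from fractions import Fraction
from itertools import chain, combinations

# Two toy distributions on the 4-point space {0, 1, 2, 3}
# (the numbers are invented for illustration, not from the text).
D1 = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}
D2 = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}

def stat_distance(d1, d2):
    """max over all subsets T of | Pr_d1[T] - Pr_d2[T] |, by brute force."""
    omega = list(d1)
    subsets = chain.from_iterable(
        combinations(omega, k) for k in range(len(omega) + 1))
    return max(abs(sum(d1[w] for w in t) - sum(d2[w] for w in t))
               for t in subsets)

# The max-over-subsets form equals half the l1 distance between D1 and D2.
half_l1 = sum(abs(D1[w] - D2[w]) for w in D1) / 2
print(stat_distance(D1, D2), half_l1)  # 1/4 1/4
```

The worst set here is $T = \{0\}$ (equivalently its complement), which witnesses the full distance of $1/4$.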

Suppose a BPP algorithm, say $A$, uses $n$ uniformly distributed random bits to return the right answer with error probability at most $1/3$, and we want to replace them by some *approximately* uniform random variables to reduce the amount of randomness required. $A$ can be regarded as a function of the input $w$ and the random string $r \in \{0,1\}^n$. Suppose $A$ functions correctly on $w$ exactly when $r \in S_w$ for some $S_w \subseteq \{0,1\}^n$. Then

$$\Pr_{r \sim U}[A(w, r) \text{ is correct}] \;=\; \Pr_{U}[S_w] \;\geq\; \frac{2}{3},$$

where $U$ denotes the uniform distribution on $\{0,1\}^n$.

If $\mathcal{D}$ is some approximately uniform distribution with $\Delta(\mathcal{D}, U) \leq \epsilon$, then

$$\Pr_{r \sim \mathcal{D}}[A(w, r) \text{ is correct}] \;=\; \Pr_{\mathcal{D}}[S_w] \;\geq\; \Pr_{U}[S_w] - \epsilon \;\geq\; \frac{2}{3} - \epsilon.$$

Thus, a distribution that is close to the uniform distribution in statistical distance can be used to replace purely uniform bits. However, statistical distance may be too strong a requirement for us and may not allow us enough freedom to significantly decrease the amount of randomness required. Recall that $S_w$ is the set of elements of $\{0,1\}^n$ that make $A$ work correctly on input $w$. We can then replace the uniform distribution used to generate the random bits by any distribution $\mathcal{D}$ such that $\Pr_{\mathcal{D}}[S_w] \geq \Pr_{U}[S_w] - \epsilon$ for all $w$. Depending on the sets $S_w$, this need not require that $\mathcal{D}$ be within statistical distance $\epsilon$ of $U$. In this case, it makes sense to have a definition that considers errors on only some subsets of $\{0,1\}^n$ (the sets $S_w$) and not all of them.

In discrepancy theory, a notion of $\epsilon$-approximation is studied which turns out to be interesting in this context.

Definition 2 ($\epsilon$-approximation). Let $(X, \mathcal{R})$ be a set system where $\mathcal{R} \subseteq 2^{X}$. A subset $A \subseteq X$ is said to be an $\epsilon$-approximation of $X$ with respect to $\mathcal{R}$ if for every $R \in \mathcal{R}$,

$$\left|\, \frac{|A \cap R|}{|A|} - \frac{|R|}{|X|} \,\right| \;\leq\; \epsilon.$$

Equivalently, if $U_A$ denotes the uniform distribution over elements of the set $A$ and $U_X$ the uniform distribution over $X$, then for every $R \in \mathcal{R}$,

$$\left|\, \Pr_{U_A}[R] - \Pr_{U_X}[R] \,\right| \;\leq\; \epsilon.$$

An $\epsilon$-approximation tries to approximate the uniform distribution on $X$ up to an error $\epsilon$ with respect to all the sets in $\mathcal{R}$. If $\mathcal{R}$ were all of $2^X$, then we would recover the definition of statistical distance.
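Definition 2 is easy to check exhaustively on small instances. In the sketch below (the set system of prefix "intervals" and the subsets $A$ are invented for illustration), an evenly spread $A$ is a good $\epsilon$-approximation while a clustered one is not:

```python
from fractions import Fraction

def is_eps_approximation(A, X, R_family, eps):
    """Check Definition 2: for every R in the family,
    | |A ∩ R| / |A|  -  |R| / |X| |  <=  eps."""
    return all(
        abs(Fraction(len(A & R), len(A)) - Fraction(len(R), len(X))) <= eps
        for R in R_family
    )

# Illustrative set system: X = {0,...,7}, R = all prefixes [0, k).
X = frozenset(range(8))
R_family = [frozenset(range(k)) for k in range(1, 9)]

A_spread = frozenset({1, 3, 5, 7})       # every other element of X
A_clustered = frozenset({0, 1})          # two adjacent elements
print(is_eps_approximation(A_spread, X, R_family, Fraction(1, 4)))     # True
print(is_eps_approximation(A_clustered, X, R_family, Fraction(1, 4)))  # False
```

For the spread-out $A$ the worst gap over all prefixes is $1/8$, while the clustered $A$ already errs by $3/4$ on the prefix $\{0,1\}$.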

Let $X$ be $\{0,1\}^n$ and let $\mathcal{R}$ be the collection of sets defined by parity functions, i.e., for each nonempty $T \subseteq [n]$ the set $\{x \in \{0,1\}^n : \bigoplus_{i \in T} x_i = 0\}$. Then an $\epsilon$-approximation for the set system $(X, \mathcal{R})$ is called an $\epsilon$-biased sample space. In other words, an $\epsilon$-biased sample space on $\{0,1\}^n$ fools all parity functions of the $n$ bits. The challenge is to construct small $\epsilon$-biased spaces. As we will see, such a relaxation dramatically reduces the sample space size from $2^n$ to $(n/\epsilon)^2$.

Definition 3 ($\epsilon$-Biased Random Variables). A collection of random variables $x_1, \ldots, x_n$ over $\{0,1\}$ is said to be an $\epsilon$-biased collection of random variables, and the underlying sample space an $\epsilon$-biased space, if for every nonempty $T \subseteq [n]$,

$$\left|\, \Pr\left[\bigoplus_{i \in T} x_i = 0\right] - \Pr\left[\bigoplus_{i \in T} x_i = 1\right] \,\right| \;\leq\; \epsilon.$$
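Definition 3 can also be verified by brute force on small spaces. A minimal sketch (the example space is mine, not from the text): the full cube $\{0,1\}^3$ has bias $0$, while deleting even a single point makes every parity slightly biased:

```python
from itertools import combinations, product

def max_bias(space, n):
    """Maximum over nonempty T ⊆ [n] of
    | Pr[⊕_{i∈T} x_i = 0] - Pr[⊕_{i∈T} x_i = 1] |,
    where a point is drawn uniformly from `space` (a list of n-bit tuples)."""
    worst = 0.0
    for k in range(1, n + 1):
        for T in combinations(range(n), k):
            # sum of (-1)^parity = (#points with even parity) - (#odd)
            parity_sum = sum((-1) ** (sum(x[i] for i in T) % 2) for x in space)
            worst = max(worst, abs(parity_sum) / len(space))
    return worst

cube = list(product([0, 1], repeat=3))
print(max_bias(cube, 3))        # 0.0  (the full cube is 0-biased)
print(max_bias(cube[:-1], 3))   # 1/7 ≈ 0.1429 (one point removed)
```

Removing the point $(1,1,1)$ shifts every parity count by exactly one, so each of the $7$ nonempty parities ends up with bias $1/7$.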

**2. Construction of $\epsilon$-biased spaces**

First of all, what does a $0$-biased space correspond to? As you might imagine, there is only one such space and that is the uniform distribution on $\{0,1\}^n$ itself. How do we prove this? Consider any probability function $p$ that assigns a probability $p(x)$ to every element $x$ of $\{0,1\}^n$. Now, the $0$-bias condition induces several linear constraints (in fact, equality constraints) on such a probability function, one for each nonempty $T \subseteq [n]$. In fact, they are exactly $2^n - 1$ in number. One can also quite easily see that they are all independent (the parity functions are orthogonal characters). Add to them the constraint that the probabilities over all elements of $\{0,1\}^n$ must sum up to $1$. Then we have $2^n$ independent linear equality constraints and $2^n$ variables. It is easy to see that $p(x) = 2^{-n}$ satisfies all these constraints, and such a system cannot have more than one solution.
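The counting argument above can be checked mechanically for small $n$. The sketch below (pure Python, exact rationals) writes down the $2^n - 1$ parity constraints plus the normalization constraint for $n = 3$ and solves the system by Gaussian elimination; the unique solution is indeed the uniform distribution:

```python
from fractions import Fraction
from itertools import product

n = 3
points = list(product([0, 1], repeat=n))   # the 2^n elements of {0,1}^n

# One row per subset T of [n] (T encoded as a 0/1 vector):
#   T = 0...0  : sum of probabilities       = 1   (normalization)
#   T nonzero  : sum_x (-1)^{<T,x>} p(x)    = 0   (the 0-bias condition)
rows, rhs = [], []
for T in points:
    rows.append([Fraction((-1) ** (sum(t * x for t, x in zip(T, X)) % 2))
                 for X in points])
    rhs.append(Fraction(1) if not any(T) else Fraction(0))

# Plain Gauss-Jordan elimination over the rationals.
m = len(points)
for col in range(m):
    piv = next(r for r in range(col, m) if rows[r][col] != 0)
    rows[col], rows[piv] = rows[piv], rows[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(m):
        if r != col and rows[r][col] != 0:
            f = rows[r][col] / rows[col][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
            rhs[r] = rhs[r] - f * rhs[col]

solution = [rhs[r] / rows[r][r] for r in range(m)]
print(solution == [Fraction(1, 2 ** n)] * m)  # True: the system forces uniformity
```

The constraint matrix here is exactly the $\pm 1$ character (Hadamard) matrix, which is invertible; that is the linear-algebra content of the independence claim.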

Let’s now discuss a simple but cool construction of an $\epsilon$-biased space. Let $m = \log(n/\epsilon)$, so that $2^m = n/\epsilon$. Work over the finite field $\mathbb{F}_{2^m}$. We show how to construct an $\epsilon$-biased space of size $2^{2m} = (n/\epsilon)^2$.

Let $x$ and $y$ be drawn uniformly and independently at random from $\mathbb{F}_{2^m}$. Let $r_i = \langle x^i, y \rangle$ for $i \in [n]$. Here the inner product is the standard inner product over $\mathbb{F}_2$, where we see the elements of $\mathbb{F}_{2^m}$ as vectors in $\mathbb{F}_2^m$. We claim that $r_1, \ldots, r_n$ are $\epsilon$-biased. The proof is based on two very simple ideas:

- A degree-$d$ univariate polynomial over a field $\mathbb{F}$ has at most $d$ zeros in $\mathbb{F}$.
- The elements of $\mathbb{F}_{2^m}$ can be seen as vectors of length $m$ over $\mathbb{F}_2$ for the purpose of addition. That is, $a + b$ in $\mathbb{F}_{2^m}$ is the same as $a \oplus b$ when $a$ and $b$ are seen as elements of $\mathbb{F}_2^m$.
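The construction is easy to test exhaustively for tiny parameters. In the sketch below, the parameters $m = 3$, $n = 4$ and the irreducible polynomial $z^3 + z + 1$ fixing $\mathbb{F}_8$ are my choices for illustration; the code enumerates all pairs $(x, y) \in \mathbb{F}_8^2$, forms the bits $r_i = \langle x^i, y \rangle$, and confirms that every nonempty parity has bias at most $\epsilon = n/2^m$:

```python
from itertools import combinations

M = 3           # work in F_{2^M}; elements are M-bit integers
MOD = 0b1011    # z^3 + z + 1, an irreducible polynomial fixing F_8
N = 4           # number of random bits; here eps = N / 2^M = 1/2

def gf_mul(a, b):
    """Multiplication in F_{2^M}: carry-less product reduced mod MOD."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def inner(a, b):
    """Inner product over F_2, viewing field elements as bit vectors."""
    return bin(a & b).count("1") % 2

# One sample point per pair (x, y): the bits r_i = <x^i, y>, i = 1..N.
space = [tuple(inner(gf_pow(x, i), y) for i in range(1, N + 1))
         for x in range(2 ** M) for y in range(2 ** M)]

# Maximum bias over all nonempty parity subsets T.
worst = max(
    abs(sum((-1) ** (sum(r[i] for i in T) % 2) for r in space)) / len(space)
    for k in range(1, N + 1) for T in combinations(range(N), k)
)
print(len(space), worst)   # 64 0.5
```

Here the worst bias is exactly $0.5 = n/2^m$, attained by subsets $T$ for which the polynomial $\sum_{i \in T} z^i$ has the maximum number of roots in $\mathbb{F}_8$, matching the bound proved in Proposition 4 below.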

Proposition 4. Let $x$ and $y$ be drawn uniformly at random from $\mathbb{F}_{2^m}$ for $2^m = n/\epsilon$, and let $r_i = \langle x^i, y \rangle$ for $i \in [n]$. Then the $r_i$s are $\epsilon$-biased.

*Proof:* Let $T \subseteq [n]$ be any nonempty set.

Let $p_T(z) = \sum_{i \in T} z^i$. Then,

$$\bigoplus_{i \in T} r_i \;=\; \bigoplus_{i \in T} \langle x^i, y \rangle \;=\; \left\langle \sum_{i \in T} x^i,\; y \right\rangle \;=\; \langle p_T(x), y \rangle.$$

Now, $p_T$ is a polynomial of degree at most $n$. Thus it can have at most $n$ roots. The parity $\langle p_T(x), y \rangle$ is $0$ with certainty if $x$ is a zero of the polynomial $p_T$. Now, $\Pr_x[p_T(x) = 0] \leq n/2^m = \epsilon$. Also, since $y$ is a uniformly random vector in $\mathbb{F}_2^m$, for a non-zero $p_T(x)$ we have $\Pr_y[\langle p_T(x), y \rangle = 0] = 1/2$.

Thus, conditioned on $p_T(x) \neq 0$ the parity is perfectly unbiased, so the only contribution to the bias comes from the event $p_T(x) = 0$. We have:

$$\left|\, \Pr\left[\bigoplus_{i \in T} r_i = 0\right] - \Pr\left[\bigoplus_{i \in T} r_i = 1\right] \,\right| \;\leq\; \Pr_x[p_T(x) = 0] \;\leq\; \frac{n}{2^m} \;=\; \epsilon.$$

This completes the proof.