Time-Space Tradeoffs and Query Complexity in Statistics, Coding Theory, and Quantum Computing
Computational complexity is the field that studies the computational resources algorithms need to correctly solve computational problems. In this thesis, we derive lower bounds on the computational complexity of several problems for different classes of algorithms. The computational resources we consider are running time, memory usage, and input query complexity, i.e., the number of positions in the input that the algorithm reads. While deriving upper bounds involves constructing algorithms that use their computational resources efficiently, lower bounds for a computational problem establish limits on how fast or space-efficient any algorithm solving that problem can be. We consider two main themes of lower bounds: time-space tradeoffs and query complexity lower bounds. A time-space tradeoff relates the running time and the space of any algorithm solving a given problem; it gives a complete picture of the hardness of the problem by bounding one resource from below when the other is limited. Query complexity measures the depth of a decision tree computing the problem. We derive several lower and upper bounds for different problems on several classical computational models and on quantum computers, where queries to the input are represented by quantum operations acting on the qubits of the algorithm.

1) We derive the largest time-space tradeoff known for a randomized algorithm solving an explicit problem. We consider the pointer jumping problem, also known as the outdegree-1 graph accessibility problem, and derive a T in Ω(n log(n/S) log log(n/S)) time-space tradeoff for any randomized oblivious algorithm computing it; oblivious algorithms have the property that the sequence of operations performed is independent of the value of the input. We derive a similar lower bound for randomized oblivious algorithms with Boolean inputs.
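To make the problem concrete, here is a small illustrative sketch of pointer jumping in Python. The function name and input representation are ours for illustration only; the thesis's lower bound concerns the oblivious model of computation, not this particular program.

```python
def pointer_jump(succ, start, k):
    """Follow an outdegree-1 graph for k steps from `start`.

    succ[v] is the unique out-neighbour of node v, so the list
    `succ` encodes a functional (outdegree-1) graph. Each access
    succ[v] models one query to the input; the problem asks which
    node is reached after k pointer dereferences.
    """
    v = start
    for _ in range(k):
        v = succ[v]
    return v


# Example: a 6-node functional graph forming a single cycle.
succ = [1, 2, 3, 4, 5, 0]
print(pointer_jump(succ, 0, 4))  # 0 -> 1 -> 2 -> 3 -> 4, prints 4
```

Note that this naive loop makes k sequential, input-dependent... no: here the *addresses* queried depend on the values read, which is exactly what an oblivious algorithm may not do, and why obliviousness makes the problem hard.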
2) Frequency moments and order statistics are statistics computed on a sequence of numbers and are highly relevant in many data analysis settings. We consider computing several copies of these functions on overlapping sequences of numbers, obtained by sliding a window of fixed length over the input; these are called the sliding-window versions of these functions. We give hardness separations between the sliding-window versions of these statistics. On one hand, we give a TS in Ω(n^2) lower bound for any algorithm computing sliding-window frequency moments. To complement our lower bound, we give a TS in O(n^2 log^c n) comparison-based algorithm for sliding-window frequency moments. On the other hand, we describe a T^2 S in O(n^3 log^c n) algorithm for sliding-window element distinctness. For order statistics, we derive a similar lower bound of TS in Ω(n^2) for any algorithm computing the sliding-window version of the median, while for the sliding-window version of the maximum (and minimum) we construct an O(n log n)-time, logarithmic-space algorithm.

3) We study the increase in complexity on quantum computers when considering sliding-window frequency moments. We show that quantum computers do not incur the same increase in complexity as their classical counterparts by constructing a quantum algorithm for the sliding-window version of the zeroth frequency moment running in time O(n^{3/2}) and logarithmic space, thus beating the classical lower bound for this problem.

4) In the area of error-correcting codes, we study the query complexity of error detection and correction algorithms. We prove that linear codes over large alphabets that are small in size and concentrated in terms of their codeword weights have testers and correctors that require only a constant number of queries to a word received over a noisy channel.
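As an illustration of the sliding-window setting, the sketch below computes sliding-window maxima with the standard monotone-deque technique. This is not the thesis's logarithmic-space algorithm: it runs in O(n) time but keeps up to w indices in memory, so it trades space for time in the opposite direction.

```python
from collections import deque


def sliding_window_max(a, w):
    """Maximum of every length-w window of the sequence a.

    A deque holds indices whose values are strictly decreasing,
    so the front of the deque is always the current window's max.
    Each index is pushed and popped at most once: O(n) time,
    but O(w) working space.
    """
    dq, out = deque(), []
    for i, x in enumerate(a):
        while dq and a[dq[-1]] <= x:   # drop dominated elements
            dq.pop()
        dq.append(i)
        if dq[0] <= i - w:             # front fell out of the window
            dq.popleft()
        if i >= w - 1:                 # first full window is complete
            out.append(a[dq[0]])
    return out


print(sliding_window_max([3, 1, 4, 1, 5, 9, 2, 6], 3))
# prints [4, 4, 5, 9, 9, 9]
```

The contrast with the text's result is the point: for maximum (and minimum) a near-linear-time, logarithmic-space algorithm exists, while for the median any algorithm must satisfy TS in Ω(n^2).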
Such codes are called locally testable and self-correctable. Since random codes of small size have codeword weights concentrated in a small range of values, they are locally testable and self-correctable with high probability.

5) For linear codes with information-efficient encoding, we study the tradeoff between the time and space of their encoders and their error-correction properties. We derive properties of generator matrices of asymptotically good linear codes that guarantee an optimal time-space tradeoff of TS in Ω(n^2) for any algorithm encoding them. Moreover, we prove a first step in an approach to establishing these properties for all generator matrices of such codes.

6) To compare classical computers to their quantum counterparts, we consider the query complexity of quantum algorithms computing frequency moments and the onto function, which takes a function as input and determines whether it is surjective. We prove that any quantum computer computing the onto function, or the parity of the zeroth frequency moment, requires a linear number of queries to the input. Since the onto function with Boolean inputs has a simple polynomial-size, constant-depth circuit, we obtain an almost-linear lower bound on the query complexity of AC^0, the class of circuits of polynomial size, constant depth, and unbounded fan-in gates.
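For readers unfamiliar with the onto function, the following brute-force classical check makes the query model concrete; the function name and encoding are ours. It reads every position of the input, matching the intuition behind the linear query lower bound (which the thesis proves even for quantum algorithms).

```python
def is_onto(f, m):
    """Decide whether f : [n] -> [m] is surjective.

    The input function is given as a list of its values, so f[i]
    is the image of i; each list access models one query. This
    exhaustive check queries all n positions of the input.
    """
    return set(f) == set(range(m))


print(is_onto([0, 2, 1, 0], 3))  # True: every value in {0, 1, 2} is hit
print(is_onto([0, 0, 2, 2], 3))  # False: the value 1 is never hit
```

Deciding whether even one value is missed can force an algorithm to inspect essentially the whole input, which is what makes the onto function a natural hard instance for query complexity.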