Monday, 25 March 2019

We will begin with a brief survey of known results on learning arithmetic circuits and its connection to lower bounds. Then we will discuss two results in detail: sparse polynomial interpolation and reconstruction of read-once oblivious algebraic branching programs (ROABP). If time permits, we will give an overview of Kaltofen's polynomial factorization. The first tutorial will end with a short discussion on how the ROABP reconstruction technique (as it is) does not work for some other models for which good lower bounds are known, and the "need" for considering non-degenerate circuits. The second tutorial will focus on a recently introduced paradigm for learning non-degenerate circuits. The paradigm is based on decomposing a vector space into subspaces that remain invariant under the action of suitable linear operator spaces. We plan to demonstrate the effectiveness of this paradigm by considering non-degenerate homogeneous depth three circuits, depth four powering circuits, etc.

A matrix X is called a linear matrix if all its entries are affine forms. Given oracle access to the w^2 degree-d polynomials in n variables that are the entries of a w x w matrix F, we wish to compute a factorization of F as a product of linear matrices, if such a factorization exists. Finding such a representation is closely related to finding an optimal way to compute a given polynomial via an algebraic branching program. Over the rationals and finite fields of sufficiently large characteristic, we give a poly(n,d,b) time randomized algorithm to compute such a factorization in the average case, when w \leq \frac{\sqrt{n}}{2}. Here b is the bit length of the coefficients of the polynomials in F. Over finite fields the output of our algorithm is over the base field, whereas over Q the output is over a small extension of Q. Our algorithm actually gives an efficient worst-case randomized reconstruction algorithm for pure matrix products (a non-degeneracy notion that we define in this work). Additionally, we show that if we are given just one entry of F, then we can recover a suitable factorization in poly(d^{w^3},n,b) time. This is joint work with Neeraj Kayal and Chandan Saha.
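As a toy illustration of the setup (not the reconstruction algorithm itself), the following sketch multiplies two small linear matrices numerically and checks the entries of the product against the degree-2 polynomials they compute; the particular matrices M1, M2 are made up for this example.

```python
# Toy instance: F = M1 * M2, where M1, M2 are "linear matrices"
# (every entry is an affine form); the entries of F are then
# degree-2 polynomials in the variables.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def M1(x1, x2):  # hypothetical linear matrix
    return [[x1, 1], [0, x2]]

def M2(x1, x2):  # hypothetical linear matrix
    return [[x2, x1], [1, 0]]

def F(x1, x2):   # the product matrix; a reconstruction algorithm
    return matmul(M1(x1, x2), M2(x1, x2))  # only sees F's entries

# Expanding symbolically: F = [[x1*x2 + 1, x1**2], [x2, 0]].
for (a, b) in [(2, 3), (-1, 4), (0, 0)]:
    assert F(a, b) == [[a * b + 1, a * a], [b, 0]]
```

A reconstruction algorithm is given (oracle access to) the four entries of F and must recover some factorization into linear matrices.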

A useful corollary of the famous Sylvester-Gallai theorem, which has found many applications in theoretical computer science, states the following: if a finite set of linear forms L satisfies that for every two forms L1, L2 ∈ L there is a subset I ⊂ L with L1, L2 ∉ I such that whenever L1 and L2 vanish, ∏i∈I Li also vanishes, then the linear span of L has dimension at most 3. This corollary is used to achieve deterministic algorithms for certain depth-3 polynomial identities and to bound the performance of locally correctable codes. In this work we prove that if a finite set of irreducible quadratic polynomials Q satisfies that for every two polynomials Q1, Q2 ∈ Q there is a subset I ⊂ Q with Q1, Q2 ∉ I such that whenever Q1 and Q2 vanish, ∏i∈I Qi also vanishes, then the linear span of the polynomials in Q has dimension O(1). This extends an earlier result of [Shp18], which showed a similar conclusion when |I| = 1. This result brings us one step closer to obtaining a polynomial-time deterministic algorithm for the polynomial identity question of certain depth-4 identities. To obtain our main theorem we prove a new result classifying the possible ways in which a product of quadratic polynomials ∏i Qi can vanish when two other quadratic polynomials vanish.
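To make the hypothesis concrete, here is a brute-force sketch (over Q, using exact rational arithmetic) that checks the |I| = 1 version of the condition for a small hand-picked set of linear forms and confirms that its span has dimension at most 3; the example set is our own illustration, not from the paper.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    M = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def in_span(v, basis):
    return rank(basis + [v]) == rank(basis)

# linear forms as coefficient vectors: x, y, x+y, x-y
L = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [1, -1, 0]]

def sg_condition(L):
    # |I| = 1 case: for every pair there is a third form in their span
    # (a form in span(L1, L2) vanishes whenever both L1 and L2 vanish)
    return all(any(in_span(L[c], [L[a], L[b]])
                   for c in range(len(L)) if c not in (a, b))
               for a in range(len(L)) for b in range(len(L)) if a != b)

assert sg_condition(L)
assert rank(L) <= 3   # the corollary's conclusion holds (here the span has dim 2)
```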

Let $C$ be an arithmetic circuit given as input that computes a polynomial $f\in\mathbb{F}[X], X=\{x_1,x_2,\ldots,x_n\}$. We obtain new algorithms for the following two problems, first studied by Koutis and Williams \cite{Kou08, Wi09, KW16}.

(k,n)-MLC: Compute the sum of the coefficients of all degree-$k$ multilinear monomials in the polynomial $f$.

k-MMD: Test if there is a nonzero degree-$k$ multilinear monomial in the polynomial $f$.

* Our algorithms are based on the fact that the Hadamard product $f\circ S_{n,k}$ is the degree-$k$ multilinear part of $f$, where $S_{n,k}$ is the $k^{th}$ elementary symmetric polynomial.

* Our approach to computing $f\circ S_{n,k}$ involves a symmetrization trick which leads us to study the \emph{noncommutative symmetrized elementary symmetric polynomial} $S_{n,k}^{*}$ defined by Nisan\cite{Ni91}. We obtain an explicit $O^*(\binom{n}{\downarrow k/2})$ size algebraic branching program (ABP) for $S_{n,k}^*$, and an explicit $O^{*}(2^k)$ size ABP for a polynomial \emph{weakly equivalent} to $S_{n,k}^{*}$. We also briefly discuss the complexity of other polynomials related to $S_{n,k}^*$: rectangular permanent and rectangular determinant.

* As applications of our explicit ABP construction of $S^*_{n,k}$ mentioned above, for (k,n)-MLC we get a deterministic algorithm of run time $O^*(n^{k/2+c\log k})$ (where $c$ is a constant), answering an open question in Koutis and Williams \cite[ICALP'09]{KW16}. As corollaries we get $O^*(\binom{n}{\downarrow k/2})$-time exact counting algorithms for several combinatorial problems: k-Tree, t-Dominating set, m-Dimensional k-Matching.

* For k-MMD we obtain a randomized algorithm of $4.32^k\cdot poly(n,s)$ time and $poly(n,k,s)$ space. This matches the run time of a recent algorithm \cite{BDH18} for k-MMD which requires exponential (in $k$) space.
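The Hadamard-product identity in the first bullet above is easy to see in a sparse-representation toy example (our own illustration): multiplying coefficients pointwise with $S_{n,k}$, whose coefficients are 1 on exactly the multilinear degree-$k$ monomials, keeps exactly the multilinear degree-$k$ part of $f$.

```python
from itertools import combinations

def hadamard(f, g):
    # coefficient-wise (Hadamard) product of sparse polynomials,
    # each represented as {exponent tuple: coefficient}
    return {m: c * g[m] for m, c in f.items() if m in g}

def elem_sym(n, k):
    # S_{n,k}: sum of all multilinear degree-k monomials, coefficient 1
    out = {}
    for S in combinations(range(n), k):
        e = [0] * n
        for i in S:
            e[i] = 1
        out[tuple(e)] = 1
    return out

# f = 3*x0*x1 + 5*x0^2 + 2*x1*x2 + 7*x0  in n = 3 variables
f = {(1, 1, 0): 3, (2, 0, 0): 5, (0, 1, 1): 2, (1, 0, 0): 7}

# f o S_{3,2} is exactly the multilinear degree-2 part of f
assert hadamard(f, elem_sym(3, 2)) == {(1, 1, 0): 3, (0, 1, 1): 2}
```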

Tuesday, 26 March 2019

We will discuss a short and self-contained proof (based on joint work with Chi-Ning Chou and Noam Solomon) of a classical result of Kaltofen from the mid-1980s, where he showed that if an n-variate degree-d polynomial f can be computed by an arithmetic circuit of size s, then each of its factors can be computed by an arithmetic circuit of size at most poly(s, n, d).

Polynomial factorization is one of the fundamental questions in algebraic complexity. Given a multivariate polynomial f in the form of a succinct representation (e.g., an arithmetic circuit), the task is to find a succinct representation for its factors. In addition to being a natural question, polynomial factorization has many applications, such as hardness versus randomness and decoding Reed-Solomon codes.

An arithmetic circuit class C is closed under taking factors if the following holds: if a polynomial f lies in the class C, then its factors also lie in C. In this talk, I'm going to focus on two recent closure results in polynomial factorization. The first one is for constant-depth arithmetic circuits of poly-logarithmic degree, and the second one is for VNP. Some applications of these results and future directions will also be discussed.

The classical lemma of Ore-DeMillo-Lipton-Schwartz-Zippel states that any nonzero polynomial f(x1,...,xn) of degree at most s will evaluate to a nonzero value at some point on an n-dimensional grid S^n with side length |S| > s. This gives an explicit hitting set of size (s+1)^n for all n-variate, degree-s algebraic circuits of size s. We prove the following results:

- Let eps > 0 be a constant. For a sufficiently large constant n and all s >= n, if we have an explicit hitting set of size (s+1)^{n - eps} for the class of n-variate, degree-s polynomials that are computable by algebraic circuits of size s, then for all large s we have an explicit hitting set of size s^{exp(exp(O(log* s)))} for s-variate circuits of degree s and size s. That is, if we can obtain even a barely non-trivial saving over the trivial (s+1)^n-sized hitting set for constant-variate circuits, then we can get an almost complete derandomization of PIT.

- The above result holds when "circuits" are replaced by "formulas" or "algebraic branching programs".

This extends a recent surprising result of Agrawal, Ghosh and Saxena (STOC 2018), who proved the same conclusion for the class of algebraic circuits if the hypothesis provided a hitting set of size at most s^{n^{0.5 - eps}} (where eps > 0 is any constant). Hence, our work significantly weakens the hypothesis of Agrawal, Ghosh and Saxena to require only a slightly non-trivial saving over the trivial hitting set, and also presents the first such result for algebraic branching programs and formulas. This is joint work with Mrinal Kumar and Ramprasad Saptharishi.
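The ODLSZ grid argument in miniature (a toy sketch with a made-up polynomial): any nonzero degree-s polynomial must be nonzero somewhere on a grid of side length s + 1, so exhaustive evaluation is a correct, if trivially large, hitting set.

```python
from itertools import product

def find_nonzero(f, n, S):
    # exhaustively evaluate f on the grid S^n; the ODLSZ lemma
    # guarantees a nonzero point whenever f != 0 and |S| > deg(f)
    for pt in product(S, repeat=n):
        if f(*pt) != 0:
            return pt
    return None

# a nonzero degree-2 polynomial in 2 variables (hypothetical example)
f = lambda x, y: (x - 1) * (y - 2)

pt = find_nonzero(f, 2, range(3))   # grid {0,1,2}^2, side length 3 > deg f
assert pt is not None and f(*pt) != 0
```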

We consider the problem of obtaining parameterized lower bounds for the size of arithmetic circuits computing polynomials, with the degree of the polynomial as the parameter. We obtain a parameterized lower bound (i.e., n^{t(k)} for some function t of k) on the size of ROABPs computing a hard multilinear polynomial that can be computed by a depth four circuit of size g(k)poly(n) for some computable function g.

Derandomization of blackbox identity testing reduces to extremely special circuit models. After a line of work, it is known that focusing on circuits with constant depth and constantly many variables is enough (Agrawal, Ghosh, Saxena, STOC'18) to get to general hitting-sets and circuit lower bounds. This inspires us to study circuits with few variables, e.g., logarithmic in the size $s$.

We give the first poly($s$)-time blackbox identity test for size-$s$, $n=O(\log s)$ variate depth-$3$ diagonal circuits (or $\Sigma\wedge\Sigma^n$). This model is well-studied, but no poly$(s2^n)$-time identity test was known before our work. We introduce the concept of {\em cone-closed} basis isolation and prove its usefulness in studying log-variate circuits. It subsumes the previous notions of rank concentration studied extensively in the context of ROABP models.

This is joint work with Michael A. Forbes and Nitin Saxena.

Wednesday, 27 March 2019

The talk is an introduction to geometric complexity theory, an approach towards complexity lower bounds in algebraic complexity theory that was initiated by Mulmuley and Sohoni in 2001. I define the basic notions from algebraic geometry, representation theory, and algebraic combinatorics, state some fundamental open problems, and remark on some recent progress.

Geometric Complexity Theory, as initiated by Mulmuley and Sohoni in two papers (SIAM J Comput 2001, 2008), aims to separate algebraic complexity classes via representation-theoretic multiplicities in coordinate rings of specific group varieties. The papers also conjecture that the vanishing behavior of these multiplicities would be sufficient to separate complexity classes (so-called occurrence obstructions). The existence of such strong occurrence obstructions was disproven in 2016 in two successive papers, Ikenmeyer-Panova (Adv. Math.) and Bürgisser-Ikenmeyer-Panova (J. AMS). This raises the question of whether separating group varieties via representation-theoretic multiplicities is stronger than separating them via occurrences. This paper provides for the first time a setting where separating with multiplicities can be achieved, while the separation with occurrences is provably impossible. Our setting is surprisingly simple and natural: We study the variety of products of homogeneous linear forms (the so-called Chow variety) and the variety of polynomials of bounded border Waring rank (i.e., a higher secant variety of the Veronese variety).

This is joint work with Julian Dörfler and Greta Panova.

In a Nisan-Wigderson design polynomial, denoted NW, every pair of monomials has few variables in common, and this property has been used in recent years to prove lower bounds for several classes of arithmetic circuits. But unlike other well-known polynomial families, like the Permanent, the Determinant and the iterated matrix multiplication polynomials, many natural questions about the design polynomial family are open. Some of them are as follows:

* Is the design polynomial family VNP-complete?

* What is the structure of the group of symmetries of NW?

* Is NW characterized by its symmetries over the complex field, the real field and finite fields?

* Given an arithmetic circuit C, can we efficiently test if C computes NW?

* Given oracle access to a polynomial f, can we determine efficiently if f is equivalent to NW, i.e., if f = NW(AX) for some invertible matrix A, where X is a column vector of variables?

* Given a point v in the Boolean hypercube, can it be checked efficiently whether NW(v) = 0?

In this talk, we will discuss answers to a few of these questions. Joint work with Chandan Saha.
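One common instantiation of the design (a sketch of the standard construction, with small made-up parameters): monomials are indexed by univariate polynomials of degree < k over F_p, and any two distinct monomials share fewer than k variables, because distinct polynomials of degree < k agree at fewer than k points.

```python
from itertools import product

p, k = 5, 2   # small illustrative parameters: field size and degree bound

def evaluate(coeffs, x):
    return sum(c * x ** j for j, c in enumerate(coeffs)) % p

# one monomial per univariate polynomial h over F_p with deg h < k:
# the monomial prod_i x_{i, h(i)} is recorded as the tuple (h(0), ..., h(p-1))
monomials = [tuple(evaluate(h, i) for i in range(p))
             for h in product(range(p), repeat=k)]

# any two distinct monomials share fewer than k variables
max_shared = max(sum(a == b for a, b in zip(m1, m2))
                 for i, m1 in enumerate(monomials)
                 for m2 in monomials[i + 1:])
assert len(monomials) == p ** k and max_shared < k
```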

We consider the problem of constructing explicit matrices which cannot be expressed as a product of a few sparse matrices. In addition to being a natural mathematical question, this problem appears in various incarnations in computer science. In the talk we will survey the background and motivation, prove some lower bounds, and present the main open problems.
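A toy example of the phenomenon (our own illustration): the dense 4x4 Hadamard matrix factors into two sparse matrices via the Kronecker identity H4 = (H2 ⊗ I2)(I2 ⊗ H2), halving the nonzeros per factor; the lower-bound question asks for explicit matrices admitting no such sparse factorization.

```python
def kron(A, B):
    # Kronecker product of two matrices (lists of rows)
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def nnz(M):
    return sum(1 for row in M for v in row if v != 0)

I2 = [[1, 0], [0, 1]]
H2 = [[1, 1], [1, -1]]

H4 = kron(H2, H2)                    # dense: all 16 entries nonzero
F1, F2 = kron(H2, I2), kron(I2, H2)  # two sparse factors, 8 nonzeros each

assert matmul(F1, F2) == H4          # mixed-product property of Kronecker
assert nnz(H4) == 16 and nnz(F1) == 8 and nnz(F2) == 8
```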

Based on a joint work with Mrinal Kumar.

For a matrix with entries given by multivariate polynomials (over a field F) of degree bounded by a constant d, a problem of interest is to find its rank over F(x1,...,xn). The problem seems to be of fundamental nature, as it generalizes several computational problems arising in algebra, geometry, and combinatorics. For instance, when the entries are homogeneous linear forms, this rank computation problem already captures problems like testing for a perfect matching in graphs and identity testing for algebraic branching programs. Its higher-degree cases capture more general problems of algebro-geometric nature, like testing algebraic independence of polynomials using the Jacobian matrix, and finding the dimension of the dual varieties of hypersurfaces using the Hessian matrix.

Although the problem has a simple and efficient randomized algorithm using the Schwartz-Zippel lemma, an efficient deterministic computation of the rank is a major open problem, since a deterministic algorithm for exactly computing the rank in the linear case is already equivalent to the celebrated Polynomial Identity Testing (PIT) problem which itself would imply circuit complexity lower bounds (Kabanets-Impagliazzo'03).
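The randomized algorithm is just random evaluation: substitute independent random field elements for the variables and compute the rank of the resulting scalar matrix, which by Schwartz-Zippel equals the symbolic rank with high probability. A sketch on a tiny Edmonds-matrix instance (the graph is a made-up example):

```python
import random
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    M = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Edmonds matrix of a bipartite graph: entry (i, j) is a fresh variable
# x_ij if (i, j) is an edge, else 0; its symbolic rank = max matching size.
edges = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)}   # right vertex 2 is isolated
n = 3
rng = random.Random(1)
A = [[rng.randrange(1, 10 ** 6) if (i, j) in edges else 0
      for j in range(n)] for i in range(n)]

# with high probability the rank survives the random substitution
assert rank(A) == 2   # max matching in this graph has size 2
```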

In this talk, we will see a simple deterministic algorithm for computing the rank up to a factor of (1 - \epsilon), which runs in polynomial time (i.e. we give a PTAS) when \epsilon and d are constants.

(joint work with Vishwas Bhargava, Markus Bläser, and Gorav Jindal)

Thursday, 28 March 2019

Invariant theory is a classical field of mathematics that studies actions of groups on vector spaces and the underlying symmetries. Many fundamental problems in algebraic complexity can be stated in the language of invariant theory. We will survey recent progress on designing optimization based algorithms for various problems in invariant theory such as null cone membership and orbit-closure intersection, and also mention plenty of open problems.

In this talk we will see recent optimization-based algorithms for the null-cone problem and the orbit-closure intersection problem for the left-right action, and how these problems from invariant theory relate to polynomial identity testing for a restricted class of arithmetic circuits.
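The simplest member of this family of scaling algorithms is Sinkhorn's alternating scaling for the commutative (matrix scaling) analogue: alternately normalize row and column sums, and for a strictly positive matrix the iteration converges to a doubly stochastic matrix. This is a sketch of the commutative toy case only, not the operator-scaling algorithm for the left-right action.

```python
def sinkhorn(M, iters=200):
    # alternately normalize row sums and column sums to 1
    M = [row[:] for row in M]
    n = len(M)
    for _ in range(iters):
        for i in range(n):
            s = sum(M[i])
            M[i] = [v / s for v in M[i]]
        for j in range(n):
            s = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= s
    return M

D = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
row_err = max(abs(sum(row) - 1) for row in D)
col_err = max(abs(sum(D[i][j] for i in range(2)) - 1) for j in range(2))
assert row_err < 1e-9 and col_err < 1e-9   # D is (nearly) doubly stochastic
```

Failure of such a scaling process to converge is what detects membership in the null cone in the non-commutative setting.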

Joint work with Zeyuan-Allen Zhu, Ankit Garg, Yuanzhi Li and Avi Wigderson

It is a fundamental problem in the study of computational complexity to understand if access to more resources would mean strictly more computational power. In classical complexity, we have seen such examples in terms of the Time Hierarchy and Space Hierarchy theorems. Here, time and space are the resources. It is natural to ask such a question in the algebraic complexity setting. Here, access to more resources translates to allowing the model to perform more operations, which in turn allows the "algebraic formulas" to have larger size.

Arithmetic formulas are directed trees where the leaves are labelled by variables and elements of the field, and internal vertices are labelled by + or x. Every internal vertex computes a polynomial by operating on its inputs. They are a natural model for computing polynomials syntactically (symbolically). The size of an arithmetic formula is the number of + and x operations needed to compute the polynomial. Rephrasing the question in the algebraic setting, we ask the following: for any s and \varepsilon > 0, are there polynomials that can be computed by arithmetic formulas of size s but not by any arithmetic formula of size s^{1-\varepsilon}?
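The model in miniature (a toy sketch with a made-up formula): a formula is an expression tree, and its size counts the + and x gates.

```python
# A formula is a tree: leaves hold variables or constants,
# internal nodes are tuples (op, left, right) with op in {'+', '*'}.

def evaluate(node, env):
    if isinstance(node, tuple):
        op, l, r = node
        a, b = evaluate(l, env), evaluate(r, env)
        return a + b if op == '+' else a * b
    return env.get(node, node)   # leaf: variable name or constant

def size(node):
    # number of +/* gates in the formula
    if isinstance(node, tuple):
        return 1 + size(node[1]) + size(node[2])
    return 0

# f = (x + y) * (x + 2), a formula with 3 gates
f = ('*', ('+', 'x', 'y'), ('+', 'x', 2))
assert evaluate(f, {'x': 3, 'y': 1}) == 20 and size(f) == 3
```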

In this talk, we will restrict ourselves to arithmetic formulas where the computation at every vertex is a multilinear polynomial, and here we show explicit separations between the expressive powers of multilinear formulas of small depth and all polynomial sizes. The multilinear restriction is a reasonable one, since most polynomials of interest are indeed multilinear. Formally, for any s = poly(n) and \delta > 0, we show that there are explicit polynomial families that have multilinear formulas of size at most s but no 'small'-depth multilinear formulas of size less than s^{0.5 - \delta}. Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using \epsilon-biased spaces. (Joint work with Nutan Limaye and Srikanth Srinivasan. Appeared in the proceedings of ICALP 2018.)

Algebraic Branching Programs (ABPs), whose computational power lies between that of arithmetic formulas and arithmetic circuits, are standard models for computing polynomials. The absence of considerable progress on proving lower bounds for ABPs calls for further restrictions such as homogeneity and multilinearity. An ABP is said to be a syntactic multilinear ABP (smABP) if every variable appears as an edge label at most once on any path from source to sink. The best known size lower bound for smABPs is barely quadratic in the number of variables. Obtaining super-polynomial lower bounds for smABPs remains a challenging open problem in algebraic complexity theory.

By a simple divide and conquer approach, any smABP of polynomial size can be converted into an equivalent multilinear formula with a super-polynomial blow up in size. A crucial observation is that, although the above conversion of an smABP into a multilinear formula blows up the size, the resulting formula has far more structure than an arbitrary multilinear formula of super-polynomial size. In this talk, we try to identify and exploit the structural limitations of multilinear formulas thus obtained from smABPs. Using a finer analysis of these multilinear formulas, we outline a few approaches to prove super-polynomial lower bounds for smABPs.

It is a long-standing open problem in computer algebra to find a deterministic polynomial-time algorithm that factorizes a given univariate polynomial $f(X)$ over a finite field $\mathbb{F}_p$. Such an algorithm is not known even under the generalized Riemann hypothesis (GRH). In this talk, I will discuss a unifying approach for this problem based on a family of combinatorial objects called P-schemes. Our approach subsumes known GRH-based results and also leads to some new results. In particular, we show how to beat Evdokimov's algorithm when a polynomial $\tilde{f}(X)\in \mathbb{Z}[X]$ lifting $f(X)$ is given whose Galois group has a "linear" structure.

TBA

Friday, 29 March 2019

The Unique Games Conjecture, proposed by Subhash Khot [Khot 2002], is a complexity-theoretic assumption that, if true, yields optimal inapproximability results for several optimization problems. Since it was proposed in 2002, the theoretical computer science community has spent equal effort on proving and refuting the conjecture, and until recently was evenly divided in its belief in the conjecture. Recently, in a remarkable sequence of 4 papers, 5 researchers (Minzer, Khot, Safra, Dinur and Kindler) proved a weaker version of the conjecture, called the 2-to-2 Games Conjecture (or now, the 2-to-2 Games Theorem).

At the heart of their proof is a study of the Grassmann graph and its expansion properties. The Grassmann graph is a graph whose vertices are k-dimensional subspaces of GF(2)^m, and two subspaces are connected if they intersect in a (k-1)-dimensional subspace. This graph is known to have several small structured sets that do not expand. A crucial element of the proof goes towards showing that any set that does not have such an algebraic structure does in fact expand. In this survey talk, I'll explain the Unique Games Conjecture, its implications, the 2-to-2 Games Theorem, the Grassmann graph, its expansion properties, and some key steps in the proof.
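For a sense of scale, the number of vertices of the Grassmann graph is a Gaussian binomial coefficient; a quick sketch with small illustrative parameters:

```python
def gaussian_binomial(m, k, q=2):
    # number of k-dimensional subspaces of GF(q)^m,
    # i.e. the vertex count of the Grassmann graph on GF(q)^m
    num = den = 1
    for i in range(k):
        num *= q ** (m - i) - 1
        den *= q ** (k - i) - 1
    return num // den

# 2-dimensional subspaces of GF(2)^4
assert gaussian_binomial(4, 2) == 35
# 1-dimensional subspaces of GF(2)^3 (points of the Fano plane)
assert gaussian_binomial(3, 1) == 7
```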

No background (but for a familiarity with complexity theory) will be assumed.

Talk based on the following papers:

1. Subhash Khot, Dor Minzer, and Muli Safra. "On independent sets, 2-to-2 games, and Grassmann graphs", 2017.

2. Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra. "Towards a proof of the 2-to-1 games conjecture?", 2018.

3. Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra. "On non-optimally expanding sets in Grassmann graphs", 2018.

4. Subhash Khot, Dor Minzer, and Muli Safra. "Pseudorandom sets in Grassmann graph have near-perfect expansion", 2018.

We consider the special class of ideals where each generator is a univariate polynomial $p_i$ in the variable $x_i$. First we show a randomized $O(d^r \cdot \mathrm{poly}(n))$ algorithm for testing if a rank-$r$ polynomial is in a univariate ideal or not.

In a special case this lends itself to an algorithm for computing the Permanent of a matrix of rank $r$ which works over all fields and generalizes a result of Barvinok.

Next we discuss a result of Glynn that states that the Permanent of an $n \times n$ matrix of rank $r$ is 0 modulo $p$ when $p < \frac{n}{r}$, and we prove a more general theorem.

When the univariate ideal has only repeated roots, we show a randomized $O^*(4.08^d)$ algorithm, where $d$ is the degree of the input polynomial. When the generators of the ideal have distinct roots and the input polynomial is a depth-3 circuit, we show a deterministic $O^*(4^d)$ algorithm, where $d$ is the degree of the input polynomial.
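Since the generators $p_i(x_i)$ have pairwise coprime leading terms, they form a Gröbner basis of the ideal, so membership can be decided by iterated univariate division. Below is a naive sketch of that reduction (exponential in general, unlike the talk's algorithms), on sparse polynomials represented as {exponent tuple: coefficient}.

```python
def reduce_var(f, i, p):
    # reduce f modulo the monic univariate polynomial p(x_i),
    # given as a coefficient list with p[-1] == 1
    d = len(p) - 1
    f = dict(f)
    while True:
        term = next(((m, c) for m, c in f.items() if m[i] >= d and c != 0), None)
        if term is None:
            return {m: c for m, c in f.items() if c != 0}
        m, c = term
        del f[m]
        # substitute x_i^d = -(p[0] + p[1]*x_i + ... + p[d-1]*x_i^(d-1))
        for j in range(d):
            if p[j]:
                m2 = m[:i] + (m[i] - d + j,) + m[i + 1:]
                f[m2] = f.get(m2, 0) - c * p[j]

def in_ideal(f, gens):
    # gens[i] is a monic polynomial in x_i alone; remainder 0 <=> membership
    for i, p in enumerate(gens):
        f = reduce_var(f, i, p)
    return not f

# ideal <x^2 - 1, y^2 - y>; f = x^2*y - y^2 = y*(x^2 - 1) - (y^2 - y)
gens = [[-1, 0, 1], [0, -1, 1]]
assert in_ideal({(2, 1): 1, (0, 2): -1}, gens)
assert not in_ideal({(1, 1): 1}, gens)   # x*y is not in the ideal
```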

This is based on joint work with V. Arvind, Partha Mukhopadhyay and Abhranil Chatterjee.

Computing the (real) roots of a polynomial is an important problem in mathematics and theoretical computer science. We propose an efficient algorithm to compute the real roots of a sparse polynomial $f$ having $k$ non-zero real-valued coefficients. It is known that even for 4-nomials, one cannot hope for a polynomial (in the input size) time algorithm for isolating the real roots. Therefore we propose a slightly relaxed notion of isolation of real roots. For a given positive integer $L$, our algorithm returns disjoint disks $D_1,D_2,\ldots,D_s$ with $s<2k$, centered at the real axis and of radius less than $2^{-L}$, together with positive integers $u_1,\ldots,u_s$ such that each disk $D_i$ contains exactly $u_i$ roots of $f$ counted with multiplicity. In addition, it is ensured that each real root of $f$ is contained in one of the disks. If $f$ has only simple real roots, our algorithm can also be used to isolate all real roots. The bit complexity of our algorithm is polynomial in $k$ and $\log(n)$, and near-linear in $L$ and $\tau$, where $2^{-\tau}$ and $2^{\tau}$ constitute lower and upper bounds on the absolute values of the non-zero coefficients of $f$, and $n$ is the degree of $f$. For root isolation, the bit complexity is polynomial in $k$ and $\log(n)$, and near-linear in $\tau$ and $\log(1/\sigma)$, where $\sigma$ denotes the separation of the real roots. We also show that the roots of integer trinomials are well separated (this was also independently proved by Koiran). By using our algorithm, it follows that the real roots of trinomials can be isolated in polynomial time. This is joint work with Prof. Michael Sagraloff (Max-Planck-Institut für Informatik and University of Applied Sciences Landshut).
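A crude illustration of the relaxed isolation notion (bisection on sign changes; our own toy sketch, not the paper's algorithm, and it assumes simple, well-separated roots): shrink every sign-change interval to width below $2^{-L}$.

```python
def bisect(f, a, b, L):
    # shrink a sign-change interval [a, b] to width < 2**-L
    while b - a >= 2.0 ** -L:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a, b)

def isolate(f, lo, hi, L, grid=1024):
    # detect sign changes on a uniform grid, then bisect each;
    # assumes simple roots separated by more than the grid step
    step = (hi - lo) / grid
    out = []
    for i in range(grid):
        a, b = lo + i * step, lo + (i + 1) * step
        if f(a) * f(b) < 0:
            out.append(bisect(f, a, b, L))
    return out

# the trinomial x^5 - 3x + 1 (3 nonzero terms) has three real roots in [-2, 2]
f = lambda x: x ** 5 - 3 * x + 1
disks = isolate(f, -2.0, 2.0, L=20)
assert len(disks) == 3
assert all(b - a < 2.0 ** -20 for a, b in disks)
```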

Finding an irreducible factor of a polynomial $f(x)$ modulo a prime $p$ is not known to be in deterministic polynomial time, although there is a classical deterministic algorithm that {\em counts} the number of irreducible factors of $f \bmod p$. We can ask the same question modulo prime powers $p^k$. The irreducible factors of $f \bmod p^k$ blow up exponentially in number, making it hard to describe them. Can we count those irreducible factors $\bmod~p^k$ that remain irreducible mod $p$? These are called {\em basic-irreducible}. A simple example is $f=x^2+px \bmod p^2$; it has $p$ many basic-irreducible factors. Also note that $x^2+p \bmod p^2$ is irreducible but not basic-irreducible!

We give an algorithm to count the number of basic-irreducible factors of $f\bmod p^k$ in deterministic poly($\deg(f),k\log p$)-time. This solves the open questions posed in (Cheng et al, ANTS'18 \& Kopp et al, Math.Comp.'19). In particular, we are counting roots $\bmod\ p^k$; which gives the first deterministic poly-time algorithm to compute Igusa zeta function of $f$. Also, our algorithm efficiently partitions the set of all basic-irreducible factors (possibly exponential) into merely $\deg(f)$-many disjoint sets, using a compact tree data structure and {\em split} ideals.
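The two small examples from the abstract can be checked by brute force (a sketch that counts roots directly, rather than running the talk's algorithm):

```python
def roots_mod(f, M):
    # all residues r with f(r) = 0 (mod M), by brute force
    return [r for r in range(M) if f(r) % M == 0]

p = 5

# x^2 + p*x mod p^2: every multiple of p is a root, giving p
# basic-irreducible (linear) factors x - r
r1 = roots_mod(lambda x: x * x + p * x, p ** 2)
assert r1 == [0, 5, 10, 15, 20]   # exactly p roots

# x^2 + p mod p^2: irreducible, hence no roots at all
r2 = roots_mod(lambda x: x * x + p, p ** 2)
assert r2 == []
```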