Publications, papers, surveys…
Here I list my papers. Their copyright belongs to the respective publishers. All papers stored locally on this page are preprints, essentially equivalent to the published versions, and may be downloaded for personal or academic purposes only. Some of my papers are also indexed and stored at the following locations:
 Electronic Colloquium on Computational Complexity
 arXiv
 Google Scholar
 Digital Bibliography and Library Project (DBLP)
With a modern browser, any math should render properly: if \(e^x\)
shows as $e^x$, then your browser lacks the capabilities needed
to render math.
Conference

Clique Is Hard on Average for Regular Resolution (2018). In: 50th ACM Symposium on Theory of Computing (STOC 2018). pp. TBD.
We prove that for $k \ll \sqrt[4]{n}$ regular resolution requires length $n^{\Omega(k)}$ to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a $k$-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional $n^{\Omega(k)}$ lower bounds on the running time of several state-of-the-art algorithms for finding maximum cliques in graphs.

Graph Colouring is Hard for Algorithms Based on Hilbert's Nullstellensatz and Gröbner Bases (2017). In: 32nd Computational Complexity Conference (CCC 2017). pp. 2:1–2:20.
We consider the natural encoding of the $k$-colouring problem for a
given graph as a set of polynomial equations.
We prove that there are bounded-degree graphs that do not have legal
$k$-colourings but for which the polynomial calculus proof system
defined in [Clegg et al '96, Alekhnovich et al '02]
requires linear degree, and hence exponential size, to establish
this fact. This implies a linear degree lower bound for any algorithm based
on Gröbner bases solving $k$-colouring problems using this encoding.
The same bound also applies to the algorithm studied in a sequence of papers
[De Loera et al '08, '09, '11, '15]
based on Hilbert's Nullstellensatz proofs for a slightly
different encoding, thus resolving an open problem mentioned, e.g., in
[De Loera et al '09] and [Li et al '16].
We obtain our results by combining the polynomial calculus degree
lower bound for functional pigeonhole principle (FPHP) formulas over
bounded-degree bipartite graphs in [Mikša and Nordström '15]
with a reduction from FPHP to $k$-colouring derivable by polynomial
calculus in constant degree.
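For readers who want to experiment, here is a small illustrative sketch (in Python, not the encoding used in the paper, whose details may differ) of one standard way to express $k$-colouring as polynomial equations over Boolean variables, together with a brute-force satisfiability check that is feasible only for tiny graphs:

```python
from itertools import product

def coloring_equations(vertices, edges, k):
    """One common polynomial encoding of k-colouring with Boolean
    variables x[v][c], meaning 'vertex v gets colour c':
      - each vertex has at least one colour:  sum_c x[v][c] - 1 = 0
      - no vertex has two colours:            x[v][c] * x[v][d] = 0  (c != d)
      - adjacent vertices differ:             x[u][c] * x[v][c] = 0
    Returns a list of callables evaluating each polynomial."""
    eqs = []
    for v in vertices:
        eqs.append(lambda x, v=v: sum(x[v][c] for c in range(k)) - 1)
        for c in range(k):
            for d in range(c + 1, k):
                eqs.append(lambda x, v=v, c=c, d=d: x[v][c] * x[v][d])
    for (u, v) in edges:
        for c in range(k):
            eqs.append(lambda x, u=u, v=v, c=c: x[u][c] * x[v][c])
    return eqs

def has_coloring(vertices, edges, k):
    """Brute force: is there a 0/1 point where all equations vanish?"""
    eqs = coloring_equations(vertices, edges, k)
    for bits in product([0, 1], repeat=len(vertices) * k):
        x = {v: bits[i * k:(i + 1) * k] for i, v in enumerate(vertices)}
        if all(e(x) == 0 for e in eqs):
            return True
    return False

# A triangle is 3-colourable but not 2-colourable.
triangle = [(0, 1), (1, 2), (0, 2)]
print(has_coloring([0, 1, 2], triangle, 2))  # False
print(has_coloring([0, 1, 2], triangle, 3))  # True
```

The common zeros of these polynomials are exactly the legal $k$-colourings, so refuting the system amounts to proving non-colourability.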

CNFgen: A generator of crafted benchmarks (2017). In: Theory and Applications of Satisfiability Testing – SAT 2017. pp. 464–473.
We present CNFgen, a generator of combinatorial benchmarks in DIMACS and OPB format. The proof complexity literature is a rich source not only of hard instances but also of instances that are theoretically easy but “extremal” in different ways, and therefore of potential interest in the context of SAT solving. Since most of these formulas appear not to be very well known in the SAT community, however, we propose CNFgen as a resource to make them readily available for solver development and evaluation. Many formulas studied in proof complexity are based on graphs, and CNFgen is also able to generate, parse and do basic manipulation of such objects. Furthermore, it includes a library cnfformula giving access to the functionality of CNFgen to Python programs.
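To give a flavour of the DIMACS output such benchmarks come in, here is a tiny self-contained generator (an illustration only, not the CNFgen or cnfformula API) for the pigeonhole principle family, one of the classical proof complexity benchmarks:

```python
def php_dimacs(m, n):
    """Pigeonhole principle PHP^m_n in DIMACS CNF format.
    Variable p(i,j), 'pigeon i sits in hole j', is numbered i*n + j + 1.
    Clauses: every pigeon sits in some hole; no hole hosts two pigeons.
    The formula is unsatisfiable whenever m > n."""
    var = lambda i, j: i * n + j + 1
    clauses = []
    for i in range(m):                      # pigeon i is in some hole
        clauses.append([var(i, j) for j in range(n)])
    for j in range(n):                      # hole j holds at most one pigeon
        for i1 in range(m):
            for i2 in range(i1 + 1, m):
                clauses.append([-var(i1, j), -var(i2, j)])
    lines = ["p cnf %d %d" % (m * n, len(clauses))]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

print(php_dimacs(3, 2))  # header line: "p cnf 6 9"
```

The real tool generates many more families (Tseitin formulas, ordering principles, clique formulas, …) and handles the graph inputs they need.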
A long line of research has studied the power of conflict-driven clause learning (CDCL) and how it compares to the resolution proof system in which it searches for proofs. It has been shown that CDCL can polynomially simulate resolution, even when the learning scheme is adversarially chosen, as long as it is asserting. However, the simulation only works under the assumption that no learned clauses are ever forgotten, and the polynomial blow-up is significant. Moreover, the simulation requires very frequent restarts, whereas the power of CDCL with less frequent restarts, or with no restarts at all, remains poorly understood. With a view towards obtaining results with tighter relations between CDCL and resolution, we introduce a more fine-grained model of CDCL that captures not only time but also memory usage and number of restarts. We show how previously established strong size-space trade-offs for resolution can be transformed into equally strong trade-offs between time and memory usage for CDCL, where the upper bounds hold for CDCL without any restarts using the standard 1UIP clause learning scheme, and the (in some cases tightly matching) lower bounds hold for arbitrarily frequent restarts and arbitrary clause learning schemes.

Semantic Versus Syntactic Cutting Planes (2016). In: 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). pp. 35:1–35:13.
In this paper, we compare the strength of the semantic and syntactic
version of the cutting planes proof system.
First, we show that the lower bound technique of Pudlák also applies
to semantic cutting planes: the proof system has feasible
interpolation via monotone real circuits, which gives an exponential
lower bound on the length of semantic cutting planes refutations.
Second, we show that semantic refutations are stronger than
syntactic ones. In particular, we give a formula for which any
refutation in syntactic cutting planes requires exponential length,
while there is a polynomial length refutation in semantic cutting
planes. In other words, syntactic cutting planes does not p-simulate
semantic cutting planes. We also give two incompatible integer
inequalities that require exponential-length refutations in
syntactic cutting planes.
Finally, we pose the following problem, which arises in connection with
semantic inference of arity larger than two: can every multivariate
nondecreasing real function be expressed as a composition of
nondecreasing real functions in two variables?

Hardness of Approximation in PSPACE and Separation Results for Pebble Games (2015). In: Proc. of the 56th IEEE Symposium on Foundations of Computer Science (FOCS), 2015. pp. 466–485.
We consider the pebble game on DAGs with bounded fan-in introduced in [Paterson and Hewitt '70], and the reversible version of this game introduced in [Bennett '89], and study the question of how hard it is to decide exactly or approximately the number of pebbles needed for a given DAG in these games.
We prove that the problem of deciding whether s pebbles suffice to reversibly pebble a DAG G is PSPACE-complete, as was previously shown for the standard pebble game in [Gilbert, Lengauer and Tarjan '80]. Via two different graph product constructions we then strengthen these results to establish that both standard and reversible pebbling space are PSPACE-hard to approximate to within any additive constant. To the best of our knowledge, these are the first hardness of approximation results for pebble games in an unrestricted setting (even for polynomial time). Also, since [Chan '13] proved that reversible pebbling is equivalent to the games in [Dymond and Tompa '85] and [Raz and McKenzie '99], our results apply to the Dymond–Tompa and Raz–McKenzie games as well, and from the same paper it follows that resolution depth is PSPACE-hard to determine up to any additive constant.
We also obtain a multiplicative logarithmic separation between reversible and standard pebbling space. This improves on the additive logarithmic separation previously known and could plausibly be tight, although we are not able to prove this.
We leave as an interesting open problem whether our additive hardness of approximation result could be strengthened to a multiplicative bound if the computational resources are decreased from polynomial space to the more common setting of polynomial time.
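As a toy illustration of the standard (black) pebble game discussed above, the pebbling price of a tiny DAG can be found by brute-force search over configurations; this sketch assumes the usual rules (place a pebble on a node once all its predecessors are pebbled, remove a pebble at any time, no sliding), which vary slightly across the literature:

```python
from collections import deque

def min_pebbles(preds, sink):
    """Minimum number of pebbles needed to pebble `sink` in the
    standard black pebble game.  Brute-force BFS over pebbling
    configurations -- feasible only for very small DAGs.
    `preds` maps each node to the list of its predecessors."""
    nodes = list(preds)
    for budget in range(1, len(nodes) + 1):
        seen = {frozenset()}
        queue = deque(seen)
        while queue:
            conf = queue.popleft()
            moves = [conf - {v} for v in conf]           # removals
            moves += [conf | {v} for v in nodes           # placements
                      if v not in conf
                      and len(conf) < budget
                      and all(p in conf for p in preds[v])]
            for nxt in moves:
                if sink in nxt:
                    return budget
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return None

# Two leaves feeding one root: both leaves must hold pebbles while a
# third pebble goes on the root, so 3 pebbles are needed.
tree = {"l1": [], "l2": [], "root": ["l1", "l2"]}
print(min_pebbles(tree, "root"))  # 3
```

The PSPACE-hardness results above say precisely that no such exhaustive search can be avoided in general, even approximately up to additive constants.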
During the last decade, an active line of research in proof
complexity has concerned the space complexity of proofs and how
space is related to other measures. By now these aspects of
resolution are fairly well understood, but many open problems remain
for the related but stronger polynomial calculus (PC/PCR) proof
system. For instance, the space complexity of many standard
“benchmark formulas” is still open, as well as the relation of
space to size and degree in PC/PCR.
We prove that if a formula requires large resolution width, then
applying XOR substitution yields a formula requiring large PCR space,
providing some circumstantial evidence that degree might be a lower
bound for space. More importantly, this immediately yields formulas
that are very hard for space but very easy for size, exhibiting a
size-space separation similar to what is known for resolution. Using
related ideas, we show that if a graph has good expansion and in
addition its edge set can be partitioned into short cycles, then the
Tseitin formula over this graph requires large PCR space. In
particular, Tseitin formulas over random $4$regular graphs
almost surely require space at least $\Omega(\sqrt{n})$.
Our proofs use techniques recently introduced in
[Bonacina and Galesi '13]. Our final contribution, however, is to show
that these techniques provably cannot yield non-constant space lower
bounds for the functional pigeonhole principle, delineating the
limitations of this framework and suggesting that we are still far
from characterizing PC/PCR space.
Given a set $S$ of points (stations) located in the $d$-dimensional Euclidean space and a root $b \in S$, the $h$-$\mathsf{hops\ Convergecast}$ problem asks for a minimal energy-cost range assignment that allows one to perform the convergecast primitive (i.e. node accumulation) towards $b$ in at most $h$ hops. For this problem no polynomial time algorithm is known even for $h = 2$.
The main goal of this work is the design of an efficient distributed heuristic (i.e. protocol) and the analysis (both theoretical and experimental) of its expected solution cost. In particular, we introduce an efficient parameterized randomized protocol for $h$-$\mathsf{hops\ Convergecast}$ and we analyze it on random instances created by placing $n$ points uniformly at random in a $d$-cube of side length $L$. We prove that for $h = 2$ its expected approximation ratio is bounded by a constant factor. Finally, for $h = 3, \ldots, 8$, we provide a wide experimental study showing that our protocol performs very well when compared with previously introduced (centralized) heuristics.
Journal

A note about $k$-DNF resolution (2018). In: Information Processing Letters. 137(TBD):33–39.
In this note we show two results about $k$-DNF resolution. First we prove that resolution extended with parities of size $k$ is strictly weaker than $k$-DNF resolution. Then we show the connection between tree-like $k$-DNF resolution and narrow dag-like resolution. This latter result is clearly implicit in [Krajíček 1994], but our direct proof is focused on resolution and provides information about refutation width.

On semantic cutting planes proofs with very small coefficients (2018). In: Information Processing Letters. 136(TBD):70–75.
Cutting planes proofs for integer programs can naturally be defined both in a syntactic and in a semantic fashion. Filmus et al. (STACS 2016) proved that semantic cutting planes proofs may be exponentially stronger than syntactic ones, even if they use the semantic rule only once. We show that when semantic cutting planes proofs are restricted to have coefficients bounded by a function growing slowly enough, syntactic cutting planes can simulate them efficiently. Furthermore, if we strengthen the restriction to a constant bound, then the simulating syntactic proof even has polynomially small coefficients.

Cliques enumeration and tree-like resolution proofs (2018). In: Information Processing Letters. 135(TBD):62–67.
We show the close connection between the enumeration of cliques in a $k$-clique-free graph $G$ and the length of tree-like resolution refutations for the formula $\mathrm{Clique}(G,k)$, which claims that $G$ has a $k$-clique. The length of any such tree-like refutation is within a "fixed parameter tractable" factor of the number of cliques in the graph. We then proceed to drastically simplify the proofs of the lower bounds for the length of tree-like resolution refutations of $\mathrm{Clique}(G,k)$ shown in [Beyersdorff et al. 2013, Lauria et al. 2017], which now reduce to a simple estimate of the number of cliques.

Tight Size-Degree Bounds for Sums-of-Squares Proofs (2017). In: Computational Complexity. 26(4):911–948. Conference version published at CCC, 2015.
We exhibit families of $4$-CNF formulas over $n$ variables that have sums-of-squares (SOS) proofs of unsatisfiability of degree (a.k.a. rank) $d$ but require SOS proofs of size $n^{\Omega(d)}$ for values of $d = d(n)$ from constant all the way up to $n^{\delta}$ for some universal constant $\delta$. This shows that the $n^{O(d)}$ running time obtained by using the Lasserre semidefinite programming relaxations to find degree-$d$ SOS proofs is optimal up to constant factors in the exponent. We establish this result by combining $\mathsf{NP}$-reductions expressible as low-degree SOS derivations with the idea of relativizing CNF formulas in [Krajíček '04] and [Dantchev and Riis '03], and then applying a restriction argument as in [Atserias, Müller, and Oliva '13] and [Atserias, Lauria, and Nordström '14]. This yields a generic method of amplifying SOS degree lower bounds to size lower bounds, and also generalizes the approach in [ALN14] to obtain size lower bounds for the proof systems resolution, polynomial calculus, and Sherali–Adams from lower bounds on width, degree, and rank, respectively.

A Rank Lower Bound for Cutting Planes Proofs of Ramsey's Theorem (2016). In: ACM Trans. Comput. Theory. 8(4):17:1–17:13. Conference version published at SAT, 2013.
Ramsey's Theorem is a cornerstone of combinatorics and logic. In its simplest formulation it says that for every $k > 0$ and $s > 0$ there is a minimum number $r(k, s)$ such that any simple graph with at least $r(k, s)$ vertices contains either a clique of size $k$ or an independent set of size $s$. We study the complexity of proving upper bounds for the number $r(k, k)$. In particular, we focus on the propositional proof system cutting planes; we show that any cutting planes proof of the upper bound “$r(k, k) \leq 4^k$” requires high rank. In order to do that we show a protection lemma which could be of independent interest.
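As a toy illustration of the quantity $r(k, s)$ itself (not of the cutting planes result), one can verify by brute force that $r(3, 3) > 5$ using the classical colouring of $K_5$ in which the pentagon is red and the pentagram is blue:

```python
from itertools import combinations

def has_mono_triangle(red_edges, n):
    """Does this red/blue edge colouring of the complete graph K_n
    contain a monochromatic triangle (a red or a blue 3-clique)?"""
    red = {frozenset(e) for e in red_edges}
    for t in combinations(range(n), 3):
        colours = {frozenset(p) in red for p in combinations(t, 2)}
        if len(colours) == 1:          # all three edges the same colour
            return True
    return False

# Pentagon red, pentagram blue: both colour classes are 5-cycles and
# hence triangle-free, so this colouring witnesses r(3, 3) > 5.
pentagon = [(i, (i + 1) % 5) for i in range(5)]
print(has_mono_triangle(pentagon, 5))  # False
```

Since every 2-colouring of $K_6$ does contain a monochromatic triangle, $r(3, 3) = 6$; it is upper bounds of this kind, in the general form $r(k, k) \leq 4^k$, whose cutting planes proofs the paper analyzes.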

Narrow Proofs May Be Maximally Long (2016). In: ACM Trans. Comput. Logic. 17(3):19:1–19:30. Conference version published at CCC, 2014.
We study the proof complexity of Paris–Harrington's Large Ramsey Theorem for bi-colorings of graphs and of off-diagonal Ramsey's Theorem. For Paris–Harrington we prove a non-trivial conditional lower bound in Resolution and a non-trivial upper bound in bounded-depth Frege. The lower bound is conditional on a (very reasonable) hardness assumption for a weak (quasi-polynomial) Pigeonhole principle in $\mathsf{Res}(2)$. We show that under such an assumption, there is no refutation of the Paris–Harrington formulas of size quasi-polynomial in the number of propositional variables. The proof technique for the lower bound extends the idea of using a combinatorial principle to blow up a counterexample for another combinatorial principle beyond the threshold of inconsistency. A strong link with the proof complexity of an unbalanced off-diagonal Ramsey principle is established. This is obtained by adapting some constructions due to Erdős and Mills. We prove a non-trivial Resolution lower bound for a family of such off-diagonal Ramsey principles.

The Complexity of Proving that a Graph is Ramsey (2017). In: Combinatorica. 37(2):253–268. Conference version published at ICALP, 2013.
We say that a graph with $n$ vertices is $c$-Ramsey if it does not contain either a clique or an independent set of size $c \log n$. We define a CNF formula which expresses this property for a graph $G$. We show a superpolynomial lower bound on the length of resolution proofs that $G$ is $c$-Ramsey, for every graph $G$. Our proof makes use of the fact that every Ramsey graph must contain a large subgraph with some of the statistical properties of the random graph.

Space Complexity in Polynomial Calculus (2015). In: SIAM Journal on Computing. 44(4):1119–1153. Conference version published at CCC, 2012.
During the last decade, an active line of research in proof complexity has been to study space complexity and time-space trade-offs for proofs. Besides being a natural complexity measure of intrinsic interest, space is also an important issue in SAT solving, and so research has mostly focused on weak systems that are used by SAT solvers.
There has been a relatively long sequence of papers on space in resolution and resolution-based proof systems, and it is probably fair to say that resolution is reasonably well understood from this point of view. For other natural candidates to study, however, such as polynomial calculus or cutting planes, very little has been known. We are not aware of any nontrivial space lower bounds for cutting planes, and for polynomial calculus the only lower bound has been for CNF formulas of unbounded width in [Alekhnovich et al. '02], where the space lower bound is smaller than the initial width of the clauses in the formulas. Thus, in particular, it has been consistent with current knowledge that polynomial calculus could be able to refute any $k$-CNF formula in constant space.
In this paper, we prove several new results on space in polynomial calculus (PC), and in the extended proof system polynomial calculus resolution (PCR) studied in [Alekhnovich et al. '02]:
 We prove an $\Omega(n)$ space lower bound in PC for the canonical $3$-CNF version of the pigeonhole principle formulas $\mathsf{PHP}_n^m$ with $m$ pigeons and $n$ holes, and show that this is tight.
 For PCR, we prove an $\Omega(n)$ space lower bound for a bitwise encoding of the functional pigeonhole principle with $m$ pigeons and $n$ holes. These formulas have width $O(\log n)$, and so this is an exponential improvement over [Alekhnovich et al. '02] measured in the width of the formulas.
 We then present another encoding of a version of the pigeonhole principle that has constant width, and prove an $\Omega(n)$ space lower bound in PCR for these formulas as well.
 Finally, we prove that any $k$-CNF formula can be refuted in PC in simultaneous exponential size and linear space (which holds for resolution and thus for PCR, but was not obviously the case for PC). We also characterize a natural class of CNF formulas for which the space complexity in resolution and PCR does not change when the formula is transformed into a $3$-CNF in the canonical way, something that we believe can be useful when proving PCR space lower bounds for other well-studied formula families in proof complexity.

From Small Space to Small Width in Resolution (2015). In: ACM Transactions on Computational Logic. 16(4):28:1–28:15. Conference version published at STACS, 2014.

Parameterized Complexity of DPLL Search Procedures (2013). In: ACM Transactions on Computational Logic. 14(3):20. Conference version appeared at SAT, 2011.
We study the performance of DPLL algorithms on parameterized problems. In particular, we investigate how difficult it is to decide whether small solutions exist for satisfiability and combinatorial problems. For this purpose we develop a Prover-Delayer game which models the running time of DPLL procedures, and we establish an information-theoretic method to obtain lower bounds on the running time of parameterized DPLL procedures. We illustrate this technique by showing lower bounds for the parameterized pigeonhole principle and for the ordering principle. As our main application we study the DPLL procedure for the problem of deciding whether a graph has a small clique. We show that proving the absence of a $k$-clique requires $n^{\Omega(k)}$ steps for a non-trivial distribution of graphs close to the critical threshold. For the restricted case of tree-like Parameterized Resolution, this result answers a question asked in Beyersdorff et al. (2012) of understanding the Resolution complexity of this family of formulas.
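For intuition about the algorithms being modelled, here is a bare-bones DPLL procedure (a generic sketch with unit propagation and naive branching, not the specific parameterized procedures analyzed in the paper):

```python
def dpll(clauses):
    """Minimal DPLL satisfiability test.  Clauses are lists of nonzero
    integers, with negative literals denoting negated variables.
    Returns True iff the formula is satisfiable."""
    clauses = [list(c) for c in clauses]
    # Unit propagation: repeatedly satisfy forced literals.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:
                continue                       # clause satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return False                   # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return True
    # Branch on the first literal of the first remaining clause.
    lit = clauses[0][0]
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

# Pigeonhole with 2 pigeons and 1 hole: unsatisfiable.
print(dpll([[1], [2], [-1, -2]]))  # False
print(dpll([[1, 2], [-1, 2]]))     # True
```

The branching decisions of such a procedure trace out exactly a tree-like Resolution refutation on unsatisfiable inputs, which is why Prover-Delayer games on that tree yield running-time lower bounds.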

A Characterization of Tree-Like Resolution Size (2013). In: Information Processing Letters. 113(18):666–671.
We explain an asymmetric Prover-Delayer game which precisely characterizes proof size in tree-like Resolution. This game was previously described in a parameterized complexity context to show lower bounds for parameterized formulas and for the classical pigeonhole principle. The main point of this note is to show that the asymmetric game in fact characterizes tree-like Resolution proof size, i.e. in principle our proof method makes it possible to always achieve the optimal lower bounds. This is in contrast with previous techniques described in the literature. We also provide a very intuitive information-theoretic interpretation of the game.
A general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider [FOCS, 2007]. In that framework the parameterized version of any proof system is not fpt-bounded for some technical reasons, but we remark that this question becomes much more interesting if we restrict ourselves to those parameterized contradictions $(F,k)$ in which $F$ itself is a contradiction. We call such parameterized contradictions strong, and with one important exception (vertex cover) all interesting contradictions we are aware of are strong. It follows from the gap complexity theorem of Dantchev, Martin and Szeider [FOCS, 2007] that tree-like Parameterized Resolution is not fpt-bounded w.r.t. strong parameterized contradictions.
The main result of this paper significantly improves upon this by showing that even the parameterized version of bounded-depth Frege is not fpt-bounded w.r.t. strong contradictions. More precisely, we prove that the pigeonhole principle requires proofs of size $n^{\Omega(k)}$ in bounded-depth Frege, and, as a special case, in dag-like Parameterized Resolution. This answers an open question posed in Dantchev, Martin and Szeider [FOCS, 2007].
In the opposite direction, we interpret a well-known $\mathsf{FPT}$ algorithm for vertex cover as a DPLL procedure for Parameterized Resolution. Its generalization leads to a proof search algorithm for Parameterized Resolution which in particular shows that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNFs.

A Lower Bound for the Pigeonhole Principle in Tree-like Resolution by Asymmetric Prover-Delayer Games (2010). In: Information Processing Letters. 110(23):1074–1077.

Optimality of size-degree trade-offs for polynomial calculus (2010). In: ACM Transactions on Computational Logic. 12(1):4:1–4:22.
There are methods to turn short refutations in Polynomial Calculus ($\mathsf{Pc}$) and Polynomial Calculus with Resolution ($\mathsf{Pcr}$) into refutations of low degree. Bonet and Galesi [Bonet and Galesi 1999, 2003] asked whether such size-degree trade-offs for $\mathsf{Pc}$ [Clegg et al. 1996; Impagliazzo et al. 1999] and $\mathsf{Pcr}$ [Alekhnovich et al. 2004] are optimal.
We answer this question by showing a polynomial encoding of the Graph Ordering Principle on $m$ variables which requires $\mathsf{Pc}$ and $\mathsf{Pcr}$ refutations of degree $\Omega(\sqrt{m})$. The optimality of the trade-offs follows from our result and from the short refutations of the Graph Ordering Principle in [Bonet and Galesi 1999, 2001].
We then introduce the algebraic proof system $\mathsf{Pcr}_{k}$, which combines Polynomial Calculus ($\mathsf{Pc}$) and $k$-DNF Resolution ($\mathsf{Res}_{k}$). We show a size hierarchy theorem for $\mathsf{Pcr}_{k}$: $\mathsf{Pcr}_{k}$ is exponentially separated from $\mathsf{Pcr}_{k+1}$. This follows from the previous degree lower bound and from techniques developed for $\mathsf{Res}_{k}$.
Finally we show that random $3$-CNF formulas are hard to refute in $\mathsf{Pcr}_{k}$.

On the Automatizability of Polynomial Calculus (2010). In: Theory of Computing Systems. 47(2):491–506. Errata: a few typos in equation (13) and the subsequent text, fixed only in the local version.
We prove that Polynomial Calculus and Polynomial Calculus with Resolution are not automatizable, unless $W[P]$-hard problems are fixed parameter tractable by one-sided error randomized algorithms. This extends to Polynomial Calculus the analogous result obtained for Resolution by Alekhnovich and Razborov (SIAM J. Computing, 38(4), 2008).

Minimum-Energy Broadcast and disk cover in grid wireless networks (2008). In: Theoretical Computer Science. 399(1–2):38–53. Conference version appeared at SIROCCO, 2006.
The Minimum-Energy Broadcast problem is to assign a transmission range to every station of an ad hoc wireless network so that (i) a given source station is allowed to perform broadcast operations and (ii) the overall energy consumption of the range assignment is minimized.
We prove a nearly tight asymptotic bound on the optimal cost for the Minimum-Energy Broadcast problem on square grids. We also derive near-tight bounds for the Bounded-Hop version of this problem. Our results imply that the best-known heuristic for the Minimum-Energy Broadcast problem, the MST-based one, is far from achieving optimal solutions (even) on very regular, well-spread instances: its worst-case approximation ratio is about $\pi$ and it yields $\Omega(\sqrt{n})$ source hops, where $n$ is the number of stations.
As a by-product, we get nearly tight bounds for the Minimum Disk Cover problem and for its restriction in which the allowed disks must have non-constant radius.
Finally, we emphasize that our upper bounds are obtained via polynomial time constructions.
The $d$-Dim $h$-hops MST problem is defined as follows: given a set $S$ of points in the $d$-dimensional Euclidean space and $s \in S$, find a minimum-cost spanning tree for $S$ rooted at $s$ with height at most $h$. We investigate the problem for any constants $h$ and $d > 0$. We prove the first non-trivial lower bound on the solution cost for almost all Euclidean instances (i.e. the lower bound holds with high probability). Then we introduce an easy-to-implement, very fast divide-and-conquer heuristic and we prove that its solution cost matches the lower bound.
Manuscripts (not peer reviewed)

Circular (Yet Sound) Proofs (2018).
We introduce a new way of composing proofs in rule-based proof systems that generalizes tree-like and dag-like proofs. In the new definition, proofs are directed graphs of derived formulas, in which cycles are allowed as long as every formula is derived at least as many times as it is required as a premise. We call such proofs circular. We show that, for all sets of standard inference rules, circular proofs are sound. For Frege we show that circular proofs can be converted into tree-like ones with at most polynomial overhead. For Resolution the translation can no longer be a Resolution proof because, as we show, the pigeonhole principle has circular Resolution proofs of polynomial size. Surprisingly, as proof systems for deriving clauses from clauses, Circular Resolution turns out to be equivalent to Sherali–Adams, a proof system for reasoning through polynomial inequalities that has linear programming at its base. As corollaries we get: 1) polynomial-time (LP-based) algorithms that find circular Resolution proofs of constant width, 2) examples that separate circular from dag-like Resolution, such as the pigeonhole principle and its variants, and 3) exponentially hard cases for circular Resolution.

Short Res*(polylog) Refutations if and only if Narrow Res Refutations (2011). Superseded by the paper A note about $k$-DNF resolution (2018).
In this note we show that any $k$-CNF which can be refuted by a quasi-polynomial $\mathsf{Res}^{*}(\mathsf{polylog})$ refutation has a “narrow” refutation in $\mathsf{Res}$ (i.e., of polylogarithmic width). Notice that while $\mathsf{Res}^{*}(\mathsf{polylog})$ is a complete proof system, this is not the case for $\mathsf{Res}$ if we ask for a narrow refutation. In particular, it is not even possible to express all CNFs with narrow clauses. But even for constant-width CNFs the former system is complete and the latter is not (see for example Bonet, Galesi 2001). We are going to show that the formulas “left out” are the ones which require large $\mathsf{Res}^{*}(\mathsf{polylog})$ refutations. We also show the converse implication: a narrow Resolution refutation can be simulated by a short $\mathsf{Res}^{*}(\mathsf{polylog})$ refutation.
Thesis

Ph.D. thesis Degree lower bounds for Algebraic Proof System (2008). Sapienza – Università di Roma.
Given a propositional formula $\phi$ we are interested in studying how difficult it is to prove that $\phi$ is a tautology. Multivariate polynomials can be manipulated in a specific way to obtain such a proof. We study the power of this algebraic manipulation. In particular, its running time is deeply related to the degree of the polynomials involved. We prove that this method behaves very badly in some cases, by showing that there are tautologies with very short proofs but which require high degree polynomials.
This does not exclude that another method of polynomial manipulation could lead to an efficient algorithm. In this case we are able to prove that if such an efficient algorithm exists, it could be used to solve problems believed to be very hard.
The algebraic proof systems discussed so far are limited. We define a slightly stronger system $\mathsf{Pcr}_{k}$. We give examples of tautologies that are easy for this system, but very hard for systems known in the literature. Nevertheless we show that a random tautology is still very hard to prove for $\mathsf{Pcr}_{k}$.