Project Details

Structural Complexity Theory for Big Data – From Classification Theorems to Algorithm Engineering

Subject Area Theoretical Computer Science
Term since 2021
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 462679611
 
In light of increasingly large data sets to analyze, a central task for modern complexity theory is to establish novel methods for proving the computational difficulty of problems that face immense amounts of data. The theory of fine-grained reductions in P commends itself as a promising candidate, not least due to contributions of the applicant. This raises the question: Can this theory truly serve as an analogue of NP-hardness in the big-data regime? To provide such an analogue, perhaps the main theoretical challenge is to advance the current, highly problem-specific state of the art to a more encompassing theory. Consequently, the main goal of this proposal is to establish classification theorems for classes of polynomial-time problems, akin to Schaefer's Theorem and the subsequent Dichotomy Theorem, which impressively demonstrate the explanatory power of NP-hardness for Constraint Satisfaction Problems. The particular focus for this task is on the class of model-checking problems for first-order properties, which are central to logic, theoretical computer science, and information systems, and most notably concern the very foundations of relational databases. A systematic classification, even if only partially successful, can be expected to structure the complexity landscape of polynomial-time problems significantly more than isolated results could, and it will likely pinpoint (and thus enable progress on) major technical challenges. Given the centrality of first-order properties, and taking a pragmatic perspective on the main question, this proposal also seeks to closely complement the resulting theoretical advances with practical investigations: How can we use the developed fine-grained reductions in P to engineer algorithms for large data sets, particularly in information retrieval, geographical information systems, and graph theory?
A comprehensive study may reveal new techniques for aspects such as benchmark generation, relational database query optimization, and general algorithm design on large data sets.
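As an illustrative aside (not part of the project description): a canonical example of a first-order model-checking problem studied in fine-grained complexity is Orthogonal Vectors, expressible by the sentence "there exist vectors a in A and b in B such that for all coordinates k, not (a_k = 1 and b_k = 1)". The sketch below, with hypothetical names, shows the naive quadratic baseline that fine-grained reductions suggest cannot be substantially beaten (under the Strong Exponential Time Hypothesis).

```python
from itertools import product

def orthogonal_pair(A, B):
    """Return a pair (a, b) with a in A, b in B whose dot product is 0,
    or None if no such pair exists.

    Naive O(n^2 * d) baseline for Orthogonal Vectors; under SETH,
    no O(n^(2 - eps)) algorithm is expected for this problem.
    """
    for a, b in product(A, B):
        # Check the inner first-order condition: all coordinates multiply to 0.
        if all(x * y == 0 for x, y in zip(a, b)):
            return (a, b)
    return None
```

Many natural database-style queries (graph pattern detection, similarity joins) reduce to or from this problem, which is one reason model-checking of first-order properties serves as a structuring lens for the complexity landscape inside P.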
DFG Programme Independent Junior Research Groups
 
 
