We consider the problem of a client who outsources the computation of a function $f$ over an input $x$ to a server, which returns $y=f(x)$. The client wants to be assured of the correctness of the computation, and wants to preserve the confidentiality of the input $x$ (and possibly of the function $f$ as well). Moreover, the client wants to invest substantially less effort in verifying the correctness of the result than it would take to compute $f(x)$ from scratch.
This is the problem of secure outsourced computation over encrypted data. Most of the work on outsourced computation in the literature focuses on either the privacy of the data, using {\em Fully Homomorphic Encryption (FHE)}, or the integrity of the computation, but not both. No general security definition for protocols achieving both privacy and integrity appears in the literature: previous definitions only consider a very limited security model in which the server is not allowed to issue {\em verification queries} to the client, i.e. it never gets to ``see'' whether the client accepts or rejects the returned value $y$.
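To make the notion concrete, the following is a minimal sketch (not the paper's formal definition) of a forgery game in which the adversary is additionally granted a verification oracle that reveals the accept/reject bit. The interface VC = (keygen, probgen, compute, verify) and all identifiers are illustrative assumptions, not the paper's actual syntax.

```python
# Toy forgery game with a verification oracle (illustrative only; the generic
# interface keygen/probgen/compute/verify is an assumption, not the paper's).
class VerificationGame:
    def __init__(self, vc, f, security_param):
        self.vc = vc
        self.f = f
        self.sk, self.pk = vc.keygen(f, security_param)
        self.last = None

    def encode_query(self, x):
        # Adversary submits an input and receives only the encoded instance.
        sigma_x, tau_x = self.vc.probgen(self.sk, x)
        self.last = (x, tau_x)
        return sigma_x

    def verify_query(self, sigma_y):
        # Crucially, the adversary learns whether the client accepts or rejects.
        assert self.last is not None
        x, tau_x = self.last
        y = self.vc.verify(self.sk, tau_x, sigma_y)
        return y is not None            # True = accepted, False = rejected

    def wins(self, sigma_y):
        # The adversary wins if a wrong result is accepted by the client.
        assert self.last is not None
        x, tau_x = self.last
        y = self.vc.verify(self.sk, tau_x, sigma_y)
        return y is not None and y != self.f(x)
```

In a model without `verify_query`, the adversary gets no feedback; once that oracle is available, the accept/reject bit can leak information that a malicious server can exploit against schemes proven secure only in the weaker model.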
In this paper we present:
-- A formal definition of {\em private and secure} outsourced computation {\em in the presence of verification queries};
-- A protocol based on FHE that achieves the above definition for arbitrary poly-time computations;
-- Protocols for the computation of specific {\em ad hoc} functions (such as polynomial evaluation and linear combinations) over encrypted data. These protocols do not rely on the full power of FHE and are therefore much more efficient than the generic solution (a toy sketch of homomorphic evaluation of a linear combination over encrypted data is given below). We point out that some existing protocols in the literature for these tasks become insecure in the presence of verification queries, whereas our protocols can be proven secure in the stronger model where verification queries are allowed.
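For reference, here is a toy illustration of evaluating a linear combination over encrypted data using nothing more than additive homomorphism. The PRF-based masking scheme, the modulus, and all identifiers are illustrative assumptions and not the constructions of this paper; the sketch covers only the homomorphic evaluation, not the verifiability or efficiency properties the actual protocols provide.

```python
# Toy additively homomorphic masking (NOT the paper's construction):
# Enc_k(i, x) = x + F_k(i) mod P, where F_k is a PRF instantiated via HMAC.
# The server can evaluate a public linear combination directly on ciphertexts.
import hmac, hashlib

P = 2**127 - 1                      # toy prime modulus

def prf(key, i):
    digest = hmac.new(key, i.to_bytes(8, 'big'), hashlib.sha256).digest()
    return int.from_bytes(digest, 'big') % P

def encrypt(key, i, x):
    return (x + prf(key, i)) % P    # ciphertext of the i-th input x_i

def server_eval(coeffs, cts):
    # Server computes sum_i c_i * Enc(x_i) without learning the x_i.
    return sum(c * ct for c, ct in zip(coeffs, cts)) % P

def client_decrypt(key, coeffs, result):
    # Client removes the aggregated mask sum_i c_i * F_k(i).
    mask = sum(c * prf(key, i) for i, c in enumerate(coeffs)) % P
    return (result - mask) % P

key = b'secret key'
xs = [3, 14, 15, 92]
coeffs = [2, 7, 1, 8]
cts = [encrypt(key, i, x) for i, x in enumerate(xs)]
assert client_decrypt(key, coeffs, server_eval(coeffs, cts)) == \
       sum(c * x for c, x in zip(coeffs, xs)) % P
```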