Is it possible to cache POST methods in HTTP?

With very simple caching semantics: if the parameters are the same (and the URL is the same, of course), then it's a hit. Is that possible? Recommended?

asked Mar 9, 2009 at 12:42

9 Answers

The corresponding RFC 2616 in section 9.5 (POST) allows the caching of the response to a POST message, if you use the appropriate headers.

Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.

Note that the same RFC states explicitly in section 13 (Caching in HTTP) that a cache must invalidate the corresponding entity after a POST request.

Some HTTP methods MUST cause a cache to invalidate an entity. This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers (if present). These methods are:

- PUT
- DELETE
- POST

It's not clear to me how these specifications can allow meaningful caching.

This is also reflected and further clarified in RFC 7231 (Section 4.3.3), which obsoletes RFC 2616.

Responses to POST requests are only cacheable when they include explicit freshness information (see Section 4.2.1 of [RFC7234]). However, POST caching is not widely implemented. For cases where an origin server wishes the client to be able to cache the result of a POST in a way that can be reused by a later GET, the origin server MAY send a 200 (OK) response containing the result and a Content-Location header field that has the same value as the POST's effective request URI (Section 3.1.4.2).

According to this, the result of a cached POST (if this ability is indicated by the server) can subsequently be used as the result of a GET request for the same URI.
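To make that concrete, here is a minimal sketch of what RFC 7231 describes: the POST result is stored under its Content-Location and may then satisfy a later GET for the same URI. This is toy in-memory logic, not a real HTTP cache, and the helper names (rememberPostResponse, answerGetFromCache) are made up for illustration.

// Toy illustration: store a cacheable POST response under its
// Content-Location, then answer a later GET for that URI from it.
const store = new Map()

function rememberPostResponse(requestUri, response) {
  const contentLocation = response.headers['content-location']
  const cacheControl = response.headers['cache-control'] || ''
  // Only store it if the server sent explicit freshness information
  // and the Content-Location matches the effective request URI.
  if (contentLocation === requestUri && cacheControl.includes('max-age')) {
    store.set(requestUri, response.body)
  }
}

function answerGetFromCache(requestUri) {
  // A later GET for the same URI may be served from the stored POST result.
  return store.get(requestUri) // undefined on a cache miss
}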

answered Mar 9, 2009 at 12:50 by Diomidis Spinellis

The origin server is a broker between HTTP and the application that handles the POST requests. The application is beyond the HTTP boundary and can do whatever it pleases. If caching makes sense for a specific POST request, it's free to cache it, just as the OS is free to cache disk requests.

Commented Mar 9, 2009 at 13:10

Diomidis, your statement that caching POST requests would not be HTTP is wrong. Please see reBoot's answer for details. It's not very helpful to have the wrong answer show up at the top, but that's how democracy works. If you agree with reBoot, it would be nice if you corrected your answer.

Commented Aug 11, 2011 at 2:24

Eugene, can we agree that a) POST should invalidate the cached entity (per section 13.10), so that e.g. a subsequent GET must fetch a fresh copy, and b) that the POST's response can be cached (per section 9.5), so that e.g. a subsequent POST can receive the same response?

Commented Aug 14, 2011 at 21:12

This is being clarified by HTTPbis; see mnot.net/blog/2012/09/24/caching_POST for a summary.

Commented Sep 23, 2012 at 16:05

Mark Nottingham's clarification, with slightly changed wording, is now standardized in RFC 7231, which obsoletes RFC 2616.

Commented May 27, 2020 at 11:14

According to RFC 2616 Section 9.5:

"Responses to POST method are not cacheable, UNLESS the response includes appropriate Cache-Control or Expires header fields."

So, YES, you can cache a POST response, but only if it arrives with appropriate headers. In most cases you don't want to cache the response. But in some cases - such as if you are not saving any data on the server - it's entirely appropriate.

Note, however, that many browsers, including Firefox 3.0.10 (current at the time of writing), will not cache a POST response regardless of the headers. IE behaves more smartly in this respect.

Now, I want to clear up some confusion here regarding RFC 2616 section 13.10. A POST to a URI doesn't "invalidate the resource for caching" as some have stated here. It makes a previously cached version of that URI stale, even if its cache-control headers indicated a longer freshness lifetime.

answered May 6, 2009 at 4:57 by reBoot

What's the difference between "invalidate the resource for caching" and "making a cached version of the URI stale"? Are you saying that the server is allowed to cache a POST response but clients may not?

Commented Jan 12, 2012 at 18:37

"making a cached version of the URI stale" applies where you use the same URI for GET and POST requests. If you are a cache sitting between the client and the server, you see GET /foo and you cache the response. Next you see POST /foo then you are required to invalidate the cached response from GET /foo even if the POST response doesn't include any cache control headers because they are the same URI, thus the next GET /foo will have to revalidate even if the original headers indicated the cache would still be live (if you had not seen the POST /foo request)

Commented Sep 28, 2018 at 10:01
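Roughly, a shared cache in that position behaves like the following toy sketch (a plain Map standing in for real cache storage; this is not an actual proxy implementation):

// Toy shared-cache logic illustrating RFC 2616 sections 13.10/13.11:
// GET responses may be stored; any POST/PUT/DELETE to the same URI is
// written through to the origin and invalidates the cached entry.
const cache = new Map()

async function handle(request, forwardToOrigin) {
  if (request.method === 'GET' && cache.has(request.uri)) {
    return cache.get(request.uri) // served from cache
  }
  const response = await forwardToOrigin(request) // write-through mandatory
  if (request.method === 'GET') {
    cache.set(request.uri, response)
  } else if (['POST', 'PUT', 'DELETE'].includes(request.method)) {
    cache.delete(request.uri) // the cached GET /foo is now stale
  }
  return response
}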

"But in some cases - such as if you are not saving any data on the server - it's entirely appropriate." What's the point of such a POST API in the first place, then?

Commented Jun 7, 2020 at 18:02

If you're wondering whether you can cache a POST request and try to research an answer to that question, you likely won't succeed. When searching "cache post request", the first result is this StackOverflow question.

The answers are a confused mixture of how caching should work, how caching works according to the RFC, how caching should work according to the RFC, and how caching works in practice. Let's start with the RFC, walk through how browsers actually work, then talk about CDNs, GraphQL, and other areas of concern.

RFC 2616

Per the RFC, POST requests must invalidate the cache:

13.10 Invalidation After Updates or Deletions
...
Some HTTP methods MUST cause a cache to invalidate an entity. This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers (if present). These methods are:
- PUT
- DELETE
- POST

This language suggests POST responses are not cacheable, but that is not true (in this case): the cache is only invalidated for previously stored data. The RFC then appears to clarify explicitly that yes, you can cache POST responses:

9.5 POST
...
Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.

Despite this language, setting Cache-Control does not let a cache answer subsequent POST requests to the same resource; every POST must still be sent through to the server:

13.11 Write-Through Mandatory
...
All methods that might be expected to cause modifications to the origin server's resources MUST be written through to the origin server. This currently includes all methods except for GET and HEAD. A cache MUST NOT reply to such a request from a client before having transmitted the request to the inbound server, and having received a corresponding response from the inbound server. This does not prevent a proxy cache from sending a 100 (Continue) response before the inbound server has sent its final reply.

How does that make sense? Well, you're not caching the POST request, you're caching the resource.

The POST response body can only be cached for subsequent GET requests to the same resource. Set the Location or Content-Location header in the POST response to communicate which resource the body represents. So the only technically valid way to cache a POST request is for subsequent GETs to the same resource.

The correct answer is therefore both yes and no:

Although the RFC allows caching a POST response for subsequent requests to the same resource, in practice browsers and CDNs do not implement this behavior and do not let you cache POST requests.

Demonstration of Browser Behavior

Given the following example JavaScript application (index.js):

const express = require('express')
const app = express()
let count = 0

app
  .get('/asdf', (req, res) => {
    count++
    const msg = `count is ${count}`
    console.log(msg)
    res
      .set('Access-Control-Allow-Origin', '*')
      .set('Cache-Control', 'public, max-age=30')
      .send(msg)
  })
  .post('/asdf', (req, res) => {
    count++
    const msg = `count is ${count}`
    console.log(msg)
    res
      .set('Access-Control-Allow-Origin', '*')
      .set('Cache-Control', 'public, max-age=30')
      .set('Content-Location', 'http://localhost:3000/asdf')
      .set('Location', 'http://localhost:3000/asdf')
      .status(201)
      .send(msg)
  })
  .set('etag', false)
  .disable('x-powered-by')
  .listen(3000, () => {
    console.log('Example app listening on port 3000!')
  })

And given an example web page (index.html) along the lines of the sketch below, which drives the server from the browser:
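A minimal page in this spirit issues fetch() GET and POST requests to http://localhost:3000/asdf; the element IDs and wiring here are illustrative assumptions standing in for the original markup.

<!-- Hypothetical stand-in page: two buttons that hit the Express app above. -->
<!DOCTYPE html>
<html>
  <body>
    <button id="get">GET /asdf</button>
    <button id="post">POST /asdf</button>
    <pre id="out"></pre>
    <script>
      const show = (text) => { document.getElementById('out').textContent = text }
      document.getElementById('get').onclick = () =>
        fetch('http://localhost:3000/asdf').then(r => r.text()).then(show)
      document.getElementById('post').onclick = () =>
        fetch('http://localhost:3000/asdf', { method: 'POST' }).then(r => r.text()).then(show)
    </script>
  </body>
</html>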

Install NodeJS and Express, and start the JavaScript application. Open the web page in your browser. Try a few different scenarios to test browser behavior, for example repeating the GET request and then repeating the POST request while watching the network panel and the server's log output.

This shows that, even though you can set the Cache-Control and Content-Location response headers, there is no way to make a browser cache an HTTP POST request.

Do I have to follow the RFC?

Browser behavior is not configurable, but if you're not a browser, you aren't necessarily bound by the rules of the RFC.

If you're writing application code, there's nothing stopping you from explicitly caching POST requests (pseudocode):

if (cache.get('hello')) {
  return cache.get('hello')
} else {
  response = post(url = 'http://somewebsite/hello', request_body = 'world')
  cache.put('hello', response.body)
  return response.body
}
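For instance, a concrete version of that pseudocode in Node might look like the following; the 'hello' key and URL are placeholders carried over from the pseudocode, and an in-memory Map is the simplest possible store.

// Application-level caching of a POST response, independent of HTTP caching rules.
const cache = new Map()

async function getHello() {
  if (cache.has('hello')) {
    return cache.get('hello') // skip the network entirely
  }
  const response = await fetch('http://somewebsite/hello', {
    method: 'POST',
    body: 'world',
  })
  const body = await response.text()
  cache.set('hello', body)
  return body
}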

CDNs, proxies, and gateways do not necessarily have to follow the RFC either. For example, if you use Fastly as your CDN, Fastly allows you to write custom VCL logic to cache POST requests.

Should I cache POST requests?

Whether your POST request should be cached or not depends on the context.

For example, you might query Elasticsearch or GraphQL using POST where your underlying query is idempotent. In those cases, it may or may not make sense to cache the response depending on the use case.
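If you do decide to cache such idempotent POST queries at the application level, one common pattern (sketched here without assuming any particular client library; the URL and query body are placeholders) is to key the cache on a hash of the request body:

const crypto = require('crypto')
const queryCache = new Map()

// Cache idempotent POST queries (e.g. a search body) by hashing the payload.
async function cachedQuery(url, queryBody) {
  const key = crypto.createHash('sha256').update(queryBody).digest('hex')
  if (queryCache.has(key)) {
    return queryCache.get(key)
  }
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: queryBody,
  })
  const result = await response.json()
  queryCache.set(key, result)
  return result
}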

In a RESTful API, POST requests usually create a resource and should not be cached. This also matches the RFC's understanding of POST as a non-idempotent operation.

GraphQL

If you're using GraphQL and require HTTP caching across CDNs and browsers, consider whether sending queries with the GET method instead of POST meets your requirements. As a caveat, browsers and CDNs impose different URI length limits, but operation safelisting (query whitelisting), already a best practice for externally facing production GraphQL apps, keeps URIs short.
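As a sketch of that approach (the endpoint and query are placeholders; many GraphQL servers accept the query and variables as URL parameters, but check your server's documentation):

// Send a GraphQL query via GET so that ordinary HTTP caching applies.
async function graphqlGet(endpoint, query, variables = {}) {
  const params = new URLSearchParams({
    query,
    variables: JSON.stringify(variables),
  })
  const response = await fetch(`${endpoint}?${params}`, { method: 'GET' })
  return response.json()
}

// Example (hypothetical endpoint and schema):
// graphqlGet('https://example.com/graphql', '{ user(id: 1) { name } }')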