About OkHttp

Basic Usage

HTTP GET

OkHttpClient mOkHttpClient = new OkHttpClient();

final Request request = new Request.Builder()
.url("https://www.baidu.com")
.build(); // more parameters (headers, method, etc.) can be set here

Call call = mOkHttpClient.newCall(request);

call.enqueue(new Callback(){
@Override
public void onFailure(Request request, IOException e){

}

@Override
public void onResponse(final Response response) throws IOException{
//String htmlStr = response.body().string();
}
});
  • onResponse()'s parameter is the Response. To get the body as a string, call response.body().string(); for binary data, call response.body().bytes(); for an InputStream, call response.body().byteStream().
  • onResponse() does not run on the UI thread.
  • Asynchronous: call.enqueue(); synchronous: call.execute().

HTTP POST with Parameters

FormEncodingBuilder builder = new FormEncodingBuilder();
builder.add("username", "K");

Request request = new Request.Builder()
.url(url)
.post(builder.build())
.build();

mOkHttpClient.newCall(request).enqueue(new Callback(){});

File Upload over HTTP

File file = new File(Environment.getExternalStorageDirectory(), "test.mp4");

RequestBody fileBody = RequestBody.create(MediaType.parse("application/octet-stream"), file);

RequestBody requestBody = new MultipartBuilder()
.type(MultipartBuilder.FORM)
.addPart(Headers.of("Content-Disposition", "form-data; name=\"username\""), RequestBody.create(null, "L"))
.addPart(Headers.of("Content-Disposition", "form-data; name=\"mFile\"; filename=\"test.mp4\""), fileBody)
.build();

Request request = new Request.Builder()
.url("fileUploadApi")
.post(requestBody)
.build();

Call call = mOkHttpClient.newCall(request);
call.enqueue(new Callback(){
...
});

Encapsulation

okhttputils

Source Code Analysis

Basic Flow

1. OkHttpClient

  • Constructor

    public OkHttpClient() {
    this(new Builder());
    }

    OkHttpClient(Builder builder) {...}
  • Builder pattern: parameters are configured through a Builder, and build() finally returns an OkHttpClient instance.

    public OkHttpClient build() {
    return new OkHttpClient(this);
    }

What design patterns can be seen in OkHttpClient? The Builder pattern and the Facade pattern.
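The Builder pattern above can be reproduced in isolation. This is a minimal sketch, not OkHttp's actual code; `ClientBuilderDemo`, `HttpClient`, and the timeout fields are hypothetical names chosen to mirror the structure:

```java
public class ClientBuilderDemo {
    // immutable "client", configured only through its Builder (like OkHttpClient)
    static final class HttpClient {
        final int connectTimeoutMillis;
        final int readTimeoutMillis;
        HttpClient(Builder builder) {
            this.connectTimeoutMillis = builder.connectTimeoutMillis;
            this.readTimeoutMillis = builder.readTimeoutMillis;
        }
        static final class Builder {
            int connectTimeoutMillis = 10_000; // defaults, like OkHttp's Builder()
            int readTimeoutMillis = 10_000;
            Builder connectTimeout(int millis) { this.connectTimeoutMillis = millis; return this; }
            Builder readTimeout(int millis) { this.readTimeoutMillis = millis; return this; }
            HttpClient build() { return new HttpClient(this); } // same shape as OkHttpClient.build()
        }
    }
    public static void main(String[] args) {
        HttpClient client = new HttpClient.Builder()
                .connectTimeout(5_000)
                .readTimeout(15_000)
                .build();
        System.out.println(client.connectTimeoutMillis + " " + client.readTimeoutMillis);
    }
}
```

The payoff is the same as in OkHttp: the client object stays immutable, while configuration reads as a fluent chain.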

2. Request

Request(Builder builder) {
this.url = builder.url;
this.method = builder.method;
this.headers = builder.headers.build();
this.body = builder.body;
this.tag = builder.tag != null ? builder.tag : this;
}

This means that building a Request also goes through the Builder pattern.

public Builder newBuilder() {
return new Builder(this);
}
//builder===================
public Builder() {
this.method = "GET";
this.headers = new Headers.Builder();
}

Builder(Request request) {
this.url = request.url;
this.method = request.method;
this.body = request.body;
this.tag = request.tag;
this.headers = request.headers.newBuilder();
}
public Request build() {
if (url == null) throw new IllegalStateException("url == null");
return new Request(this);
}

Request construction is likewise based on the Builder pattern.

3. Asynchronous Requests

After building the Request, the next step is to build a Call: Call call = mOkHttpClient.newCall(request);

@Override public Call newCall(Request request) {
return RealCall.newRealCall(this, request, false /* for web socket */);
}
public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {...}
interface Factory {
Call newCall(Request request);
}

The interface is simple: it only defines a newCall method for creating a Call. This is the factory pattern at work: the construction details are left to the concrete implementation, and the caller only needs the resulting Call object.
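The factory-method idea behind Call.Factory can be sketched standalone. All names here (`CallFactoryDemo`, `Client`, the String-based request) are hypothetical stand-ins, not OkHttp code:

```java
public class CallFactoryDemo {
    interface Call { String execute(); }
    interface Factory { Call newCall(String request); } // mirrors okhttp3.Call.Factory

    // the client is the concrete factory: callers never construct RealCall directly
    static final class Client implements Factory {
        @Override public Call newCall(String request) { return new RealCall(request); }
    }
    static final class RealCall implements Call {
        final String request;
        RealCall(String request) { this.request = request; }
        @Override public String execute() { return "response for " + request; }
    }
    public static void main(String[] args) {
        Factory client = new Client();
        Call call = client.newCall("GET /");
        System.out.println(call.execute());
    }
}
```

Just as in OkHttp, swapping the Call implementation later would only require changing the factory, not every call site.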

final class RealCall implements Call {
...
static RealCall newRealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
// Safely publish the Call instance to the EventListener.
RealCall call = new RealCall(client, originalRequest, forWebSocket);
call.eventListener = client.eventListenerFactory().create(call); // hooks up event listening; another use of the factory pattern
return call;
}
...
}
private RealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
this.client = client;
this.originalRequest = originalRequest;
this.forWebSocket = forWebSocket;
// a retryAndFollowUpInterceptor is created by default
this.retryAndFollowUpInterceptor = new RetryAndFollowUpInterceptor(client, forWebSocket);
}

Now that the Call is created, the usual final step is handing the request over to the dispatcher:

call.enqueue(new Callback(){
@Override
public void onFailure(Request request, IOException e){

}

@Override
public void onResponse(final Response response) throws IOException{

}
});

This calls call.enqueue():

@Override public void enqueue(Callback responseCallback) {
synchronized (this) { // guard against concurrent calls from multiple threads
if (executed) throw new IllegalStateException("Already Executed");
executed = true;
}
captureCallStackTrace(); // capture the call stack for debugging
eventListener.callStart(this); // event-listener callback signalling that the call has started
client.dispatcher().enqueue(new AsyncCall(responseCallback));
}

For client.dispatcher().enqueue(new AsyncCall(responseCallback)); we need to go back to the OkHttpClient source.

public Dispatcher dispatcher() {
return dispatcher;
}
synchronized void enqueue(AsyncCall call) {
if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
runningAsyncCalls.add(call);
executorService().execute(call);
} else {
readyAsyncCalls.add(call);
}
}

First, a quick look at Dispatcher's member fields:

public final class Dispatcher {
private int maxRequests = 64;
private int maxRequestsPerHost = 5;
private @Nullable Runnable idleCallback;

/** Executes calls. Created lazily. */
private @Nullable ExecutorService executorService;

/** Ready async calls in the order they'll be run. */
private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();

/** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();

/** Running synchronous calls. Includes canceled calls that haven't finished yet. */
private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();
...
}

With that, the logic above becomes clear: when the number of running asynchronous calls is below maxRequests (64) and the number of calls to the same host is below maxRequestsPerHost (5), the call is added to the running queue runningAsyncCalls and executed on the thread pool; otherwise it is placed in the waiting queue readyAsyncCalls.
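The gating logic can be sketched in isolation. `DispatcherDemo` and its String-host queues are hypothetical stand-ins for OkHttp's AsyncCall deques, not the real implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DispatcherDemo {
    static final int MAX_REQUESTS = 64;          // OkHttp's maxRequests default
    static final int MAX_REQUESTS_PER_HOST = 5;  // OkHttp's maxRequestsPerHost default

    final Deque<String> runningAsyncCalls = new ArrayDeque<>(); // holds target hosts
    final Deque<String> readyAsyncCalls = new ArrayDeque<>();

    // returns true if the call was started, false if it was queued to wait
    boolean enqueue(String host) {
        if (runningAsyncCalls.size() < MAX_REQUESTS && runningForHost(host) < MAX_REQUESTS_PER_HOST) {
            runningAsyncCalls.add(host); // in OkHttp this is followed by executorService().execute(call)
            return true;
        } else {
            readyAsyncCalls.add(host);
            return false;
        }
    }
    int runningForHost(String host) {
        int n = 0;
        for (String h : runningAsyncCalls) if (h.equals(host)) n++;
        return n;
    }
    public static void main(String[] args) {
        DispatcherDemo d = new DispatcherDemo();
        for (int i = 0; i < 6; i++) d.enqueue("example.com"); // 6th call exceeds the per-host limit
        System.out.println(d.runningAsyncCalls.size() + " running, " + d.readyAsyncCalls.size() + " waiting");
    }
}
```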

Now look at the AsyncCall source:

final class AsyncCall extends NamedRunnable {...}
public abstract class NamedRunnable implements Runnable {
protected final String name;

public NamedRunnable(String format, Object... args) {
this.name = Util.format(format, args);
}

@Override public final void run() {
// temporarily rename the thread while the task runs, then restore the old name
String oldName = Thread.currentThread().getName();
Thread.currentThread().setName(name);
try {
execute();
} finally {
Thread.currentThread().setName(oldName);
}
}

protected abstract void execute();
}
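The rename-run-restore trick above can be reproduced standalone. This is a minimal sketch; `NamedRunnableDemo` and its `observed` array are hypothetical scaffolding around the same pattern:

```java
public class NamedRunnableDemo {
    // mirrors okhttp3.internal.NamedRunnable's rename-run-restore pattern
    abstract static class NamedRunnable implements Runnable {
        protected final String name;
        NamedRunnable(String name) { this.name = name; }
        @Override public final void run() {
            String oldName = Thread.currentThread().getName();
            Thread.currentThread().setName(name); // visible in thread dumps while the task runs
            try {
                execute();
            } finally {
                Thread.currentThread().setName(oldName); // restore, since pool threads are reused
            }
        }
        protected abstract void execute();
    }

    static final String[] observed = new String[1];

    public static void main(String[] args) {
        String before = Thread.currentThread().getName();
        new NamedRunnable("OkHttp https://example.com") {
            @Override protected void execute() {
                observed[0] = Thread.currentThread().getName(); // the task sees the new name
            }
        }.run();
        // after run() the original thread name is back
        System.out.println(observed[0] + " / " + Thread.currentThread().getName().equals(before));
    }
}
```

Naming the thread after the request URL is why OkHttp worker threads are easy to identify in a debugger or thread dump.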

NamedRunnable is an abstract class implementing Runnable. It renames the current thread to the name passed in the constructor, runs execute(), and restores the old name in the finally block. Back in AsyncCall, find execute():

@Override protected void execute() {
boolean signalledCallback = false;
try {
Response response = getResponseWithInterceptorChain();
if (retryAndFollowUpInterceptor.isCanceled()) {
signalledCallback = true;
responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
} else {
signalledCallback = true;
responseCallback.onResponse(RealCall.this, response);
}
} catch (IOException e) {
if (signalledCallback) {
// Do not signal the callback twice!
Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
} else {
eventListener.callFailed(RealCall.this, e);
responseCallback.onFailure(RealCall.this, e);
}
} finally {
client.dispatcher().finished(this);
}
}

Finally, the Response appears: the actual network request happens inside getResponseWithInterceptorChain(). The code after it is essentially interface callbacks reporting the execution state of the current Call.

Response getResponseWithInterceptorChain() throws IOException {
// Build a full stack of interceptors.
List<Interceptor> interceptors = new ArrayList<>();
interceptors.addAll(client.interceptors());
interceptors.add(retryAndFollowUpInterceptor); // retry and redirect interceptor
interceptors.add(new BridgeInterceptor(client.cookieJar())); // bridges application requests/responses to network ones
interceptors.add(new CacheInterceptor(client.internalCache())); // cache interceptor: serves from and updates the cache
interceptors.add(new ConnectInterceptor(client)); // establishes the connection to the server
if (!forWebSocket) {
// the networkInterceptors configured on the OkHttpClient
interceptors.addAll(client.networkInterceptors());
}
// sends the request to and reads the response from the server (the actual network I/O)
interceptors.add(new CallServerInterceptor(forWebSocket));

Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
originalRequest, this, eventListener, client.connectTimeoutMillis(),
client.readTimeoutMillis(), client.writeTimeoutMillis());

return chain.proceed(originalRequest);
}

This is the list of interceptors. With a default OkHttpClient, OkHttp installs these interceptors automatically; each performs a distinct task, and they are decoupled from one another.

retryAndFollowUpInterceptor — handles retries and redirects
BridgeInterceptor — bridges application requests/responses to network requests/responses
CacheInterceptor — serves responses from the cache and updates the cache
ConnectInterceptor — establishes the server connection (connection pool, etc.)
networkInterceptors — the networkInterceptors configured on the OkHttpClient
CallServerInterceptor — sends the request to and reads the response from the server (the actual network I/O)

After the interceptors are assembled, they are executed.

Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
originalRequest, this, eventListener, client.connectTimeoutMillis(),
client.readTimeoutMillis(), client.writeTimeoutMillis());

return chain.proceed(originalRequest);

A RealInterceptorChain is created here and proceed() is called; note the index parameter of 0.

/**
* A concrete interceptor chain that carries the entire interceptor chain: all application
* interceptors, the OkHttp core, all network interceptors, and finally the network caller.
*/
public final class RealInterceptorChain implements Interceptor.Chain {...}

RealInterceptorChain is the call chain composed of all the interceptors; the final network request is also initiated by it. Let's look at its proceed():

public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
RealConnection connection) throws IOException {
if (index >= interceptors.size()) throw new AssertionError();

calls++;

// If we already have a stream, confirm that the incoming request will use it.
if (this.httpCodec != null && !this.connection.supportsUrl(request.url())) {
throw new IllegalStateException("network interceptor " + interceptors.get(index - 1)
+ " must retain the same host and port");
}

// If we already have a stream, confirm that this is the only call to chain.proceed().
if (this.httpCodec != null && calls > 1) {
throw new IllegalStateException("network interceptor " + interceptors.get(index - 1)
+ " must call proceed() exactly once");
}

// Call the next interceptor in the chain.
RealInterceptorChain next = new RealInterceptorChain(interceptors, streamAllocation, httpCodec,
connection, index + 1, request, call, eventListener, connectTimeout, readTimeout,
writeTimeout);
Interceptor interceptor = interceptors.get(index);
Response response = interceptor.intercept(next); // invokes the intercept() that each concrete interceptor implements from the Interceptor interface

// Confirm that the next interceptor made its required call to chain.proceed().
if (httpCodec != null && index + 1 < interceptors.size() && next.calls != 1) {
throw new IllegalStateException("network interceptor " + interceptor
+ " must call proceed() exactly once");
}

// Confirm that the intercepted response isn't null.
if (response == null) {
throw new NullPointerException("interceptor " + interceptor + " returned null");
}

if (response.body() == null) {
throw new IllegalStateException(
"interceptor " + interceptor + " returned a response with no body");
}

return response;
}

Starting from index 0, an AssertionError is thrown if index exceeds the number of interceptors. A new RealInterceptorChain is then created, with the parameters passed along and the index incremented to index + 1; the interceptor at index is fetched and its intercept() is called with the newly created next chain. The traversal is thus completed recursively.

public final class ConnectInterceptor implements Interceptor{
public final OkHttpClient client;

public ConnectInterceptor(OkHttpClient client){
this.client = client;
}

@Override public Response intercept(Chain chain) throws IOException{
RealInterceptorChain realChain = (RealInterceptorChain) chain;
...
return realChain.proceed(request, streamAllocation, httpCodec, connection);
}
}

Here we took ConnectInterceptor as the example. After obtaining the Chain and doing its own work, it calls proceed() again, continuing the logic above: index + 1, fetch the next Interceptor, and repeat. This recursive loop is OkHttp's classic use of the chain-of-responsibility pattern.
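The index + 1 recursion can be boiled down to a few lines. This sketch keeps RealInterceptorChain's shape but swaps Request/Response for plain Strings; `InterceptorChainDemo` and the three toy interceptors are hypothetical, not OkHttp code:

```java
import java.util.Arrays;
import java.util.List;

public class InterceptorChainDemo {
    interface Interceptor { String intercept(Chain chain); }
    interface Chain { String request(); String proceed(String request); }

    // each proceed() builds the next link with index + 1, exactly like RealInterceptorChain
    static final class RealChain implements Chain {
        final List<Interceptor> interceptors;
        final int index;
        final String request;
        RealChain(List<Interceptor> interceptors, int index, String request) {
            this.interceptors = interceptors; this.index = index; this.request = request;
        }
        @Override public String request() { return request; }
        @Override public String proceed(String request) {
            if (index >= interceptors.size()) throw new AssertionError();
            RealChain next = new RealChain(interceptors, index + 1, request);
            return interceptors.get(index).intercept(next);
        }
    }

    public static String run(String request) {
        List<Interceptor> interceptors = Arrays.asList(
            chain -> "retry(" + chain.proceed(chain.request()) + ")",  // stands in for RetryAndFollowUpInterceptor
            chain -> "bridge(" + chain.proceed(chain.request()) + ")", // stands in for BridgeInterceptor
            chain -> "served:" + chain.request()                       // last link answers without calling proceed()
        );
        return new RealChain(interceptors, 0, request).proceed(request);
    }
    public static void main(String[] args) {
        System.out.println(run("GET /")); // retry(bridge(served:GET /))
    }
}
```

The nesting in the output makes the key property visible: every interceptor sees the request on the way down and the response on the way back up, which is exactly how OkHttp's interceptors compose.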

4. Synchronous Requests

@Override public Response execute() throws IOException {
synchronized (this) {
if (executed) throw new IllegalStateException("Already Executed");
executed = true;
}
captureCallStackTrace();
eventListener.callStart(this); // event callback
try {
client.dispatcher().executed(this); // add the call to the synchronous queue
Response result = getResponseWithInterceptorChain(); // build the interceptor chain and obtain the Response
if (result == null) throw new IOException("Canceled");
return result;
} catch (IOException e) {
eventListener.callFailed(this, e);
throw e;
} finally {
client.dispatcher().finished(this);
}
}

The flow is essentially the same; apart from executing synchronously, the core path is still getResponseWithInterceptorChain().

OkHttp overall flow

RetryAndFollowUpInterceptor

1. Macro Flow

@Override public Response intercept(Chain chain) throws IOException {
...
while (true) {
...
try {
response = realChain.proceed(request, streamAllocation, null, null);
}
...
if (/* conditions met */) {
return response;
}
...
// conditions not met: rebind the variables and go around again
request = followUp;
priorResponse = response;
}
}

At the macro level, the key call inside the loop body is:

response = realChain.proceed(request, streamAllocation, null, null);

Executing this call hands the work to the next interceptor, so viewed in isolation this interceptor appears to do very little.

But when something goes wrong and the conditions aren't met, a series of operations rebuilds the Request and issues it again. That is what the while loop is for, and it is exactly this interceptor's main job: retries and redirects.

2. Detailed Process

@Override public Response intercept(Chain chain) throws IOException {
Request request = chain.request();
RealInterceptorChain realChain = (RealInterceptorChain) chain;
Call call = realChain.call();
EventListener eventListener = realChain.eventListener();
// StreamAllocation coordinates the relationship between Calls, Streams and Connections
StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
createAddress(request.url()), call, eventListener, callStackTrace);
this.streamAllocation = streamAllocation;

int followUpCount = 0;
Response priorResponse = null;
while (true) {
if (canceled) { // the request has been canceled
streamAllocation.release();
throw new IOException("Canceled");
}

Response response;
boolean releaseConnection = true;
try {
// hand off to the rest of the chain
response = realChain.proceed(request, streamAllocation, null, null);
releaseConnection = false;
} catch (RouteException e) {
// The attempt to connect via a route failed. The request will not have been sent.
if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
// check whether we can recover from the previous attempt; if not, rethrow
throw e.getLastConnectException();
}
releaseConnection = false;
// retry
continue;
} catch (IOException e) {
// An attempt to communicate with a server failed. The request may have been sent.
// first determine whether the request has already been sent
boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
// the same recoverability check
if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
releaseConnection = false;
continue; // retry
} finally {
// We're throwing an unchecked exception. Release any resources.
// an unchecked exception is propagating; release resources
if (releaseConnection) {
streamAllocation.streamFailed(null);
streamAllocation.release();
}
}

// Attach the prior response if it exists. Such responses never have a body.
// priorResponse holds the previous Response; it is combined with the current one here.
// The scenario: a Response arrives and needs a redirect, so it is stored in priorResponse
// and the flow runs again, until no more redirects are needed, at which point
// priorResponse and the final Response are combined.
if (priorResponse != null) {
response = response.newBuilder()
.priorResponse(priorResponse.newBuilder()
.body(null)
.build())
.build();
}

// decide whether a follow-up (e.g. redirect) is needed: returns the follow-up Request, or null
Request followUp = followUpRequest(response, streamAllocation.route());

if (followUp == null) { // no follow-up needed
if (!forWebSocket) { // not a web socket: release resources
streamAllocation.release();
}
return response;
}

closeQuietly(response.body()); // a follow-up is needed: close the response body

// bump the follow-up count and enforce the MAX_FOLLOW_UPS (20) limit
if (++followUpCount > MAX_FOLLOW_UPS) {
streamAllocation.release();
throw new ProtocolException("Too many follow-up requests: " + followUpCount);
}

// a streamed body isn't buffered, so it can't be sent again
if (followUp.body() instanceof UnrepeatableRequestBody) {
streamAllocation.release();
throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
}
// if the follow-up doesn't reuse the same connection, create a new StreamAllocation
if (!sameConnection(response, followUp.url())) {
streamAllocation.release();
streamAllocation = new StreamAllocation(client.connectionPool(),
createAddress(followUp.url()), call, eventListener, callStackTrace);
this.streamAllocation = streamAllocation;
} else if (streamAllocation.codec() != null) {
throw new IllegalStateException("Closing the body of " + response
+ " didn't close its backing stream. Bad interceptor?");
}
// rebind and loop again
request = followUp;
priorResponse = response;
}
}
// StreamAllocation coordinates the relationship between Calls, Streams and Connections
StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
createAddress(request.url()), call, eventListener, callStackTrace);

This class can roughly be understood as managing the relationship between Connections, Streams, and Calls, which its constructor parameters also suggest.

Now into the loop body. First, when the request has been canceled, the loop is exited (the first way out).

Response response;
boolean releaseConnection = true;
try {
// hand off to the rest of the chain
response = realChain.proceed(request, streamAllocation, null, null);
releaseConnection = false;
} catch (RouteException e) {
// The attempt to connect via a route failed. The request will not have been sent.
if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
// check whether we can recover from the previous attempt; if not, rethrow
throw e.getLastConnectException();
}
releaseConnection = false;
// retry
continue;
}

Next, the try/catch body. The try simply runs the rest of the interceptor chain. Note the releaseConnection variable, which matters for the later checks: it is initialized to true.

Now the important part:

The first exception caught is RouteException; per the comment, connecting via a route failed and the request has not been sent. recover() is then called; note the false argument. Into the method body:

private boolean recover(IOException e, boolean requestSendStarted, Request userRequest) {
streamAllocation.streamFailed(e);

// The application layer has forbidden retries.
// retryOnConnectionFailure can be set to false when configuring the OkHttpClient;
// a default OkHttpClient has it set to true
if (!client.retryOnConnectionFailure()) return false;

// We can't send the request body again.
// if the request has been sent and its body is an UnrepeatableRequestBody, don't retry.
// StreamedRequestBody implements UnrepeatableRequestBody: it's a stream, not buffered,
// so it can only be sent once
if (requestSendStarted && userRequest.body() instanceof UnrepeatableRequestBody) return false;

// This exception is fatal.
// serious problems are not worth retrying
if (!isRecoverable(e, requestSendStarted)) return false;

// No more routes to attempt.
// no more routes: don't retry
if (!streamAllocation.hasMoreRoutes()) return false;

// For failure recovery, use the same route selector with a new connection.
return true;
}

In the second check, UnrepeatableRequestBody turns out to be an empty interface:

public interface UnrepeatableRequestBody {
}

This empty interface marks request bodies that cannot be sent repeatedly, which raises the question of which requests those are. In the current OkHttp source, only one request body implements it: StreamedRequestBody.

/**
* This request body streams bytes from an application thread to an OkHttp dispatcher thread via a
* pipe. Because the data is not buffered it can only be transmitted once.
*/
final class StreamedRequestBody extends OutputStreamRequestBody implements UnrepeatableRequestBody {}

As the class comment explains, StreamedRequestBody implements UnrepeatableRequestBody because it is a stream: the data is not buffered, so it can only be transmitted once.
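The marker-interface technique reduces the whole check to a single instanceof test. A minimal sketch (all class names here are hypothetical, modeled on the OkHttp types above):

```java
public class MarkerInterfaceDemo {
    interface UnrepeatableRequestBody {} // empty marker, like OkHttp's

    static class RequestBody {}
    static final class BufferedBody extends RequestBody {}
    static final class StreamedBody extends RequestBody implements UnrepeatableRequestBody {}

    // the relevant part of recover() boils down to this one test
    static boolean canRetry(RequestBody body, boolean requestSendStarted) {
        return !(requestSendStarted && body instanceof UnrepeatableRequestBody);
    }

    public static void main(String[] args) {
        System.out.println(canRetry(new BufferedBody(), true));  // buffered bodies can be resent
        System.out.println(canRetry(new StreamedBody(), true));  // the stream was already consumed
        System.out.println(canRetry(new StreamedBody(), false)); // nothing was sent yet
    }
}
```

The design choice: a marker interface carries the "one-shot" property in the type itself, so new unrepeatable body types opt in simply by implementing the interface, with no changes to recover().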

On to the third check, if (!isRecoverable(e, requestSendStarted)) return false;:

private boolean isRecoverable(IOException e, boolean requestSendStarted) {
// If there was a protocol problem, don't recover.
if (e instanceof ProtocolException) { // a protocol problem: don't recover
return false;
}

// If there was an interruption don't recover, but if there was a timeout connecting to a route
// we should try the next route (if there is one).
if (e instanceof InterruptedIOException) { // an interruption
// a timeout when the request has not yet been sent may be retried;
// anything else should not be
return e instanceof SocketTimeoutException && !requestSendStarted;
}

// Look for known client-side or negotiation errors that are unlikely to be fixed by trying
// again with a different route.
if (e instanceof SSLHandshakeException) {
// If the problem was a CertificateException from the X509TrustManager,
// do not retry.
// a certificate/security problem: don't retry
if (e.getCause() instanceof CertificateException) {
return false;
}
}
if (e instanceof SSLPeerUnverifiedException) {
// e.g. a certificate pinning error.
return false;
}

// An example of one we might want to retry with a different route is a problem connecting to a
// proxy and would manifest as a standard IOException. Unless it is one we know we should not
// retry, we return true and try a new route.
return true;
}

In short: for serious problems (protocol, security, ...), retrying is refused. These checks all live in isRecoverable(), and the serious cases are:

  • a protocol problem: don't retry;
  • a timeout: retry only if the request has not been sent yet, otherwise don't;
  • a security problem: don't retry.

The last check: if (!streamAllocation.hasMoreRoutes()) return false;

public boolean hasMoreRoutes() {
return route != null || routeSelector.hasNext();
}
/**
* Returns true if there's another route to attempt. Every address has at least one route.
*/
public boolean hasNext() {
return hasNextInetSocketAddress()
|| hasNextProxy()
|| hasNextPostponed();
}

This check says: with no more usable routes, don't retry (the fourth condition that rejects reconnecting). Note that the routes in routeSelection are kept in a List.

catch (RouteException e) {
// The attempt to connect via a route failed. The request will not have been sent.
if (!recover(e.getLastConnectException(), false, request)) {
throw e.getLastConnectException();
}
releaseConnection = false;
// retry
continue;
}

So after these checks, if a retry is warranted, continue re-runs the loop body, which is precisely this interceptor doing its job: retrying.

catch (IOException e) {
// An attempt to communicate with a server failed. The request may have been sent.
// first determine whether the request has already been sent
boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
// the same recoverability check
if (!recover(e, requestSendStarted, request)) throw e;
releaseConnection = false;
// retry
continue;
}

Now the next exception, IOException. Note requestSendStarted here (earlier versions had a repeated-request loop problem): this time the argument is not a hard-coded false but the computed requestSendStarted. As before, when a retry is warranted the loop continues.

finally {
// We're throwing an unchecked exception. Release any resources.
if (releaseConnection) {
// an unchecked exception is propagating; release any resources
streamAllocation.streamFailed(null);
streamAllocation.release();
}
}
// Attach the prior response if it exists. Such responses never have a body.
if (priorResponse != null) {
response = response.newBuilder()
.priorResponse(priorResponse.newBuilder()
.body(null)
.build())
.build();
}

priorResponse holds the previous Response; here it is combined with the current one. The scenario: a Response arrives and turns out to need a redirect, so it is stored into priorResponse and the flow runs again, until no more redirects are needed, at which point priorResponse and the final Response are combined.

      Request followUp = followUpRequest(response);
//=========================followUpRequest()==============================
/**
* Figures out the HTTP request to make in response to receiving {@code userResponse}. This will
* either add authentication headers, follow redirects or handle a client request timeout. If a
* follow-up is either unnecessary or not applicable, this returns null.
*/
private Request followUpRequest(Response userResponse) throws IOException {
if (userResponse == null) throw new IllegalStateException();
Connection connection = streamAllocation.connection();
Route route = connection != null
? connection.route()
: null;
int responseCode = userResponse.code();

final String method = userResponse.request().method();
switch (responseCode) {
case HTTP_PROXY_AUTH:
Proxy selectedProxy = route != null
? route.proxy()
: client.proxy();
if (selectedProxy.type() != Proxy.Type.HTTP) {
throw new ProtocolException("Received HTTP_PROXY_AUTH (407) code while not using proxy");
}
return client.proxyAuthenticator().authenticate(route, userResponse);

case HTTP_UNAUTHORIZED:
return client.authenticator().authenticate(route, userResponse);

case HTTP_PERM_REDIRECT:
case HTTP_TEMP_REDIRECT:
// "If the 307 or 308 status code is received in response to a request other than GET
// or HEAD, the user agent MUST NOT automatically redirect the request"
if (!method.equals("GET") && !method.equals("HEAD")) {
return null;
}
// fall-through
case HTTP_MULT_CHOICE:
case HTTP_MOVED_PERM:
case HTTP_MOVED_TEMP:
case HTTP_SEE_OTHER:
// Does the client allow redirects?
if (!client.followRedirects()) return null;

String location = userResponse.header("Location");
if (location == null) return null;
HttpUrl url = userResponse.request().url().resolve(location);

// Don't follow redirects to unsupported protocols.
if (url == null) return null;

// If configured, don't follow redirects between SSL and non-SSL.
boolean sameScheme = url.scheme().equals(userResponse.request().url().scheme());
if (!sameScheme && !client.followSslRedirects()) return null;

// Most redirects don't include a request body.
Request.Builder requestBuilder = userResponse.request().newBuilder();
if (HttpMethod.permitsRequestBody(method)) {
final boolean maintainBody = HttpMethod.redirectsWithBody(method);
if (HttpMethod.redirectsToGet(method)) {
requestBuilder.method("GET", null);
} else {
RequestBody requestBody = maintainBody ? userResponse.request().body() : null;
requestBuilder.method(method, requestBody);
}
if (!maintainBody) {
requestBuilder.removeHeader("Transfer-Encoding");
requestBuilder.removeHeader("Content-Length");
requestBuilder.removeHeader("Content-Type");
}
}

// When redirecting across hosts, drop all authentication headers. This
// is potentially annoying to the application layer since they have no
// way to retain them.
if (!sameConnection(userResponse, url)) {
requestBuilder.removeHeader("Authorization");
}
// a new Request is constructed for the redirect
return requestBuilder.url(url).build();

case HTTP_CLIENT_TIMEOUT:
// 408's are rare in practice, but some servers like HAProxy use this response code. The
// spec says that we may repeat the request without modifications. Modern browsers also
// repeat the request (even non-idempotent ones.)
if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
return null;
}

return userResponse.request();

default:
return null;
}
}

The code above is mainly about understanding followUpRequest(). There is no need to dwell on every line; that would only slow the reading down. The key observation is that when the response code meets certain conditions, a new Request is constructed.

if(followUp == null){
// no follow-up needed
if(!forWebSocket){
// not a web socket: release resources
streamAllocation.release();
}
return response;
}

When no follow-up is needed, i.e. null is returned, the response is returned directly.

closeQuietly(response.body());

if (++followUpCount > MAX_FOLLOW_UPS) {
streamAllocation.release();
throw new ProtocolException("Too many follow-up requests: " + followUpCount);
}

if (followUp.body() instanceof UnrepeatableRequestBody) {
streamAllocation.release();
throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
}

if (!sameConnection(response, followUp.url())) {
streamAllocation.release();
streamAllocation = new StreamAllocation(
client.connectionPool(), createAddress(followUp.url()), callStackTrace);
} else if (streamAllocation.codec() != null) {
throw new IllegalStateException("Closing the body of " + response
+ " didn't close its backing stream. Bad interceptor?");
}

request = followUp;
priorResponse = response;

When the return value is non-null, a new Request has been constructed and a redirect is needed:

  1. Close the response body: closeQuietly(response.body());
  2. Increment the follow-up count and keep it under the maximum: ++followUpCount > MAX_FOLLOW_UPS;
  3. The body must not be an UnrepeatableRequestBody, the empty marker interface for requests that can only be sent once;
  4. If the follow-up does not reuse the same connection, create a new StreamAllocation;
  5. Rebind the variables, end this iteration, and continue the while loop, i.e. execute the redirected request.

3. Summary

This interceptor's main job is retrying and redirecting requests. Retry is refused under these conditions:

  • retryOnConnectionFailure was set to false when configuring the OkHttpClient, i.e. failure retries are disabled, so recover() returns false
  • the request has already been sent and its body is an UnrepeatableRequestBody, so it cannot be retried
  • the problem is serious (protocol, security, etc.), so retry is refused
  • there are no more usable routes, so retrying is pointless

CacheInterceptor

1. Macro Flow

@Override public Response intercept(Chain chain) throws IOException {
// 1. try to fetch a cached response for this Request
Response cacheCandidate = cache != null
? cache.get(chain.request())
: null;
// 2. if the network may not be used and the cache is empty, return a synthetic 504 Response
if (networkRequest == null && cacheResponse == null) {
return new Response;
}
// 3. if the network may not be used but a cache entry exists, return the cache
if (networkRequest == null) {
return cacheResponse;
}
// 4. chain on to the next interceptor
networkResponse = chain.proceed(networkRequest);
// 5. if a cache entry exists and the network returned 304
// (304 means the client has a cached copy and made a conditional request, typically with an
// If-Modified-Since header asking only for documents updated since a given date)
// the server is saying the cached document is still valid, so use the cached response
if (cacheResponse != null) {
if (networkResponse.code() == HTTP_NOT_MODIFIED) {
Response response = cacheResponse.newBuilder()
return response;
}
}
// 6. otherwise use the Response from the network
Response response = networkResponse;
// 7. and cache it (provided it is cacheable)
cache.put(response);

return response;
}

2. Detailed Process

 @Override public Response intercept(Chain chain) throws IOException {
Response cacheCandidate = cache != null
? cache.get(chain.request())
: null; //默认cache为null,可以在构建okHttpClient时配置cache,不为空时尝试获取缓存中的response

long now = System.currentTimeMillis();
//根据response,time,request创建一个缓存策略,用于判断怎样是使用缓存
CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;

if (cache != null) {
cache.trackResponse(strategy);
}

if (cacheCandidate != null && cacheResponse == null) {
closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}

// If we're forbidden from using the network and the cache is insufficient, fail.
//如果缓存策略中禁止使用网络,并且缓存又为空,则构建一个response直接返回,注意返回码为504
if (networkRequest == null && cacheResponse == null) {
return new Response.Builder()
.request(chain.request())
.protocol(Protocol.HTTP_1_1)
.code(504)
.message("Unsatisfiable Request (only-if-cached)")
.body(Util.EMPTY_RESPONSE)
.sentRequestAtMillis(-1L)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
}

// If we don't need the network, we're done.
//如果不使用网络,但是又缓存,直接返回缓存
if (networkRequest == null) {
return cacheResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.build();
}

Response networkResponse = null;
try {
直接走后续过滤器
networkResponse = chain.proceed(networkRequest);
} finally {
// If we're crashing on I/O or otherwise, don't leak the cache body.
if (networkResponse == null && cacheCandidate != null) {
closeQuietly(cacheCandidate.body());
}
}

// If we have a cache response too, then we're doing a conditional get.
//当缓存响应和网络响应同时存在的时候,选择用哪个
if (cacheResponse != null) {
if (networkResponse.code() == HTTP_NOT_MODIFIED) {
//如果返回码是304,客户端有缓冲的文档并发出了一个条件性的请求
//(一般是提供If-Modified-Since头表示用户只想到指定日期更新文档)
//服务器告诉客户,原来缓冲的文档还可以继续使用
//则使用缓存的响应
Response response = cacheResponse.newBuilder()
.headers(combine(cacheResponse.headers(), networkResponse.headers()))
.sentRequestAtMillis(networkResponse.sentRequestAtMillis())
.receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
networkResponse.body().close();

// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
cache.trackConditionalCacheHit();
cache.update(cacheResponse, response);
return response;
} else {
closeQuietly(cacheResponse.body());
}
}

//use the network response
Response response = networkResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();

//so an OkHttpClient created with defaults has no cache at all
if (cache != null) {
//write the response into the cache
if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
// Offer this request to the cache.
//cache the Response's header information
CacheRequest cacheRequest = cache.put(response);
//cache the body
return cacheWritingResponse(cacheRequest, response);
}
//only GET is cacheable; other methods invalidate the cached request
if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
cache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
}

return response;
}

Let's start with the first line of the method:

Response cacheCandidate = cache != null
? cache.get(chain.request())
: null;

The point here is how the cache lookup works. The genuinely hard parts of CacheInterceptor boil down to three things: reading from the cache, the cache strategy, and writing to the cache.

public final class CacheInterceptor implements Interceptor {
final InternalCache cache;

public CacheInterceptor(InternalCache cache) {
this.cache = cache;
}
}

//======================RealCall.java======================
interceptors.add(new CacheInterceptor(client.internalCache()));

First, note that the cache field here is of type InternalCache, assigned in the constructor. From RealCall you can also see that the interceptor is constructed with the internalCache configured on the OkHttpClient; when the client is built with defaults, no cache is created, i.e. internalCache == null.

/**
* OkHttp's internal cache interface. Applications shouldn't implement this: instead use {@link
* okhttp3.Cache}.
*/
public interface InternalCache {
Response get(Request request) throws IOException;

CacheRequest put(Response response) throws IOException;

/**
* Remove any cache entries for the supplied {@code request}. This is invoked when the client
* invalidates the cache, such as when making POST requests.
*/
void remove(Request request) throws IOException;

/**
* Handles a conditional request hit by updating the stored cache response with the headers from
* {@code network}. The cached response body is not updated. If the stored response has changed
* since {@code cached} was returned, this does nothing.
*/
void update(Response cached, Response network);

/** Track an conditional GET that was satisfied by this cache. */
void trackConditionalCacheHit();

/** Track an HTTP response being satisfied with {@code cacheStrategy}. */
void trackResponse(CacheStrategy cacheStrategy);
}

As expected, InternalCache is an interface; OkHttp is thoroughly interface-oriented. Searching OkHttp for what implements, or rather uses, this interface leads to the Cache class.

public final class Cache implements Closeable, Flushable {
...
final InternalCache internalCache = new InternalCache() {
@Override public Response get(Request request) throws IOException {
return Cache.this.get(request);
}
...
};
...
Response get(Request request) {
String key = key(request.url());
DiskLruCache.Snapshot snapshot;
Entry entry;
try {
snapshot = cache.get(key);
if (snapshot == null) {
//cache miss: return null
return null;
}
} catch (IOException e) {
// Give up because the cache cannot be read.
return null;
}

try {
//build an Entry; what is actually passed in is the first element of the cleanFiles array (ENTRY_METADATA = 0),
//i.e. the header information, the file named key.0
entry = new Entry(snapshot.getSource(ENTRY_METADATA));
} catch (IOException e) {
Util.closeQuietly(snapshot);
return null;
}
//build the response from the cached entry
Response response = entry.response(snapshot);

if (!entry.matches(request, response)) {
Util.closeQuietly(response.body());
return null;
}

return response;
}
...
}

As you can see, Cache implements the InternalCache interface, and the interface's get() delegates to Cache's own get().

String key = key(request.url());

The cache key is derived directly from the request's url; here the url is turned into the key.
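For reference, OkHttp 3 implements key() as `ByteString.encodeUtf8(url.toString()).md5().hex()`: the key is the hex-encoded MD5 of the URL string. A standalone sketch using java.security instead of Okio (the class name is mine):

```java
import java.math.BigInteger;
import java.security.MessageDigest;

// Sketch of Cache.key(): hex-encoded MD5 of the URL string.
public class CacheKey {
    static String key(String url) throws Exception {
        byte[] md5 = MessageDigest.getInstance("MD5").digest(url.getBytes("UTF-8"));
        // zero-pad to 32 hex chars, as ByteString.hex() would
        return String.format("%032x", new BigInteger(1, md5));
    }

    public static void main(String[] args) throws Exception {
        String k = key("https://www.baidu.com/");
        System.out.println(k.length()); // 32
    }
}
```

Because the key is a pure hash of the URL, two requests for the same URL always map to the same cache entry.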

final DiskLruCache cache;
==============================
DiskLruCache.Snapshot snapshot;
Entry entry;
try {
snapshot = cache.get(key);
if (snapshot == null) {
return null;
}
} catch (IOException e) {
// Give up because the cache cannot be read.
return null;
}

The variables are confusing at first. Once the key is obtained, the code calls cache.get(); the first thing to understand is that this cache field is a DiskLruCache.

/**
* Returns a snapshot of the entry named {@code key}, or null if it doesn't exist is not currently
* readable. If a value is returned, it is moved to the head of the LRU queue.
*/
public synchronized Snapshot get(String key) throws IOException {
initialize();//in short: maintain the journal file; if it exists, strip redundant records and rebuild it, otherwise create a new one

checkNotClosed();//make sure the cache isn't closed; a corrupted cache gets closed
validateKey(key);//check the key against the required format (a regex)
Entry entry = lruEntries.get(key);//look up the Entry for this key
if (entry == null || !entry.readable) return null;

Snapshot snapshot = entry.snapshot();//take a snapshot of the Entry
if (snapshot == null) return null;

redundantOpCount++;//got one: bump the op counter
journalWriter.writeUtf8(READ).writeByte(' ').writeUtf8(key).writeByte('\n');//record this READ in the journal
if (journalRebuildRequired()) {//check whether the journal needs compacting
executor.execute(cleanupRunnable);
}

return snapshot;
}

Inside DiskLruCache, the first thing executed is initialize().

public synchronized void initialize() throws IOException {
assert Thread.holdsLock(this); //continue only while holding this object's lock; throw otherwise

if (initialized) {
return; // Already initialized.
}

// If a bkp file exists, use it instead.
if (fileSystem.exists(journalFileBackup)) {
// If journal file also exists just delete backup file.
if (fileSystem.exists(journalFile)) {
fileSystem.delete(journalFileBackup);
} else {
fileSystem.rename(journalFileBackup, journalFile);
}
}//after this check only two outcomes remain: 1. no files at all; 2. a journalFile exists

// Prefer to pick up where we left off.
if (fileSystem.exists(journalFile)) {//a journalFile exists: work with it
try {
readJournal();
processJournal();
initialized = true;
return;
} catch (IOException journalIsCorrupt) {
Platform.get().log(WARN, "DiskLruCache " + directory + " is corrupt: "
+ journalIsCorrupt.getMessage() + ", removing", journalIsCorrupt);
}

// The cache is corrupted, attempt to delete the contents of the directory. This can throw and
// we'll let that propagate out as it likely means there is a severe filesystem problem.
try {
delete();//reaching here means corrupt cache data threw; delete everything under the cache directory
} finally {
closed = false;
}
}

rebuildJournal();//no journalFile: build a fresh one

initialized = true;//mark initialization complete; with or without a journalFile, initialized becomes true, so this runs only once
}

First, journalFile is a log file: a record of the operations performed on the cache, and it does not affect the cache's execution flow. There are two files here, journalFile and journalFileBackup; as the names suggest, one is the working log and the other a backup. As the analysis proceeds you will see the cache make full use of this pair: one file for stable storage, one for editing.

When journalFile exists, readJournal() is executed to read it.

private void readJournal() throws IOException {
BufferedSource source = Okio.buffer(fileSystem.source(journalFile)); //read journalFile with Okio
try {
String magic = source.readUtf8LineStrict();
String version = source.readUtf8LineStrict();
String appVersionString = source.readUtf8LineStrict();
String valueCountString = source.readUtf8LineStrict();
String blank = source.readUtf8LineStrict();
//make sure the header matches the expected constants
if (!MAGIC.equals(magic)
|| !VERSION_1.equals(version)
|| !Integer.toString(appVersion).equals(appVersionString)
|| !Integer.toString(valueCount).equals(valueCountString)
|| !"".equals(blank)) {
throw new IOException("unexpected journal header: [" + magic + ", " + version + ", "
+ valueCountString + ", " + blank + "]");
}

int lineCount = 0;
while (true) {
try {
//read line by line; the leading token of each line selects the action, mostly an add to or remove from lruEntries
readJournalLine(source.readUtf8LineStrict());
lineCount++;
} catch (EOFException endOfJournal) {
break;
}
}
//redundant op records = total lines - entries actually added to lruEntries
redundantOpCount = lineCount - lruEntries.size();

// If we ended on a truncated line, rebuild the journal before appending to it.
if (!source.exhausted()) {//exhausted() is true when no bytes remain, false when trailing bytes exist
rebuildJournal();
} else {
journalWriter = newJournalWriter();//open a Sink on the file for later writes
}
} finally {
Util.closeQuietly(source);
}
}

Note the use of Okio, the I/O library OkHttp depends on. Internally it heavily optimizes stream handling, reading and writing in segments and pooling those segments; its internals are worth studying separately.

Now look at readJournalLine():

private void readJournalLine(String line) throws IOException {
int firstSpace = line.indexOf(' ');//index of the first space
if (firstSpace == -1) {
throw new IOException("unexpected journal line: " + line);
}

int keyBegin = firstSpace + 1;
int secondSpace = line.indexOf(' ', keyBegin);//index of the second space
final String key;
if (secondSpace == -1) {//no second space: everything after the first space is the key
key = line.substring(keyBegin);
if (firstSpace == REMOVE.length() && line.startsWith(REMOVE)) {//a line like "REMOVE sjkafjlasj", starting with REMOVE
lruEntries.remove(key);//remove this key; lruEntries is a LinkedHashMap
return;
}
} else {
//the key is the text between the two spaces
key = line.substring(keyBegin, secondSpace);
}

Entry entry = lruEntries.get(key);//look up the Entry
if (entry == null) {
//no Entry yet: create one and put it into the map under the key
entry = new Entry(key);
lruEntries.put(key, entry);
}

if (secondSpace != -1 && firstSpace == CLEAN.length() && line.startsWith(CLEAN)) {
//a line like "CLEAN 1 2", starting with CLEAN
//take the text after the second space; parts becomes [1,2]
String[] parts = line.substring(secondSpace + 1).split(" ");
entry.readable = true;//readable
entry.currentEditor = null;//not being edited
entry.setLengths(parts);//record the lengths
} else if (secondSpace == -1 && firstSpace == DIRTY.length() && line.startsWith(DIRTY)) {
//a line like "DIRTY dsfjkfj", starting with DIRTY: attach a new Editor
entry.currentEditor = new Editor(entry);
} else if (secondSpace == -1 && firstSpace == READ.length() && line.startsWith(READ)) {
// This work was already done by calling lruEntries.get().
//a line like "READ sfkjskf", starting with READ: nothing more to do
} else {
throw new IOException("unexpected journal line: " + line);
}
}

This can be hard to follow at first, so here is the format of a journal line:

CLEAN sdkjlg 2341 1234

The token before the first space names the operation; after it comes the key; the trailing numbers, present only on CLEAN lines, are the lengths of that entry's cached files. A line with no second space looks like REMOVE sdjkhf: everything after the first space is the key, and a REMOVE line removes that key's entry from lruEntries. When there is a second space, as above, the key is the text between the first and second spaces. For a line such as CLEAN jsdf 2 5, the numbers after the key are split into an array, and the entry is marked readable and not editable, with its lengths set. A quick word on the Entry class first:

private final class Entry {
final String key;

/** Lengths of this entry's files. */
final long[] lengths;
final File[] cleanFiles;//holds persisted data, used for reading; final name format: key.0
final File[] dirtyFiles;//holds temporary edit data, used for writing; final name format: key.0.tmp
...
}

Entry holds two file arrays. cleanFiles stores persisted data and is used for reading; dirtyFiles is used for editing, and when an edit completes, commit promotes the dirtyFiles into cleanFiles. lengths records the size of the file behind each slot of the arrays.

So for a line CLEAN jklsd 2 5, the 2 and 5 are the recorded byte lengths of the entry's two clean files (valueCount defaults to 2: one file for the headers, one for the body).

The remaining branches are analogous. That wraps up readJournalLine(); in summary, it reads the journal line by line and, depending on each line's leading token, performs the matching operation, mostly adding entries to or removing them from lruEntries. Now back to readJournal().

while (true) {
try {
//read line by line; the leading token of each line selects the action, mostly an add to or remove from lruEntries
readJournalLine(source.readUtf8LineStrict());
lineCount++;
} catch (EOFException endOfJournal) {
break;
}
}

Here lineCount counts the number of lines read.

//redundant op records = total lines - entries actually added to lruEntries
redundantOpCount = lineCount - lruEntries.size();
//source.exhausted() reports whether trailing bytes remain: true means none, false means some are left
// If we ended on a truncated line, rebuild the journal before appending to it.
if (!source.exhausted()) {
//trailing bytes mean a truncated line: rebuild the journal file
rebuildJournal();
} else {
//open a Sink on the file for later writes
journalWriter = newJournalWriter();
}

After reading, the count of redundant journal operations is computed: redundant ops = total lines read - entries actually held in lruEntries.

Then source.exhausted() reports whether trailing bytes remain (true means none, false means some are left). If there are leftover bytes, rebuildJournal() must run; otherwise a Sink on the file is obtained for writing.

 /**
* Creates a new journal that omits redundant information. This replaces the current journal if it
* exists.
*/
synchronized void rebuildJournal() throws IOException {
if (journalWriter != null) {
journalWriter.close();
}

BufferedSink writer = Okio.buffer(fileSystem.sink(journalFileTmp));
try {
//write the verification header
writer.writeUtf8(MAGIC).writeByte('\n');
writer.writeUtf8(VERSION_1).writeByte('\n');
writer.writeDecimalLong(appVersion).writeByte('\n');
writer.writeDecimalLong(valueCount).writeByte('\n');
writer.writeByte('\n');
//rebuild each entry just read, in journal format
for (Entry entry : lruEntries.values()) {
if (entry.currentEditor != null) {
writer.writeUtf8(DIRTY).writeByte(' ');
writer.writeUtf8(entry.key);
writer.writeByte('\n');
} else {
writer.writeUtf8(CLEAN).writeByte(' ');
writer.writeUtf8(entry.key);
entry.writeLengths(writer);
writer.writeByte('\n');
}
}
} finally {
writer.close();
}
//swap the freshly built journalFileTmp in for the current journalFile
if (fileSystem.exists(journalFile)) {
fileSystem.rename(journalFile, journalFileBackup);
}
fileSystem.rename(journalFileTmp, journalFile);
fileSystem.delete(journalFileBackup);

journalWriter = newJournalWriter();
hasJournalErrors = false;
mostRecentRebuildFailed = false;
}

That completes readJournal(). In summary: it reads journalFile, filters useless, redundant records based on the logged information, rebuilds the file when redundancy is found, and thus guarantees journalFile ends up free of redundant entries.

With readJournal() done, back to initialize().

// Prefer to pick up where we left off.
if (fileSystem.exists(journalFile)) {
try {
readJournal();
processJournal();
initialized = true;
return;
} catch (IOException journalIsCorrupt) {
Platform.get().log(WARN, "DiskLruCache " + directory + " is corrupt: "
+ journalIsCorrupt.getMessage() + ", removing", journalIsCorrupt);
}

// The cache is corrupted, attempt to delete the contents of the directory. This can throw and
// we'll let that propagate out as it likely means there is a severe filesystem problem.
try {
delete();
} finally {
closed = false;
}
}

readJournal()->processJournal()

/**
* Computes the initial size and collects garbage as a part of opening the cache. Dirty entries
* are assumed to be inconsistent and will be deleted.
*/
private void processJournal() throws IOException {
fileSystem.delete(journalFileTmp);//delete the journalFileTmp file
for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) {
Entry entry = i.next();
if (entry.currentEditor == null) {//a CLEAN entry: accumulate its lengths into size
for (int t = 0; t < valueCount; t++) {
size += entry.lengths[t];
}
} else {//a DIRTY entry: delete it
entry.currentEditor = null;
for (int t = 0; t < valueCount; t++) {
fileSystem.delete(entry.cleanFiles[t]);
fileSystem.delete(entry.dirtyFiles[t]);
}
i.remove();
}
}
}

Here the journalFileTmp file is deleted and lruEntries is traversed, summing the sizes of the non-editable (CLEAN) entries and deleting the DIRTY ones. In other words, only persisted CLEAN data survives; in-progress edit data is discarded.

After processJournal(), initialized is set to true and initialization is complete. Let's finish walking through initialize().

...
// The cache is corrupted, attempt to delete the contents of the directory. This can throw and
// we'll let that propagate out as it likely means there is a severe filesystem problem.
try {
delete();
} finally {
closed = false;
}
}

rebuildJournal();

initialized = true;

The rest is simple: when there is no journalFile, rebuildJournal(), analyzed above, creates a fresh one, and initialized is again set to true so the whole thing runs only once. That finishes initialize(); to summarize:

  • The method is thread-safe
  • If already initialized it does nothing; initialization happens only once
  • If a journalFile exists, it initializes the journal and lruEntries from it, mainly removing redundant records and DIRTY entries
  • If not, it builds a new journalFile
/**
* Returns a snapshot of the entry named {@code key}, or null if it doesn't exist is not currently
* readable. If a value is returned, it is moved to the head of the LRU queue.
*/
public synchronized Snapshot get(String key) throws IOException {
initialize();

checkNotClosed();
validateKey(key);
Entry entry = lruEntries.get(key);
if (entry == null || !entry.readable) return null;

Snapshot snapshot = entry.snapshot();
if (snapshot == null) return null;

redundantOpCount++;
journalWriter.writeUtf8(READ).writeByte(' ').writeUtf8(key).writeByte('\n');
if (journalRebuildRequired()) {
executor.execute(cleanupRunnable);
}

return snapshot;
}

With initialize() finally covered, back to get(). Its main steps:

  • Initialize the journal file and lruEntries
  • After validating the key, fetch the Entry held in the cache
  • Increment the operation counter
  • Write this READ operation into the journal file
  • Use redundantOpCount to decide whether the journal needs cleaning
  • If it does, run the cleanup on a background thread
  • Either way, return the cached snapshot
/**
* We only rebuild the journal when it will halve the size of the journal and eliminate at least
* 2000 ops.
*/
boolean journalRebuildRequired() {
final int redundantOpCompactThreshold = 2000;
return redundantOpCount >= redundantOpCompactThreshold
&& redundantOpCount >= lruEntries.size();
}

The cleanup condition: redundantOpCount has reached 2000 and is also at least the number of entries in the LinkedHashMap.
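The predicate is small enough to restate standalone, which makes the two-part threshold easy to check (the class name is mine):

```java
// Same condition as DiskLruCache.journalRebuildRequired(), with the
// instance state passed in explicitly.
public class JournalPolicy {
    static final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000;

    static boolean journalRebuildRequired(int redundantOpCount, int entryCount) {
        return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD
                && redundantOpCount >= entryCount;
    }

    public static void main(String[] args) {
        System.out.println(journalRebuildRequired(1999, 10));   // false: below 2000
        System.out.println(journalRebuildRequired(2500, 3000)); // false: fewer ops than entries
        System.out.println(journalRebuildRequired(2500, 100));  // true
    }
}
```

The second clause is what the javadoc means by "halve the size": only rebuild when redundant records make up at least half the journal.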

private final Runnable cleanupRunnable = new Runnable() {
public void run() {
synchronized (DiskLruCache.this) {
if (!initialized | closed) {//nothing to clean when not initialized or already closed; note | vs ||: | evaluates both operands
return; // Nothing to do
}

try {
trimToSize();//evict entries
} catch (IOException ignored) {
mostRecentTrimFailed = true;
}

try {
if (journalRebuildRequired()) {//still needs cleaning: rebuild
rebuildJournal();
redundantOpCount = 0;//reset the counter
}
} catch (IOException e) {
mostRecentRebuildFailed = true;//an exception here marks the most recent rebuild as failed
journalWriter = Okio.buffer(Okio.blackhole());
}
}
}
};

First, a summary of this cleanup flow:

  1. If not yet initialized, or the cache is closed, don't clean
  2. Run the eviction
  3. If a rebuild is still required afterwards, rebuild the journal file and reset the op counter to 0.

The part that matters here is the eviction itself, trimToSize():

void trimToSize() throws IOException {
while (size > maxSize) {
Entry toEvict = lruEntries.values().iterator().next();
removeEntry(toEvict);
}
mostRecentTrimFailed = false;
}
boolean removeEntry(Entry entry) throws IOException {
if (entry.currentEditor != null) {
//detach the in-flight editor
entry.currentEditor.detach(); // Prevent the edit from completing normally.
}

for (int i = 0; i < valueCount; i++) {
//delete the cleanFiles that store the entry's data
fileSystem.delete(entry.cleanFiles[i]);
size -= entry.lengths[i];
entry.lengths[i] = 0;
}

redundantOpCount++;//counter +1
journalWriter.writeUtf8(REMOVE).writeByte(' ').writeUtf8(entry.key).writeByte('\n');//append a REMOVE record to the journal
lruEntries.remove(entry.key);//remove the Entry

if (journalRebuildRequired()) {//edge case: the line just appended may itself warrant another cleanup
executor.execute(cleanupRunnable);//clean up
}

return true;
}

The flow here:

  1. Stop any in-flight edit
  2. Delete the cleanFiles holding the entry's data
  3. Append a REMOVE record to the journal; counter +1
  4. Remove the Entry for that key
  5. Because a journal line was just added, re-check whether cleanup is needed; otherwise the journal could keep growing while being cleaned

With that, DiskLruCache's get() is finally done; next, back to get() in Cache:

Response get(Request request) {
String key = key(request.url());
DiskLruCache.Snapshot snapshot;
Entry entry;
try {
snapshot = cache.get(key);
if (snapshot == null) {
return null;//cache miss: return null
}
} catch (IOException e) {
// Give up because the cache cannot be read.
return null;
}

try {
//build an Entry; what is passed in is the first element of the cleanFiles array (ENTRY_METADATA = 0), i.e. the header information, the file key.0
entry = new Entry(snapshot.getSource(ENTRY_METADATA));
} catch (IOException e) {
Util.closeQuietly(snapshot);
return null;
}

Response response = entry.response(snapshot);//build the response from the cached entry

if (!entry.matches(request, response)) {
Util.closeQuietly(response.body());
return null;
}

return response;
}

Here an Entry is built from snapshot.getSource() (this Entry is Cache's inner class, not DiskLruCache's).

public final class Snapshot implements Closeable {
private final String key;
private final long sequenceNumber;
private final Source[] sources;
private final long[] lengths;

Snapshot(String key, long sequenceNumber, Source[] sources, long[] lengths) {
this.key = key;
this.sequenceNumber = sequenceNumber;
this.sources = sources;
this.lengths = lengths;
}
public Source getSource(int index) {
return sources[index];
}
}

getSource() simply returns an element of the Source array, which was assigned in Snapshot's constructor.

Snapshot snapshot() {
if (!Thread.holdsLock(DiskLruCache.this)) throw new AssertionError();

Source[] sources = new Source[valueCount];
long[] lengths = this.lengths.clone(); // Defensive copy since these can be zeroed out.
try {
for (int i = 0; i < valueCount; i++) {
//note: this hands the cleanFiles over to sources
sources[i] = fileSystem.source(cleanFiles[i]);
}
return new Snapshot(key, sequenceNumber, sources, lengths);
} catch (FileNotFoundException e) {
// A file must have been deleted manually!
for (int i = 0; i < valueCount; i++) {
if (sources[i] != null) {
Util.closeQuietly(sources[i]);
} else {
break;
}
}
// Since the entry is no longer valid, remove it so the metadata is accurate (i.e. the cache
// size.)
try {
removeEntry(this);
} catch (IOException ignored) {
}
return null;
}
}

Notice that this is the method invoked on the Snapshot ultimately returned by DiskLruCache's get() (quoted below). The sources array is populated from the entry's cleanFiles, which confirms what was said earlier: the clean files hold the persisted data, i.e. they are where things are really stored. And since ENTRY_METADATA = 0, reading the metadata takes the first clean file, confirming the split mentioned before: the first file stores the headers and the second stores the body.

public synchronized Snapshot get(String key) throws IOException {
...
Snapshot snapshot = entry.snapshot();
...
}

Next, look at Cache's inner class Entry; note this is not the Entry inside DiskLruCache:

public Entry(Source in) throws IOException {
try {
BufferedSource source = Okio.buffer(in);
url = source.readUtf8LineStrict();
requestMethod = source.readUtf8LineStrict();
//cleanFiles[0] is read to build the header information
Headers.Builder varyHeadersBuilder = new Headers.Builder();
int varyRequestHeaderLineCount = readInt(source);
for (int i = 0; i < varyRequestHeaderLineCount; i++) {
varyHeadersBuilder.addLenient(source.readUtf8LineStrict());
}
varyHeaders = varyHeadersBuilder.build();

StatusLine statusLine = StatusLine.parse(source.readUtf8LineStrict());
protocol = statusLine.protocol;
code = statusLine.code;
message = statusLine.message;
Headers.Builder responseHeadersBuilder = new Headers.Builder();
int responseHeaderLineCount = readInt(source);
for (int i = 0; i < responseHeaderLineCount; i++) {
responseHeadersBuilder.addLenient(source.readUtf8LineStrict());
}
String sendRequestMillisString = responseHeadersBuilder.get(SENT_MILLIS);
String receivedResponseMillisString = responseHeadersBuilder.get(RECEIVED_MILLIS);
responseHeadersBuilder.removeAll(SENT_MILLIS);
responseHeadersBuilder.removeAll(RECEIVED_MILLIS);
sentRequestMillis = sendRequestMillisString != null
? Long.parseLong(sendRequestMillisString)
: 0L;
receivedResponseMillis = receivedResponseMillisString != null
? Long.parseLong(receivedResponseMillisString)
: 0L;
responseHeaders = responseHeadersBuilder.build();

if (isHttps()) {
String blank = source.readUtf8LineStrict();
if (blank.length() > 0) {
throw new IOException("expected \"\" but was \"" + blank + "\"");
}
String cipherSuiteString = source.readUtf8LineStrict();
CipherSuite cipherSuite = CipherSuite.forJavaName(cipherSuiteString);
List<Certificate> peerCertificates = readCertificateList(source);
List<Certificate> localCertificates = readCertificateList(source);
TlsVersion tlsVersion = !source.exhausted()
? TlsVersion.forJavaName(source.readUtf8LineStrict())
: null;
handshake = Handshake.get(tlsVersion, cipherSuite, peerCertificates, localCertificates);
} else {
handshake = null;
}
} finally {
in.close();
}
}

The Entry constructor makes it even clearer that the first clean file stores the header information: from the code you can see a Headers.Builder being fed from the incoming Source (i.e. cleanFiles[0]), with build() finally producing the headers.

With the headers built, the next thing to find is where the body is constructed, since the only code left is:

Response response = entry.response(snapshot);
public Response response(DiskLruCache.Snapshot snapshot) {
String contentType = responseHeaders.get("Content-Type");
String contentLength = responseHeaders.get("Content-Length");
Request cacheRequest = new Request.Builder()
.url(url)
.method(requestMethod, null)
.headers(varyHeaders)
.build();
return new Response.Builder()
.request(cacheRequest)
.protocol(protocol)
.code(code)
.message(message)
.headers(responseHeaders)
.body(new CacheResponseBody(snapshot, contentType, contentLength))
.handshake(handshake)
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(receivedResponseMillis)
.build();
}

This method essentially rebuilds the cached Response with Response.Builder, but there is no obvious place where the body is read; the only body-related line is .body(new CacheResponseBody(snapshot, contentType, contentLength)).

public CacheResponseBody(final DiskLruCache.Snapshot snapshot,
String contentType, String contentLength) {
this.snapshot = snapshot;
this.contentType = contentType;
this.contentLength = contentLength;
//ENTRY_BODY = 1 here; again taken from the cleanFiles array, used to build the ResponseBody
Source source = snapshot.getSource(ENTRY_BODY);
bodySource = Okio.buffer(new ForwardingSource(source) {
@Override public void close() throws IOException {
snapshot.close();
super.close();
}
});
}

By now it is clear: index 0 of the clean files stores the Header information, and index 1 stores the Body.
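Putting the pieces together, a single cached entry occupies the cache directory roughly like this (the key shown is hypothetical):

```
journal                                  log of READ/DIRTY/CLEAN/REMOVE operations
3400330d1dfc7f3f7f4b8d4d803dfcf6.0       response headers + metadata (ENTRY_METADATA = 0)
3400330d1dfc7f3f7f4b8d4d803dfcf6.1       response body (ENTRY_BODY = 1)
```

During an edit the same data is staged in matching .0.tmp and .1.tmp dirty files until the editor commits.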

With header and body reconstruction covered, the flow of reading a cached Response out of the cache is finally complete. Here is the full intercept() method again:

@Override public Response intercept(Chain chain) throws IOException {
//cache is null by default; when one is configured, try to fetch the cached response
Response cacheCandidate = cache != null
? cache.get(chain.request())
: null;

long now = System.currentTimeMillis();
//build a cache strategy from the response, the time, and the request; it decides how the cache is used
CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;

if (cache != null) {
cache.trackResponse(strategy);
}

if (cacheCandidate != null && cacheResponse == null) {
closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}

// If we're forbidden from using the network and the cache is insufficient, fail.
//if the strategy forbids the network and the cache is empty, build and return a Response directly; note the 504 code
if (networkRequest == null && cacheResponse == null) {
return new Response.Builder()
.request(chain.request())
.protocol(Protocol.HTTP_1_1)
.code(504)
.message("Unsatisfiable Request (only-if-cached)")
.body(Util.EMPTY_RESPONSE)
.sentRequestAtMillis(-1L)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
}

// If we don't need the network, we're done.
//network not needed but a cached response exists: return the cache directly
if (networkRequest == null) {
return cacheResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.build();
}

Response networkResponse = null;
try {
//hand off to the rest of the interceptor chain
networkResponse = chain.proceed(networkRequest);
} finally {
// If we're crashing on I/O or otherwise, don't leak the cache body.
if (networkResponse == null && cacheCandidate != null) {
closeQuietly(cacheCandidate.body());
}
}

// If we have a cache response too, then we're doing a conditional get.
//when both a cached response and a network response exist, decide which one to use
if (cacheResponse != null) {
if (networkResponse.code() == HTTP_NOT_MODIFIED) {
//304: the client holds a cached copy and made a conditional request (typically with an
//If-Modified-Since header asking only for content newer than a given date); the server
//says the cached copy is still usable, so use the cached response
Response response = cacheResponse.newBuilder()
.headers(combine(cacheResponse.headers(), networkResponse.headers()))
.sentRequestAtMillis(networkResponse.sentRequestAtMillis())
.receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
networkResponse.body().close();

// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
cache.trackConditionalCacheHit();
cache.update(cacheResponse, response);
return response;
} else {
closeQuietly(cacheResponse.body());
}
}
//use the network response
Response response = networkResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
//so an OkHttpClient created with defaults has no cache at all
if (cache != null) {
//write the response into the cache
if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
// Offer this request to the cache.
//cache the Response's header information
CacheRequest cacheRequest = cache.put(response);
//cache the body
return cacheWritingResponse(cacheRequest, response);
}
//only GET is cacheable; other methods invalidate the cached request
if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
cache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
}

return response;
}

Now the main flow of CacheInterceptor can be summarized:

  1. Try to fetch a cached response from Cache via the Request (itself a long journey), provided the OkHttpClient was configured with a cache, which is not the case by default
  2. Build a cache strategy from the response, the time, and the request; it decides how the cache is used
  3. If the strategy forbids the network and there is no cached response, build and return a Response directly, with code 504
  4. If the strategy forbids the network but a cached response exists, return the cache directly
  5. Otherwise continue down the chain of interceptors: chain.proceed(networkRequest)
  6. When a cached response exists and the network returns 304, use the cached Response
  7. Build the Response from the network result
  8. When the OkHttpClient has a cache configured, store this Response into it
  9. Storage happens in two steps: headers first, then the body
  10. Return the Response
if (cache != null) {
//write the response into the cache
if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
// Offer this request to the cache.
//cache the Response's header information
CacheRequest cacheRequest = cache.put(response);
//cache the body
return cacheWritingResponse(cacheRequest, response);
}
//only GET is cacheable; other methods invalidate the cached request
if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
cache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
}

When the response is cacheable, cache.put(response) is used.

CacheRequest put(Response response) {
String requestMethod = response.request().method();

if (HttpMethod.invalidatesCache(response.request().method())) {
//OkHttp can only cache GET requests!
try {
remove(response.request());
} catch (IOException ignored) {
// The cache cannot be written.
}
return null;
}
if (!requestMethod.equals("GET")) {
//OkHttp can only cache GET requests!
// Don't cache non-GET responses. We're technically allowed to cache
// HEAD requests and some POST requests, but the complexity of doing
// so is high and the benefit is low.
return null;
}

if (HttpHeaders.hasVaryAll(response)) {
return null;
}

Entry entry = new Entry(response);
DiskLruCache.Editor editor = null;
try {
editor = cache.edit(key(response.request().url()));
if (editor == null) {
return null;
}
entry.writeTo(editor);//writes the Header information to the cache
return new CacheRequestImpl(editor);
} catch (IOException e) {
abortQuietly(editor);
return null;
}
}

For efficiency, OkHttp currently supports caching only GET responses. (If you really need more, the change amounts to editing the source and removing this check.)
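For completeness, HttpMethod.invalidatesCache simply checks the request method against a fixed list; restated standalone (to my knowledge these are the five methods in OkHttp 3, but treat the exact list as an assumption):

```java
// Methods that cause a cached entry for the URL to be removed rather than served.
public class HttpMethodCheck {
    static boolean invalidatesCache(String method) {
        return method.equals("POST")
                || method.equals("PATCH")
                || method.equals("PUT")
                || method.equals("DELETE")
                || method.equals("MOVE");
    }

    public static void main(String[] args) {
        System.out.println(invalidatesCache("GET"));  // false
        System.out.println(invalidatesCache("POST")); // true
    }
}
```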

//writes the Header information to the cache
entry.writeTo(editor);
//Entry's writeTo method===============================
public void writeTo(DiskLruCache.Editor editor) throws IOException {
//write the header info into a dirty file; ENTRY_METADATA = 0, so this is dirtyFiles[0]
BufferedSink sink = Okio.buffer(editor.newSink(ENTRY_METADATA));

sink.writeUtf8(url)
.writeByte('\n');
sink.writeUtf8(requestMethod)
.writeByte('\n');
sink.writeDecimalLong(varyHeaders.size())
.writeByte('\n');
for (int i = 0, size = varyHeaders.size(); i < size; i++) {
sink.writeUtf8(varyHeaders.name(i))
.writeUtf8(": ")
.writeUtf8(varyHeaders.value(i))
.writeByte('\n');
}

sink.writeUtf8(new StatusLine(protocol, code, message).toString())
.writeByte('\n');
sink.writeDecimalLong(responseHeaders.size() + 2)
.writeByte('\n');
for (int i = 0, size = responseHeaders.size(); i < size; i++) {
sink.writeUtf8(responseHeaders.name(i))
.writeUtf8(": ")
.writeUtf8(responseHeaders.value(i))
.writeByte('\n');
}
sink.writeUtf8(SENT_MILLIS)
.writeUtf8(": ")
.writeDecimalLong(sentRequestMillis)
.writeByte('\n');
sink.writeUtf8(RECEIVED_MILLIS)
.writeUtf8(": ")
.writeDecimalLong(receivedResponseMillis)
.writeByte('\n');

if (isHttps()) {
sink.writeByte('\n');
sink.writeUtf8(handshake.cipherSuite().javaName())
.writeByte('\n');
writeCertList(sink, handshake.peerCertificates());
writeCertList(sink, handshake.localCertificates());
sink.writeUtf8(handshake.tlsVersion().javaName()).writeByte('\n');
}
sink.close();
}

So this writes the header information into dirtyFiles[0]; shortly, index 1 will receive the body.

//writes the Header information to the cache
entry.writeTo(editor);
return new CacheRequestImpl(editor);

After the headers are written, the search for the body write continues. Note the CacheRequestImpl object returned here; miss it and you will never find where the body is written.

public CacheRequestImpl(final DiskLruCache.Editor editor) {
  this.editor = editor;
  this.cacheOut = editor.newSink(ENTRY_BODY); // ENTRY_BODY = 1
  this.body = new ForwardingSink(cacheOut) {
    @Override public void close() throws IOException {
      synchronized (Cache.this) {
        if (done) {
          return;
        }
        done = true;
        writeSuccessCount++;
      }
      super.close();
      editor.commit();
    }
  };
}

Seeing ENTRY_BODY is reassuring: ENTRY_BODY = 1, i.e. the second slot of the array, so this sink is the body's destination. Now that we have the destination, we still need to find where the actual write happens. Also note the editor.commit() call here.
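The key trick in CacheRequestImpl is the ForwardingSink whose close() triggers editor.commit(). The same pattern can be sketched with plain java.io instead of Okio (names here are illustrative): a stream wrapper that runs a commit action exactly once, when the stream is closed.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CommitOnCloseSketch {

    // Wraps an OutputStream; the commit action fires once, on close,
    // mirroring ForwardingSink.close() -> editor.commit() in CacheRequestImpl.
    static OutputStream commitOnClose(OutputStream out, Runnable commit) {
        return new FilterOutputStream(out) {
            private boolean done;
            @Override public void close() throws IOException {
                if (done) return;     // like the 'done' flag in CacheRequestImpl
                done = true;
                super.close();
                commit.run();         // corresponds to editor.commit()
            }
        };
    }

    public static void main(String[] args) throws IOException {
        boolean[] committed = { false };
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        OutputStream sink = commitOnClose(body, () -> committed[0] = true);
        sink.write("body bytes".getBytes());
        assert !committed[0];         // nothing is committed while writing
        sink.close();
        assert committed[0];          // commit happens exactly once, on close
        assert body.toString().equals("body bytes");
    }
}
```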

// In CacheInterceptor ==========================
// Cache the response's header metadata
CacheRequest cacheRequest = cache.put(response);
// Cache the body
return cacheWritingResponse(cacheRequest, response);
private Response cacheWritingResponse(final CacheRequest cacheRequest, Response response)
    throws IOException {
  // Some apps return a null body; for compatibility we treat that like a null cache request.
  if (cacheRequest == null) return response;
  Sink cacheBodyUnbuffered = cacheRequest.body();
  if (cacheBodyUnbuffered == null) return response;
  // Get the network body's source
  final BufferedSource source = response.body().source();
  final BufferedSink cacheBody = Okio.buffer(cacheBodyUnbuffered);

  Source cacheWritingSource = new Source() {
    boolean cacheRequestClosed;

    @Override public long read(Buffer sink, long byteCount) throws IOException {
      long bytesRead;
      try {
        bytesRead = source.read(sink, byteCount);
      } catch (IOException e) {
        if (!cacheRequestClosed) {
          cacheRequestClosed = true;
          cacheRequest.abort(); // Failed to write a complete cache response.
        }
        throw e;
      }

      if (bytesRead == -1) {
        if (!cacheRequestClosed) {
          cacheRequestClosed = true;
          cacheBody.close(); // The cache response is complete!
        }
        return -1;
      }
      // Each read also copies the bytes just read into the cache body
      sink.copyTo(cacheBody.buffer(), sink.size() - bytesRead, bytesRead);
      cacheBody.emitCompleteSegments();
      return bytesRead;
    }

    @Override public Timeout timeout() {
      return source.timeout();
    }

    @Override public void close() throws IOException {
      if (!cacheRequestClosed
          && !discard(this, HttpCodec.DISCARD_STREAM_TIMEOUT_MILLIS, MILLISECONDS)) {
        cacheRequestClosed = true;
        // A successful discard drains the body to EOF, which closes cacheBody and
        // commits the entry (merging header and body); otherwise abort the cache entry
        cacheRequest.abort();
      }
      source.close();
    }
  };

  String contentType = response.header("Content-Type");
  long contentLength = response.body().contentLength();
  return response.newBuilder()
      .body(new RealResponseBody(contentType, contentLength, Okio.buffer(cacheWritingSource)))
      .build();
}
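cacheWritingResponse() is essentially a tee: every read from the network body is copied into the cache body as a side effect, so the cache fills up as the consumer reads. The idea in Okio-free form (a sketch with java.io, not OkHttp's actual classes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class TeeSourceSketch {

    // Every chunk read from 'source' is also written to 'cacheBody',
    // mirroring cacheWritingSource.read() copying bytes into cacheBody.
    static InputStream tee(InputStream source, OutputStream cacheBody) {
        return new FilterInputStream(source) {
            @Override public int read(byte[] b, int off, int len) throws IOException {
                int n = super.read(b, off, len);
                if (n == -1) {
                    cacheBody.close();           // the cache response is complete
                } else {
                    cacheBody.write(b, off, n);  // copy into the cache while reading
                }
                return n;
            }
        };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream cache = new ByteArrayOutputStream();
        InputStream body = tee(
            new ByteArrayInputStream("response body".getBytes()), cache);
        byte[] buf = new byte[1024];
        while (body.read(buf, 0, buf.length) != -1) { /* consumer drains the body */ }
        // Once the consumer has drained the body, the cache holds a full copy.
        assert cache.toString().equals("response body");
    }
}
```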

3. Summary

  1. The cache keeps a journal file that records cache operations.
  2. Each cache Entry has cleanFiles and dirtyFiles: cleanFiles hold the persisted data (where the data really lives), while dirtyFiles hold data that is still being edited.
  3. Both arrays have size 2: index 0 stores the header metadata and index 1 stores the body.
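The clean/dirty split in point 2 can be demonstrated with plain files: edits go to a temporary "dirty" file, and commit promotes it to the "clean" file that readers see (a sketch of the DiskLruCache idea, not its real implementation):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DirtyCleanSketch {

    // Writes go to the dirty file; commit() renames dirty -> clean,
    // so readers only ever observe fully written entries.
    static void writeDirty(Path dirty, String data) throws IOException {
        Files.write(dirty, data.getBytes());
    }

    static void commit(Path dirty, Path clean) throws IOException {
        Files.move(dirty, clean, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cache-sketch");
        Path dirty = dir.resolve("entry.0.tmp"); // plays the role of dirtyFiles[0]
        Path clean = dir.resolve("entry.0");     // plays the role of cleanFiles[0]

        writeDirty(dirty, "header data");
        assert !Files.exists(clean);             // invisible until commit

        commit(dirty, clean);
        assert Files.exists(clean);
        assert !Files.exists(dirty);
        assert new String(Files.readAllBytes(clean)).equals("header data");
    }
}
```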

ConnectInterceptor

CallServerInterceptor


Reference (I typed everything out by hand; it helps me stay focused):

okhttp源码分析(一)——基本流程(超详细)