Thank you for sticking with this series all the way to the fourth and final installment. After the first three parts you should already know how to use Volley's built-in Requests and how to write custom ones; in this part I will walk through Volley's workflow from the source code's point of view.

Starting from newRequestQueue()

As we all know, the first thing you do when using Volley is call newRequestQueue() to obtain a RequestQueue object. Let's take a close look at this method:

  • newRequestQueue()
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue;
    if (maxDiskCacheBytes <= -1) {
        // No maximum size specified
        queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    } else {
        // Disk cache size specified
        queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
    }

    queue.start();

    return queue;
}

Inside the method we can see that on API level 9 and above an HurlStack instance is used to do the actual network work, which makes it clear that Volley's underlying transport there is HttpURLConnection. For API levels below 9 an HttpClientStack instance is created instead, i.e. HttpClient is used for network communication prior to Gingerbread. Whichever stack is chosen is then wrapped in a BasicNetwork object.
Next, the BasicNetwork object and a DiskBasedCache object (the disk cache) are used to construct a RequestQueue, and its start() method is called to get the dispatcher threads running.
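For orientation, here is roughly what the caller's side looks like. Passing null for the stack (or using the shorter overloads) triggers the automatic selection described above; this is only a minimal sketch, and the exact set of overloads depends on which Volley distribution you are using:

// Minimal sketch: the short overload ends up in the method above with stack = null,
// so Volley picks HurlStack (API >= 9) or HttpClientStack (API < 9) for us.
RequestQueue queue = Volley.newRequestQueue(context.getApplicationContext());

// Equivalent call with an explicit stack and no disk-cache size limit.
RequestQueue sameSetup = Volley.newRequestQueue(context, new HurlStack(), -1);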

Next, let's look at start():

  • start()
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

First a CacheDispatcher object is created and started. The for loop then iterates over mDispatchers, an array of NetworkDispatcher threads that effectively acts as a thread pool with a default size of 4, starting each thread in turn (by calling its start() method). Why multiple threads? Simply because an individual request may take a while to complete, so handling several requests in parallel improves throughput. In the end there are five threads running: four NetworkDispatcher network threads and one CacheDispatcher cache thread. If the default of four network threads does not suit you, the pool size can be chosen when the queue is constructed, as the sketch below shows.
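A minimal sketch, assuming you build the queue yourself instead of going through Volley.newRequestQueue():

// Minimal sketch: the RequestQueue constructor accepts the network thread pool size directly.
File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8); // 8 NetworkDispatchers
queue.start();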

Once we have a RequestQueue, all we need to do is build the appropriate Request and pass it to the queue's add() method to kick off the network operation (a minimal example follows). So let's look at add() next!
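As a reminder of what that looks like from the calling code, here is a minimal sketch; the URL is just a placeholder and queue is the RequestQueue obtained above:

// Minimal caller-side sketch; the URL is a placeholder.
StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Runs on the main thread once the result has been delivered.
                Log.d("Volley", "received " + response.length() + " characters");
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "request failed", error);
            }
        });
queue.add(request);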

The add() method

  • add()
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

As you can see, add() checks whether the current request is cacheable via shouldCache(): if it is not, the request is added straight to the network queue (mNetworkQueue) and returned; if it is, the request goes onto the cache queue (mCacheQueue) instead, after a de-duplication check on its cache key. By default every request is cacheable, but we can change that behavior by calling setShouldCache(false) on the Request, as in the short sketch below.
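A minimal sketch, reusing the request and queue from the earlier example:

// Minimal sketch: an uncacheable request skips mCacheQueue and goes straight to mNetworkQueue.
request.setShouldCache(false);
queue.add(request);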

Now let's look at NetworkDispatcher's run() method first!

  • run()
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        // release previous request object to avoid leaking request object when mQueue is drained.
        request = null;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}

At the top of run(), the thread priority is set to THREAD_PRIORITY_BACKGROUND, a fairly low priority, so that these dispatcher threads interfere as little as possible with the UI thread and keep the app smooth.

Next, mQueue.take() removes the request at the head of the queue for processing (blocking if the queue is empty); mQueue here is the network queue that our request was placed into by the add() method above.

In the normal case, i.e. when the request has not been cancelled, mNetwork.performRequest() is called to actually execute the request. Remember mNetwork? It is the BasicNetwork object we met earlier, which wraps the HttpStack and, through it, HttpURLConnection (or HttpClient on old API levels).

Once the request produces a result, mDelivery's postResponse() method is called to deliver that result back.

Let's look at performRequest() first

Network is an interface, and the concrete implementation here is BasicNetwork, so the overridden performRequest() looks like this:

  • performRequest()
@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = Collections.emptyMap();
        try {
            // Gather headers.
            Map<String, String> headers = new HashMap<String, String>();
            addCacheHeaders(headers, request.getCacheEntry());
            httpResponse = mHttpStack.performRequest(request, headers);
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();

            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // Handle cache validation.
            if (statusCode == HttpStatus.SC_NOT_MODIFIED) {

                Entry entry = request.getCacheEntry();
                if (entry == null) {
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null,
                            responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }

                // A HTTP 304 response does not have all header fields. We
                // have to use the header fields from the cache entry plus
                // the new ones from the response.
                // http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
                entry.responseHeaders.putAll(responseHeaders);
                return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data,
                        entry.responseHeaders, true,
                        SystemClock.elapsedRealtime() - requestStart);
            }

            // Handle moved resources
            if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY || statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                String newUrl = responseHeaders.get("Location");
                request.setRedirectUrl(newUrl);
            }

            // Some responses such as 204s do not have content. We must check.
            if (httpResponse.getEntity() != null) {
                responseContents = entityToBytes(httpResponse.getEntity());
            } else {
                // Add 0 byte response as a way of honestly representing a
                // no-content request.
                responseContents = new byte[0];
            }

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusLine);

            if (statusCode < 200 || statusCode > 299) {
                throw new IOException();
            }
            return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                    SystemClock.elapsedRealtime() - requestStart);
        } catch (Exception e) {
            ···
        }
    }
}

In this code, addCacheHeaders() first copies the cache entry's validators into the request headers, and then mHttpStack's performRequest() method is called with the request and those headers to carry out the request, giving back an HttpResponse object.
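For context, addCacheHeaders() essentially turns the cached entry's validators into conditional-request headers. The following is a lightly paraphrased sketch; the exact field names differ a little between Volley versions:

// Paraphrased sketch of BasicNetwork.addCacheHeaders(); field names vary by Volley version.
private void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
    if (entry == null) {
        return; // Nothing cached yet, so no conditional headers to send.
    }
    if (entry.etag != null) {
        headers.put("If-None-Match", entry.etag); // Allows the server to reply 304 if unchanged.
    }
    if (entry.serverDate > 0) {
        headers.put("If-Modified-Since", DateUtils.formatDate(new Date(entry.serverDate)));
    }
}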

The status code is then extracted from the HttpResponse. If it is HttpStatus.SC_NOT_MODIFIED (that is, 304), the response has not changed on the server, so the cached content is returned directly, combined with the fresh response headers.

For an ordinary successful request, the response body is read out with entityToBytes(), wrapped together with the status code and headers into a NetworkResponse, and returned to the NetworkDispatcher.

Back in NetworkDispatcher, once the NetworkResponse arrives, the Request's parseNetworkResponse() method is called to parse its data and to produce the entry that gets written to the cache. The implementation is left to each Request subclass, because different kinds of Request obviously need to parse their data differently; that is exactly why parseNetworkResponse() must be overridden when writing a custom Request.
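As an illustration, StringRequest's implementation decodes the body using the charset from the response headers and builds the cache entry with HttpHeaderParser (lightly paraphrased from the Volley source):

// Lightly paraphrased from StringRequest in the Volley source.
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        // Decode the raw bytes using the charset declared in the response headers.
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    // parseCacheHeaders() builds the Cache.Entry that NetworkDispatcher writes to disk.
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}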

After the data in the NetworkResponse has been parsed, ExecutorDelivery's postResponse() method is called to deliver the parsed data back.

Next up is postResponse()

  • postResponse()
@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

Here mResponsePoster's execute() method is called with a ResponseDeliveryRunnable object. Now look at how mResponsePoster is defined:

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

In other words, the ResponseDeliveryRunnable object is sent off through the Handler's post() method. Why does it have to end up on the main looper? Because the RequestQueue's dispatchers run on background threads, so any callback invoked directly from them would also run in the background, and modifying the UI there would throw an exception. The Handler in question is bound to the main looper when the RequestQueue is constructed, as shown below.
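For reference, the relevant constructor chain looks roughly like this (lightly paraphrased from the Volley source):

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    // Deliveries go through a Handler attached to the main looper, i.e. the UI thread.
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

As for why post() is used: post() hands the Handler a Runnable and executes its run() method on the Handler's thread once the message is processed, so it is inside ResponseDeliveryRunnable's run() that the result is examined and the listener callbacks are invoked: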

  • run()
@Override
public void run() {
    // If this request has canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response, add a marker, otherwise we're done
    // and the request can be finished.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it.
    if (mRunnable != null) {
        mRunnable.run();
    }
}

As you can see, a successful response is handed back to the Request via its deliverResponse() method. Let's take StringRequest as an example of how that method is implemented:

  • deliverResponse()
@Override
protected void deliverResponse(String response) {
    if (mListener != null) {
        mListener.onResponse(response);
    }
}

It simply calls onResponse() on the Listener we passed in when constructing the StringRequest, which is how the result finally reaches the Activity. deliverError() follows the same pattern for failures.
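Unlike deliverResponse(), the error path is implemented once in the Request base class rather than per subclass; a lightly paraphrased sketch:

// Lightly paraphrased from Request.deliverError() in the Volley source.
public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}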


Having covered the network thread, NetworkDispatcher, let's finish with the cache thread, CacheDispatcher, and its run() method:

  • run()
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    Request<?> request;
    while (true) {
        // release previous request object to avoid leaking request object when mQueue is drained.
        request = null;
        try {
            // Take a request from the queue.
            request = mCacheQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                final Request<?> finalRequest = request;
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(finalRequest);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
        }
    }
}

First, note the while (true) loop: the cache thread runs continuously. Inside the loop it tries to pull a cached result for the request via mCache.get(). If there is none (a cache miss), the request is put onto the network queue; if there is an entry but it has fully expired, the request likewise goes to the network queue; otherwise the cached data is considered usable and no network request needs to be re-sent.

The cached entry's data is then run through the Request's parseNetworkResponse() method, and the parsed result is delivered back exactly as in the network path described above (with an additional "intermediate" delivery plus a network refresh when the entry is only soft-expired). The expiry checks themselves are one-line comparisons on Cache.Entry, sketched below.
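Lightly paraphrased from Cache.Entry; ttl is the hard expiry time and softTtl the soft expiry time, both stored as epoch milliseconds when the entry is written:

// Hard expiry: the entry is unusable and the request must go to the network.
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

// Soft expiry: the entry may be delivered, but a refresh request is also sent.
public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}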


At this point we can step back and review the whole flow with the help of the flow chart from the official Volley presentation.

Volley flow chart

In the chart, blue represents the main thread, green the cache thread, and orange the network threads. On the main thread we call RequestQueue's add() method to submit a request. The request is first placed on the cache queue; if a usable cached result is found, it is read and parsed right there and the result is delivered back to the main thread. If nothing usable is in the cache, the request is moved to the network queue, where the HTTP request is sent, the response is parsed and written to the cache, and the result is delivered back to the main thread.

I hope this series has helped you build a clear understanding of Volley. Even though it is no longer the trendy choice, I will keep introducing other solid open-source frameworks in future posts. Thanks for reading ☺