python – Performance of Redis vs. disk in a caching application

I want to build a Redis cache in Python, and like any self-respecting scientist I ran a benchmark to check its performance.

Interestingly, Redis did not fare so well. Either Python is doing something magical (caching the file), or my version of Redis is stupendously slow.

I don't know if it's because of the way my code is structured or what, but I expected Redis to do better than it did.

To build the Redis cache, I SET the binary data (in this case, an HTML page) under a key derived from the filename, with a 5-minute expiry.

In all cases, the file is read with f.read() (which is about three times faster than f.readlines(), and I need the binary blob anyway).
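The difference between the two read modes can be checked with a quick sketch (the file contents here are hypothetical stand-ins for the HTML template; this is not part of the original benchmark):

```python
import os
import tempfile

# Write a small multi-line file to compare the two read modes.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"<html>\n<body>\nhello\n</body>\n</html>\n")

with open(path, "rb") as f:
    blob = f.read()        # one contiguous bytes object
with open(path, "rb") as f:
    lines = f.readlines()  # a list of bytes objects, one per line

# Same content, different shape: joining the lines recovers the blob.
assert isinstance(blob, bytes)
assert isinstance(lines, list)
assert b"".join(lines) == blob

os.remove(path)
```

f.readlines() pays for scanning and splitting on newlines and for allocating a list object per line, which is pure overhead when all you need is the whole blob.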

Is there something I'm missing in my comparison, or is Redis really no match for a disk? Is Python caching the file somewhere and re-reading it each time? Why is that so much faster than hitting Redis?

I'm using redis 2.8, python 2.7, and redis-py, all on a 64-bit Ubuntu system.

I don't think Python is doing anything particularly magical, as I made a function that stores the file data in a Python object and yields it forever.

I have four function calls that I grouped:

Reading the file X times

A function that is called to see whether the Redis object is still in memory, and either loads it or caches a new file (single and multiple Redis instances).

A function that creates a generator yielding the result from the Redis database (single and multiple Redis instances).

And finally, storing the file in memory and yielding it forever.

import redis
import time

def load_file(fp, fpKey, r, expiry):
    with open(fp, "rb") as f:
        data = f.read()
    p = r.pipeline()
    p.set(fpKey, data)
    p.expire(fpKey, expiry)
    p.execute()
    return data

def cache_or_get_gen(fp, expiry=300, r=redis.Redis(db=5)):
    fpKey = "cached:"+fp

    while True:
        yield load_file(fp, fpKey, r, expiry)
        t = time.time()
        while time.time() - t - expiry < 0:
            yield r.get(fpKey)


def cache_or_get(fp, expiry=300, r=redis.Redis(db=5)):

    fpKey = "cached:"+fp

    if r.exists(fpKey):
        return r.get(fpKey)

    else:
        with open(fp, "rb") as f:
            data = f.read()
        p = r.pipeline()
        p.set(fpKey, data)
        p.expire(fpKey, expiry)
        p.execute()
        return data

def mem_cache(fp):
    with open(fp, "rb") as f:
        data = f.readlines()
    while True:
        yield data

def stressTest(fp, trials = 10000):

    # Read the file x number of times
    a = time.time()
    for x in range(trials):
        with open(fp, "rb") as f:
            data = f.read()
    b = time.time()
    readAvg = trials/(b-a)


    # Generator version

    # Read the file, cache it, read it with a new instance each time
    a = time.time()
    gen = cache_or_get_gen(fp)
    for x in range(trials):
        data = next(gen)
    b = time.time()
    cachedAvgGen = trials/(b-a)

    # Read file, cache it, pass in redis instance each time
    a = time.time()
    r = redis.Redis(db=6)
    gen = cache_or_get_gen(fp, r=r)
    for x in range(trials):
        data = next(gen)
    b = time.time()
    inCachedAvgGen = trials/(b-a)


    # Non generator version    

    # Read the file, cache it, read it with a new instance each time
    a = time.time()
    for x in range(trials):
        data = cache_or_get(fp)
    b = time.time()
    cachedAvg = trials/(b-a)

    # Read file, cache it, pass in redis instance each time
    a = time.time()
    r = redis.Redis(db=6)
    for x in range(trials):
        data = cache_or_get(fp, r=r)
    b = time.time()
    inCachedAvg = trials/(b-a)

    # Read file, cache it in python object
    a = time.time()
    for x in range(trials):
        data = mem_cache(fp)
    b = time.time()
    memCachedAvg = trials/(b-a)


    print "\n%s file reads: %.2f reads/second\n" %(trials, readAvg)
    print "Yielding from generators for data:"
    print "multi redis instance: %.2f reads/second (%.2f percent)" %(cachedAvgGen, (100*(cachedAvgGen-readAvg)/(readAvg)))
    print "single redis instance: %.2f reads/second (%.2f percent)" %(inCachedAvgGen, (100*(inCachedAvgGen-readAvg)/(readAvg)))
    print "Function calls to get data:"
    print "multi redis instance: %.2f reads/second (%.2f percent)" %(cachedAvg, (100*(cachedAvg-readAvg)/(readAvg)))
    print "single redis instance: %.2f reads/second (%.2f percent)" %(inCachedAvg, (100*(inCachedAvg-readAvg)/(readAvg)))
    print "python cached object: %.2f reads/second (%.2f percent)" %(memCachedAvg, (100*(memCachedAvg-readAvg)/(readAvg)))

if __name__ == "__main__":
    fileToRead = "templates/index.html"

    stressTest(fileToRead)

And now the results:

10000 file reads: 30971.94 reads/second

Yielding from generators for data:
multi redis instance: 8489.28 reads/second (-72.59 percent)
single redis instance: 8801.73 reads/second (-71.58 percent)
Function calls to get data:
multi redis instance: 5396.81 reads/second (-82.58 percent)
single redis instance: 5419.19 reads/second (-82.50 percent)
python cached object: 1522765.03 reads/second (4816.60 percent)

The results are interesting in that a) generators are faster than calling functions each time, b) Redis is slower than reading from disk, and c) reading from a Python object is ridiculously fast.

Why would reading from disk be so much faster than reading an in-memory file from Redis?

Edit:
Some more information and tests.

I replaced the lookup

data = r.get(fpKey)
if data:
    return r.get(fpKey)

and the results differ from those of

if r.exists(fpKey):
    data = r.get(fpKey)


Function calls to get data using r.exists as test
multi redis instance: 5320.51 reads/second (-82.34 percent)
single redis instance: 5308.33 reads/second (-82.38 percent)
python cached object: 1494123.68 reads/second (5348.17 percent)


Function calls to get data using if data as test
multi redis instance: 8540.91 reads/second (-71.25 percent)
single redis instance: 7888.24 reads/second (-73.45 percent)
python cached object: 1520226.17 reads/second (5132.01 percent)

Creating a new Redis instance on each function call actually has no noticeable effect on read speed; the variability from test to test is larger than the gain.

Sripathi Krishnan suggested implementing random file reads. This is where caching starts to really help, as we can see from these results.

Total number of files: 700

10000 file reads: 274.28 reads/second

Yielding from generators for data:
multi redis instance: 15393.30 reads/second (5512.32 percent)
single redis instance: 13228.62 reads/second (4723.09 percent)
Function calls to get data:
multi redis instance: 11213.54 reads/second (3988.40 percent)
single redis instance: 14420.15 reads/second (5157.52 percent)
python cached object: 607649.98 reads/second (221446.26 percent)

There is a lot of variability in the file reads, so the percentage difference is not a good indicator of speedup.

Total number of files: 700

40000 file reads: 1168.23 reads/second

Yielding from generators for data:
multi redis instance: 14900.80 reads/second (1175.50 percent)
single redis instance: 14318.28 reads/second (1125.64 percent)
Function calls to get data:
multi redis instance: 13563.36 reads/second (1061.02 percent)
single redis instance: 13486.05 reads/second (1054.40 percent)
python cached object: 587785.35 reads/second (50214.25 percent)

I used random.choice(fileList) to randomly select a new file on each pass through the function.
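A minimal sketch of that random-selection change (the fileList contents and helper name are assumptions for illustration; the actual code is in the gist linked below):

```python
import random

# Hypothetical list of cached template paths; the real test used 700 files.
fileList = ["templates/page%03d.html" % i for i in range(700)]

def pick_random_files(trials, rng=random):
    # One random file per pass, mirroring random.choice(fileList)
    # inside the benchmark loop.
    return [rng.choice(fileList) for _ in range(trials)]

picks = pick_random_files(100)
```

Spreading the reads over hundreds of files is what lets the Redis cache pull ahead, since successive reads no longer hit the same hot entry in the OS page cache.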

The full gist is here if anyone would like to try it out – https://gist.github.com/3885957

Edit edit:
I hadn't realized that I was calling one single file for the generators (although the performance of the function call and the generator was very similar). Here are the results with different files from the generator as well.

Total number of files: 700
10000 file reads: 284.48 reads/second

Yielding from generators for data:
single redis instance: 11627.56 reads/second (3987.36 percent)

Function calls to get data:
single redis instance: 14615.83 reads/second (5037.81 percent)

python cached object: 580285.56 reads/second (203884.21 percent)
This is an apples-to-oranges comparison.
See http://redis.io/topics/benchmarks

Redis is an efficient remote data store. Each time a command is executed on Redis, a message is sent to the Redis server, and if the client is synchronous, it blocks waiting for the reply. So beyond the cost of the command itself, you will pay for a network roundtrip or an IPC.

On modern hardware, network roundtrips or IPCs are surprisingly expensive compared to other operations. This is due to several factors:

> the raw latency of the medium (mainly for the network)
> the latency of the operating system scheduler (not guaranteed on Linux/Unix)
> memory cache misses are expensive, and the probability of cache misses increases while the client and server processes are scheduled in and out
> on high-end boxes, NUMA side effects

Now, let's review the results.

Comparing the implementation using generators with the one using function calls: they do not generate the same number of roundtrips to Redis. With the generator you simply have:

    while time.time() - t - expiry < 0:
        yield r.get(fpKey)

So one roundtrip per iteration. With the function call, you have:

if r.exists(fpKey):
    return r.get(fpKey)

So two roundtrips per iteration. No wonder the generator is faster.
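The roundtrip counts can be made concrete with a small stub standing in for the redis-py client (each command counts as one roundtrip; this is not real Redis, just an accounting sketch):

```python
class StubRedis(object):
    """Stand-in for a Redis client that counts one roundtrip per command."""

    def __init__(self):
        self.store = {}
        self.roundtrips = 0

    def exists(self, key):
        self.roundtrips += 1
        return key in self.store

    def get(self, key):
        self.roundtrips += 1
        return self.store.get(key)

r = StubRedis()
r.store["cached:index.html"] = b"<html>...</html>"

# Generator-style read: a single GET per iteration.
r.roundtrips = 0
data = r.get("cached:index.html")
gen_trips = r.roundtrips    # 1 roundtrip

# Function-style read: EXISTS followed by GET.
r.roundtrips = 0
if r.exists("cached:index.html"):
    data = r.get("cached:index.html")
func_trips = r.roundtrips   # 2 roundtrips
```

A single GET whose None result signals a cache miss halves the roundtrips of the EXISTS/GET pair, which is consistent with the faster "if data" numbers in the question's second edit.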

Now, of course you should reuse the same Redis connection for optimal performance; there is no point in running a benchmark which systematically connects and disconnects.

Finally, regarding the performance difference between Redis accesses and the file reads: you are simply comparing a local call to a remote one. File reads are cached by the OS filesystem, so they are fast memory-transfer operations between the kernel and Python; no disk I/O is involved here. With Redis, you have to pay for the roundtrips, so it is much slower.

http://stackoverflow.com/questions/12868222/performance-of-redis-vs-disk-in-caching-application
