Cassandra is using up all the disk space

I have a single-node Cassandra cluster. I use the current minute as the partition key and insert rows with a TTL of 12 hours.

I am seeing a couple of problems that I cannot explain:

> /var/lib/cassandra/data/<key_space>/<table_name> contains multiple files, many of which are much older than the 12-hour TTL (more like 2 days old)
> When I try to run a query in cqlsh, I get a lot of log messages that seem to indicate my table contains a huge number of tombstones

The log:

WARN  [SharedPool-Worker-2] 2015-01-26 10:51:39,376 SliceQueryFilter.java:236 - Read 0 live and 1571042 tombstoned cells in <table_name>_name (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:40,472 SliceQueryFilter.java:236 - Read 0 live and 1557919 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:41,630 SliceQueryFilter.java:236 - Read 0 live and 1589764 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:42,877 SliceQueryFilter.java:236 - Read 0 live and 1582163 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:44,081 SliceQueryFilter.java:236 - Read 0 live and 1550989 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:44,869 SliceQueryFilter.java:236 - Read 0 live and 1566246 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:45,582 SliceQueryFilter.java:236 - Read 0 live and 1577906 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:46,443 SliceQueryFilter.java:236 - Read 0 live and 1571493 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:47,701 SliceQueryFilter.java:236 - Read 0 live and 1559448 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
WARN  [SharedPool-Worker-2] 2015-01-26 10:51:49,255 SliceQueryFilter.java:236 - Read 0 live and 1574936 tombstoned cells in <table_name> (see tombstone_warn_threshold). 100 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}

I have tried several compaction strategies and multithreaded compaction, I have tried running compaction manually with nodetool, and I have also tried forcing garbage collection via JMX.

One guess I have is that compaction is not removing the tombstones.

Any ideas on how to keep the disk usage from growing too large? My main worry is running out of space. I would rather store less data (I tried making the TTL smaller, but so far that has not helped).

Best answer
I am assuming that you are using a timestamp as the clustering column within each partition, with the minute as the partition key, and that you apply a 12-hour TTL when you do the insert. This builds up tombstones inside each partition, because you never delete an entire row (i.e. an entire partition).
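As an illustration of that assumption, a hypothetical schema and insert might look like the following (the keyspace, table, and column names here are made up for the sketch, not taken from your post):

CREATE TABLE my_ks.minute_data (
    minute text,           -- partition key: the current minute, e.g. '2015-01-26 10:51'
    event_time timestamp,  -- clustering column: the full-resolution timestamp
    payload text,
    PRIMARY KEY (minute, event_time)
);

INSERT INTO my_ks.minute_data (minute, event_time, payload)
VALUES ('2015-01-26 10:51', '2015-01-26 10:51:39', 'some data')
USING TTL 43200;   -- 12 hours; each expired cell becomes a tombstone inside the partition

Every cell expires 12 hours later and leaves a tombstone behind, but the minute partition itself is never deleted, which is why the per-partition tombstone counts in your log keep climbing.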

Assuming your keyspace is named k1 and your table is named t2, you can run:

nodetool flush k1 t2
nodetool compact k1 t2
sstable2json /var/lib/cassandra/data/k1/t2/k1-t2-jb-<last version>-Data.db

Then you will see all the tombstones, like this (marked with "d" for deleted):

{"key": "00000003","columns": [["4:","54c7b514",1422374164512000,"d"], ["5:","54c7b518",1422374168501000,"d"], ["6:","54c7b51b",1422374171987000,"d"]]}

Now if you go and delete that row (i.e. DELETE FROM k1.t2 WHERE key = 3;), and then run flush, compact, and sstable2json again, you will see it change to:

{"key": "00000003","metadata": {"deletionInfo": {"markedForDeleteAt":1422374340312000,"localDeletionTime":1422374340}},"columns": []}

So you can see that all the tombstones are gone, and Cassandra only has to remember that the whole row was deleted at a certain time, rather than remembering that little bits and pieces of the row were deleted at various times.

Another way to get rid of tombstones is to truncate the whole table. When you do that, Cassandra only needs to remember that the entire table was truncated at a certain time, so it no longer needs to keep tombstones from before that point (tombstones exist to tell other nodes that certain data was deleted, and if you can say the whole table was emptied at time x, the details of what happened before that no longer matter).

So how can you apply this to your situation? Well, you could use the hour and minute as the partition key and then run a cron job once an hour to delete all the rows from 13 hours ago. Then at the next compaction, all the tombstones for those partitions will be removed.
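A minimal sketch of that cron job, assuming a hypothetical table k1.t2 whose partition key is (hour, minute), with hour stored as a 'yyyyMMddHH' string (none of these names or formats come from your schema):

#!/bin/bash
# purge_partitions.sh - run hourly from cron, e.g.:  0 * * * * /usr/local/bin/purge_partitions.sh
# Deletes every minute partition from the hour that is now 13 hours old, so the
# next compaction can drop each whole partition's tombstones at once.
HOUR=$(date -u -d '13 hours ago' '+%Y%m%d%H')   # GNU date syntax; adjust for BSD/macOS
for MINUTE in $(seq 0 59); do
  cqlsh -e "DELETE FROM k1.t2 WHERE hour = '${HOUR}' AND minute = ${MINUTE};"
done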

Or you could keep a separate table for each hour, and use a cron job to truncate the table that is now 13 hours old.
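A sketch of that per-hour-table variant, assuming 24 hypothetical tables named k1.t2_00 through k1.t2_23 (one per hour of the day):

#!/bin/bash
# Run hourly from cron; truncates the table for the hour that is now 13 hours in the past,
# which removes its data and tombstones without leaving row-level tombstones behind.
HOUR=$(date -u -d '13 hours ago' '+%H')   # zero-padded hour of day, e.g. '09'
cqlsh -e "TRUNCATE k1.t2_${HOUR};"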

Another strategy that is sometimes useful is to "reuse" the clustering key. For example, if you insert data once per second, instead of using a high-resolution timestamp as the clustering key, you could use the time modulo 60 seconds as the clustering key and keep the more unique timestamp as a data field. Then within each minute partition you would be overwriting yesterday's tombstones (or stale data) with today's live rows, and you would not accumulate tombstones across many days.
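A hedged CQL sketch of that reuse idea (table and column names are hypothetical): the partition key is the minute of the day and the clustering key is the second within that minute, both reused every 24 hours, while the exact timestamp is demoted to an ordinary column, so today's insert simply overwrites yesterday's cell instead of leaving a tombstone.

CREATE TABLE k1.t2_reused (
    minute_of_day int,     -- partition key: 0..1439, reused every day
    second_of_minute int,  -- clustering key: 0..59, reused every occurrence of that minute
    event_time timestamp,  -- the unique, full-resolution timestamp kept as data
    payload text,
    PRIMARY KEY (minute_of_day, second_of_minute)
);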

Hopefully that gives you some ideas to try. Usually when you run into tombstone problems, it is a sign that you need to rethink your schema a bit.
