For boop.mp3, this commit adds both ID3v1 and ID3v2 tags. For boop.ogg,
we use Vorbis comments.
In the case of boop.mp3, this also embeds a cover image. Interestingly, the
image didn't seem to affect the size of boop.mp3 much, despite being ~8 KB.
boop.ogg seemed to be affected much more, so no cover image was added
to that version.
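For reference, tags like these can be written with the taglib-ruby gem, among other tools. The commit doesn't say what tool was actually used, so the snippet below is only an illustrative sketch, and the field values ('Boop', 'Example Artist', cover.jpg) are made up:

```ruby
require 'taglib'

# Write both ID3v1 and ID3v2 tags, plus an embedded cover image.
TagLib::MPEG::File.open('boop.mp3') do |file|
  v2 = file.id3v2_tag(true)       # create the ID3v2 tag if missing
  v2.title  = 'Boop'              # hypothetical values
  v2.artist = 'Example Artist'

  # Embed the cover as an ID3v2 attached-picture (APIC) frame.
  apic = TagLib::ID3v2::AttachedPictureFrame.new
  apic.mime_type = 'image/jpeg'
  apic.type = TagLib::ID3v2::AttachedPictureFrame::FrontCover
  apic.picture = File.binread('cover.jpg')
  v2.add_frame(apic)

  v1 = file.id3v1_tag(true)       # create the ID3v1 tag as well
  v1.title  = 'Boop'
  v1.artist = 'Example Artist'

  file.save                       # writes the tags present in the file
end

# Vorbis comments for the Ogg version (no picture, per the note above).
TagLib::Ogg::Vorbis::File.open('boop.ogg') do |file|
  file.tag.add_field('TITLE', 'Boop')
  file.tag.add_field('ARTIST', 'Example Artist')
  file.save
end
```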
We have changed how we store reblogs in Redis for the bigint ID migration. The process 1) scans all entries in each user's feed and 2) re-stores the reblogs with 3 write commands.
However, this operation is really slow on large instances, e.g. about 1 hour on friends.nico (w/ 50k users). So I tried the tweaks below.
* The code detected non-reblogs with `entry[0] == entry[1]`, but this condition never holds because `entry[0]` is a String while `entry[1]` is a Float. Changing it to `entry[0].to_i == entry[1]` seems to work.
-> about 4-20x faster (feeds with fewer reblogs are faster)
* Write operations can be batched with a pipeline (see the sketch after this list)
-> about 6x faster
* Wrap the whole operation in a Lua script and execute it with the EVALSHA command (also sketched below). This greatly reduces the packets exchanged between Ruby and Redis.
-> about 3x faster
I've taken the Lua script route, though the other optimizations alone might have been enough.
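For illustration, here is a minimal sketch of the first two tweaks using the redis-rb gem. The feed key and the three write commands are hypothetical stand-ins, not the migration's exact commands; the parts that matter are the `.to_i` comparison and the single pipelined round trip:

```ruby
require 'redis'

redis = Redis.new
key = 'feed:home:1' # hypothetical feed key

# WITHSCORES returns [member, score] pairs where the member is a String
# and the score a Float, hence the .to_i cast (tweak 1).
entries = redis.zrange(key, 0, -1, with_scores: true)
reblogs = entries.reject { |member, score| member.to_i == score }

# Tweak 2: issue all writes in one round trip instead of 3 per reblog.
redis.pipelined do |pipeline|
  reblogs.each do |member, score|
    # Illustrative stand-ins for the migration's 3 write commands.
    pipeline.zrem(key, member)
    pipeline.zadd(key, score, score.to_i)
    pipeline.sadd("#{key}:reblogs", score.to_i)
  end
end
```

And the Lua variant of the same loop (tweak 3), loaded once and run per feed with EVALSHA. The script body is likewise a sketch; note that inside Lua, ZRANGE WITHSCORES returns both members and scores as strings, so a plain string comparison plays the role of the `.to_i` check:

```ruby
# Sketch of the reblog re-store as a server-side Lua script.
RESTORE_REBLOGS = <<~LUA
  local key = KEYS[1]
  local entries = redis.call('zrange', key, 0, -1, 'withscores')
  for i = 1, #entries, 2 do
    local member, score = entries[i], entries[i + 1]
    if member ~= score then
      redis.call('zrem', key, member)
      redis.call('zadd', key, score, score)
      redis.call('sadd', key .. ':reblogs', score)
    end
  end
LUA

sha = redis.script(:load, RESTORE_REBLOGS)
redis.evalsha(sha, keys: [key]) # one packet per feed instead of many
```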