Old statuses, as well as statuses from Pawoo (which runs a modified version of
Mastodon), may not have been marked sensitive even when spoiler text is
present.
Such statuses still remain unmarked as sensitive if they are local or
arrived before the version upgrade, so marking only recently fetched remote
statuses as sensitive would contradict that behavior.
Considering what people expected when they authored such statuses, this
change removes the sensitivity enforcement.
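A minimal sketch of the kind of rule being dropped (attribute handling simplified, not the actual model code):

```ruby
# Sketch only: how the sensitive flag is derived for an incoming status.
def sensitive_flag(author_marked_sensitive, spoiler_text)
  # Previously enforced: author_marked_sensitive || !spoiler_text.empty?
  # Now: trust only what the author set.
  author_marked_sensitive || false
end
```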
Do not touch statuses_count on the accounts table when mass-destroying
statuses, to reduce load when removing accounts; the same applies to
reblogs_count and favourites_count.
Do not count statuses with direct visibility in statuses_count.
Fix #828
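In plain Rails terms the difference is roughly the following (given an `account` record; a sketch, not the actual removal worker):

```ruby
# Per-status destroy: N DELETE queries plus N decrements of
# accounts.statuses_count (and the other counter caches).
account.statuses.find_each(&:destroy)

# Mass removal: a single DELETE, leaving the counters on the
# soon-to-be-removed account untouched.
account.statuses.delete_all
```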
* Track trending tags
- Half-life of 1 day (scoring sketched after this list)
- Historical usage in daily buckets (last 7 days stored)
- GET /api/v1/trends
Fix #271
* Add trends to web UI
* Don't render compose form on search route, adjust search results header
* Disqualify tag from trends if it's in disallowed hashtags setting
* Count distinct accounts using tag, ignore silenced accounts
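A rough sketch of one way to score tags with a one-day half-life on a Redis sorted set; the key name, `EPOCH`, and method names are illustrative rather than the shipped implementation:

```ruby
HALF_LIFE = 86_400                    # one day, in seconds
EPOCH     = Time.utc(2018, 1, 1).to_i # arbitrary fixed reference point

# Each use adds a weight that doubles every HALF_LIFE seconds relative to
# EPOCH, so when scores are compared, a day-old contribution is worth half
# as much as a fresh one.
def record_tag_use(redis, tag_id, at_time = Time.now.utc)
  weight = 2.0**((at_time.to_i - EPOCH) / HALF_LIFE.to_f)
  redis.zincrby('trending_tags', weight, tag_id)
end

def trending_tag_ids(redis, limit = 10)
  redis.zrevrange('trending_tags', 0, limit - 1)
end
```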
Updates the account `uri` field on each call to `update_account`, instead of
only once during `create_account`, to mirror the behavior of the OStatus
`ResolveAccountService` class [0].
ActivityPub accounts have been identified by their `@username` and `@domain`
pair instead of by URI since #6842.
This fixes #7479: a bug that occurs when the account identified by `@username`
and `@domain` changes its URI.
[0]:
060fa11ee2/app/services/resolve_account_service.rb (L121)
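A simplified sketch of the behaviour (helper name is hypothetical; the real change lives in the ActivityPub account-processing service):

```ruby
# Refresh the stored ActivityPub id on every update, not only on creation.
def apply_account_json(account, json)
  account.uri      = json['id']
  account.username = json['preferredUsername'] if account.new_record?
  account.save!
end
```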
When an ActivityPub Announce is processed and the boosted toot is not known,
fetch it on behalf of one of the booster's followers. This is to allow
fetching self-boosts of previously-unknown private toots.
If fetching on behalf of a user fails, try fetching it anonymously: the
selected follower of the boosting user may be blocked by the boosted toot's
author.
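A sketch of the fallback order (simplified; the follower selection and keyword argument are illustrative):

```ruby
# Try as a local follower of the booster first (so private self-boosts can
# be fetched), then retry anonymously in case that follower is blocked by
# the status author.
def fetch_boosted_status(uri, booster)
  on_behalf_of = booster.followers.local.first
  ActivityPub::FetchRemoteStatusService.new.call(uri, on_behalf_of: on_behalf_of) ||
    ActivityPub::FetchRemoteStatusService.new.call(uri)
end
```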
- POST /api/v1/push/subscription
- PUT /api/v1/push/subscription
- DELETE /api/v1/push/subscription
- New OAuth scope: "push" (required for the above methods)
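A hedged usage sketch (the parameter names follow the standard Web Push subscription shape and may not match the final API exactly); the access token must carry the new "push" scope:

```ruby
require 'http'

access_token = 'TOKEN_WITH_PUSH_SCOPE'

HTTP.auth("Bearer #{access_token}").post(
  'https://mastodon.example/api/v1/push/subscription',
  form: {
    'subscription[endpoint]'     => 'https://push.example/endpoint',
    'subscription[keys][p256dh]' => 'BASE64_PUBLIC_KEY',
    'subscription[keys][auth]'   => 'BASE64_AUTH_SECRET'
  }
)
```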
Offload creation of local notifications to a worker. Remove two
redundant SQL queries from ProcessMentionsService, and avoid N+1
XML/JSON serialization via memoization.
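A sketch of the memoization part (simplified): the service deals with a single status, so the serialized payload can be built once and reused for every mentioned account instead of being re-rendered per mention.

```ruby
# Memoized on the service instance; serializer classes follow the existing
# ActivityPub serializers.
def activitypub_json
  @activitypub_json ||= ActiveModelSerializers::SerializableResource.new(
    @status,
    serializer: ActivityPub::ActivitySerializer,
    adapter: ActivityPub::Adapter
  ).to_json
end
```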
* No need to re-require sidekiq plugins, they are required via Gemfile
* Add derailed_benchmarks tool, no need to require TTY gems in Gemfile
* Replace ruby-oembed with FetchOEmbedService
Reduces startup allocations by 45,382 objects
* Remove preloaded JSON-LD in favour of caching HTTP responses
Reduce boot RAM by about 6 MiB
* Fix tests
* Fix test suite by stubbing out JSON-LD contexts
* Remove most behaviour disparities between blocks and mutes
The only differences between block and mute should be:
- Mutes can optionally NOT affect notifications
- Mutes should not be visible to the muted
Fix #7230, fix #5713
* Do not allow boosting someone you blocked (guard sketched after this list)
Fix #7248
* Do not allow favouriting someone you blocked
* Fix nil error in StatusPolicy
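A sketch of the added guard (simplified; the real checks live in the services and StatusPolicy):

```ruby
# Refuse to boost or favourite a status whose author the acting account blocks.
def authorize_interaction!(account, status)
  raise Mastodon::NotPermittedError if account.blocking?(status.account)
end
```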
* Add equals_or_includes_any? helper in JsonLdHelper (sketched below)
* Support arrays in JSON-LD type fields for actors/tags/objects.
* Spec for resolving accounts with extension types
* Style tweaks for codeclimate
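A sketch of the helper (close in spirit to the JsonLdHelper addition, simplified here): JSON-LD allows `type` to be either a single string or an array, so comparisons have to handle both shapes.

```ruby
def equals_or_includes?(haystack, needle)
  haystack.is_a?(Array) ? haystack.include?(needle) : haystack == needle
end

def equals_or_includes_any?(haystack, needles)
  needles.any? { |needle| equals_or_includes?(haystack, needle) }
end

equals_or_includes_any?('Person', %w(Person Service))        # => true
equals_or_includes_any?(%w(Person Service), %w(Application)) # => false
```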
* Added a timeline for Direct statuses
* Lists all Direct statuses you've sent and received
* Displayed in Getting Started
* Streaming server support for direct TL
* Changes to match other timelines in 2.0
* Add bio fields
- Fix #3211
- Fix #232
- Fix #121
* Display bio fields in web UI
* Fix output of links and missing fields
* Federate bio fields over ActivityPub as PropertyValue (payload sketched below)
* Improve how the fields are stored, add to Edit profile form
* Add rel=me to links in fields
Fix #121
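A sketch of how one profile field might appear in the federated actor document (the shape follows the PropertyValue convention; exact serializer output may differ):

```ruby
profile_field = {
  'type'  => 'PropertyValue',
  'name'  => 'Website',
  'value' => '<a href="https://example.com" rel="me">example.com</a>'
}
```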
The to_s method of HTTP::Response blocks until it has received the entire
content, no matter how big it is. This means it may waste time receiving
unacceptably large files, and may also consume memory and disk in the
process. This change solves the inefficiency by checking the response length
while receiving.
HTTP connections must be explicitly closed in many cases; letting the perform
method close connections makes its callers less redundant and prevents them
from forgetting to close connections.
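A sketch of both points together (the limit, helper name, and error message are illustrative): read the body in chunks, abort as soon as the running total exceeds the limit, and always close the connection.

```ruby
def body_with_limit(response, limit = 1_048_576)
  body = +''
  # HTTP::Response::Body#readpartial returns nil once the body is exhausted.
  while (chunk = response.body.readpartial)
    body << chunk
    raise 'Response body exceeds size limit' if body.bytesize > limit
  end
  body
ensure
  response.connection.close
end
```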
Media attachments are part of the association cache of statuses,
since they are presumed to be immutable. Unless this cache is
cleared manually, the statuses will continue to look like they
have media embedded.
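A sketch of the manual invalidation this implies (cache keying simplified):

```ruby
# After destroying a status's media attachments, drop its cached
# representation so the association cache is rebuilt on the next read.
Rails.cache.delete(status)
```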
* Fix #201: Account archive download
* Export actor and private key in the archive
* Optimize BackupService
- Add conversation to cached associations of status, because
somehow it was forgotten and was a source of N+1 queries
- Explicitly call GC between batches of records being fetched
(Model class allocations are the worst offender)
- Stream media files into the tar in 1 MB chunks
(Do not allocate the media file (up to 8 MB) as a string in memory; see the sketch after this list)
- Use #bytesize instead of #size to calculate file size for JSON
(Fix FileOverflow error)
- Segment media into subfolders by status ID because apparently
GIF-to-MP4 media are all named "media.mp4" for some reason
* Keep uniquely generated filename in Paperclip::GifTranscoder
* Ensure dumped files do not overwrite each other by maintaining directory partitions
* Give tar archives a good name
* Add scheduler to remove week-old backups
* Fix code style issue
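A sketch of the 1 MB streaming mentioned above (helper name and wiring are illustrative):

```ruby
require 'rubygems/package'

CHUNK_SIZE = 1_048_576 # 1 MB

# Copy one media file into the tar without loading it fully into memory.
def dump_media_file!(tar, path, destination)
  tar.add_file_simple(destination, 0o444, File.size(path)) do |io|
    File.open(path, 'rb') do |file|
      io.write(file.read(CHUNK_SIZE)) until file.eof?
    end
  end
end
```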