Word-boundary matching only works as intended in English and languages
that use similar word-breaking characters; it doesn't work so well in
(say) Japanese, Chinese, or Thai.  It's unacceptable to ship a feature
that doesn't work as intended for some languages.  (Especially
considering that the largest contingent on the Mastodon part of the
fediverse likely speaks Japanese.)
There are rules specified in Unicode TR29[1] for word-breaking across
all languages supported by Unicode, but the rules deliberately do not
cover all cases.  In fact, TR29 states:
    For example, reliable detection of word boundaries in languages such
    as Thai, Lao, Chinese, or Japanese requires the use of dictionary
    lookup, analogous to English hyphenation.
So we aren't going to be able to make word detection work across all
languages with regexes alone within Mastodon (or glitchsoc).  However,
for a first pass (even if it's kind of punting) we can let the user
choose between whole-word and substring detection and warn about the
limitations of this implementation in, say, the docs.
[1]: https://unicode.org/reports/tr29/
     https://web.archive.org/web/20171001005125/https://unicode.org/reports/tr29/
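As a rough sketch of the word-vs-substring toggle described above (illustrative only; `KeywordMatcher` and its interface are made up for this sketch, not the actual implementation):

```ruby
# Illustrative only: a per-keyword matcher with a whole_word toggle.
# Note that Ruby's \b keys off ASCII word characters, so the
# whole-word mode inherits the English-centric limits discussed above.
class KeywordMatcher
  def initialize(keyword, whole_word: true)
    escaped = Regexp.escape(keyword)
    @regex = if whole_word
               /\b#{escaped}\b/i
             else
               /#{escaped}/i
             end
  end

  def matches?(text)
    @regex.match?(text)
  end
end
```

With whole-word matching, a mute on "cat" catches "the cat sat" but not "concatenate"; with substring matching it catches both.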
class CreateKeywordMutes < ActiveRecord::Migration[5.1]
  def change
    create_table :keyword_mutes do |t|
      t.references :account, null: false
      t.string :keyword, null: false
      t.boolean :whole_word, null: false, default: true
      t.timestamps
    end

    add_foreign_key :keyword_mutes, :accounts, on_delete: :cascade
  end
end
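For illustration, here's a hedged sketch of how rows carrying that whole_word column might be folded into a single mute pattern (the Struct stands in for the real ActiveRecord model; all names here are assumptions, not the actual code):

```ruby
# Sketch only: stand-in for an ActiveRecord model backed by the
# keyword_mutes table above.
KeywordMute = Struct.new(:keyword, :whole_word)

# Combine a user's mutes into one case-insensitive pattern, honoring
# each row's whole_word flag.  Whole-word branches use \b, with the
# same caveats for CJK text noted earlier.
def mute_pattern(mutes)
  branches = mutes.map do |m|
    escaped = Regexp.escape(m.keyword)
    m.whole_word ? "\\b#{escaped}\\b" : escaped
  end
  Regexp.new(branches.join("|"), Regexp::IGNORECASE)
end
```

A single combined regexp keeps matching to one pass over the status text instead of one pass per muted keyword.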