<p><em>Personal blog of Gokberk Yaltirakli (https://www.gkbrk.com/feed.xml). Copyright (c) 2024, Gokberk Yaltirakli.</em></p>
<h1><a href="https://www.gkbrk.com/2023/11/clickhouse-s3-backups/">Easy ClickHouse S3 Backups</a></h1>
<p><em>2023-11-29, Gokberk Yaltirakli</em></p>
<p>I’d been moving more and more personal projects to my ClickHouse database, but
I didn’t have a good backup solution. In fact, I didn’t have any backup
solution whatsoever.</p>
<p>When we were working on some incremental ClickHouse backup scripts at work with
my colleague, he recommended that I should look into getting backups going for
my personal database as well. I wasn’t eager to deal with the complexity of
incremental backups, and I didn’t want fancy <a href="https://github.com/Altinity/clickhouse-backup">clickhouse-backup</a> scripts
either.</p>
<p>While checking out these options, we found that ClickHouse has a built-in
<a href="https://clickhouse.com/docs/en/operations/backup">backup/restore functionality</a> that can use local storage or S3. After running
a few tests locally, I got the queries working the way I wanted, and I tested
the S3 functionality.</p>
<h1 id="setting-up-s3">Setting up S3</h1>
<p>While you can set up S3 backups using the ClickHouse configuration XML files,
you can also provide all the parameters in the <em>BACKUP</em> query itself. This is
the approach I took, as I didn’t want to mess with the configuration files or
have my S3 credentials in them.</p>
<p>I created a new bucket for my backups using the AWS console <em>(but you can use
the CLI or the API as well)</em>. Afterwards, I grabbed the bucket URL and
generated AWS credentials with the permission to write to the bucket.</p>
<p>With the bucket URL and the credentials in hand, I was ready to run a manual
backup.</p>
<h1 id="backup-queries-with-clickhouse">Backup queries with ClickHouse</h1>
<p>Here is the query I ended up using. It excludes the system database, which
caused some problems while I was setting up the backup. You should try
including it if it works for you; getting it working again is on my personal
TODO list.</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">backup</span> <span class="k">all</span>
<span class="k">except</span> <span class="k">database</span> <span class="k">system</span>
<span class="k">to</span> <span class="n">S3</span><span class="p">(</span>
<span class="s1">'https://my-ch-backups.s3.eu-west-1.amazonaws.com/ch01/${DT}.zip'</span><span class="p">,</span>
<span class="s1">'${AWS_KEY}'</span><span class="p">,</span>
<span class="s1">'${AWS_SECRET}'</span>
<span class="p">);</span>
</code></pre></div></div>
<p><code class="language-plaintext highlighter-rouge">AWS_KEY</code> and <code class="language-plaintext highlighter-rouge">AWS_SECRET</code> should be replaced with the credentials you generated.
<code class="language-plaintext highlighter-rouge">DT</code> is the current date and time, but you can use anything you want. I generated
it with the <code class="language-plaintext highlighter-rouge">date -u '+%Y-%m-%dT%H:%M:%SZ'</code> command.</p>
<p>This query returns instantly, and runs the backup in the background. You can
check the status of the backup using the <code class="language-plaintext highlighter-rouge">system.backups</code> and the
<code class="language-plaintext highlighter-rouge">system.backup_log</code> tables.</p>
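<p>While a backup runs, its progress can be checked with a query along these
lines <em>(the column names below may vary between ClickHouse versions, so check
yours first)</em>:</p>

```sql
select id, name, status, error, start_time, end_time
from system.backups
order by start_time desc
limit 5;
```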
<p>After the backup was complete, I was able to see a file called
<code class="language-plaintext highlighter-rouge">2023-11-29T00:00:01Z.zip</code> in my S3 bucket.</p>
<h1 id="automating-the-backup">Automating the backup</h1>
<p>Being able to make backups with a single query is great, but it’s useless if
I need to remember to run it regularly. Fortunately, it’s easy to automate
running ClickHouse queries with cron.</p>
<p>I used <code class="language-plaintext highlighter-rouge">curl</code> to run the query, but you can also use <code class="language-plaintext highlighter-rouge">clickhouse client</code> if you
want. The query is the same; I send a <em>POST</em> request to the ClickHouse server
with the query as the body, and the credentials in the URL. Here’s what my
shell script looks like:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
<span class="nv">dt</span><span class="o">=</span><span class="si">$(</span><span class="nb">date</span> <span class="nt">-u</span> <span class="s1">'+%Y-%m-%dT%H:%M:%SZ'</span><span class="si">)</span>
<span class="nv">qry</span><span class="o">=</span><span class="s2">"backup all except database system to S3('https://my-ch-backups.s3.eu-west-1.amazonaws.com/ch01/</span><span class="k">${</span><span class="nv">dt</span><span class="k">}</span><span class="s2">.zip', 'AWSKEY', 'AWSSECRET');"</span>
curl <span class="s1">'http://127.0.0.1:8123?user=backupUser&password=backupPassword'</span> <span class="nt">--data</span> <span class="s2">"</span><span class="k">${</span><span class="nv">qry</span><span class="k">}</span><span class="s2">"</span>
</code></pre></div></div>
<p>And here’s the crontab entry.</p>
<blockquote>
<p>0 0 * * * /home/clickhouse/full-backup-s3.sh</p>
</blockquote>
<p>You can use other scheduling tools as well, like Apache Airflow or systemd
timers.</p>
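<p>As a sketch of the systemd approach, a service/timer pair like this should
work <em>(the unit names are made up for illustration; the script path matches the
crontab entry above)</em>:</p>

```ini
# /etc/systemd/system/ch-backup.service
[Unit]
Description=ClickHouse full backup to S3

[Service]
Type=oneshot
ExecStart=/home/clickhouse/full-backup-s3.sh

# /etc/systemd/system/ch-backup.timer
[Unit]
Description=Run the ClickHouse backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

<p>Enable it with <code class="language-plaintext highlighter-rouge">systemctl enable --now ch-backup.timer</code>.</p>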
<h1 id="s3-lifecycle-rules">S3 lifecycle rules</h1>
<p>Now we have daily backups of our ClickHouse database, and this is very useful
for restoring data from past days. But we don’t want to accumulate these
backups forever and end up paying more for the backups than for the database
itself.</p>
<p>Fortunately, S3 has a feature called lifecycle rules that can automatically
take actions on objects as they age. I set up the following rule for my
backups:</p>
<ul>
<li>Day 0: Objects uploaded.</li>
<li>Day 1: Objects move to Glacier Instant Retrieval.</li>
<li>Day 45: Objects expire.</li>
</ul>
<p>This means after backups are taken, they are stored with the default settings
for the first day. After that, they are moved to a cheaper storage tier, and
after 45 days, they are deleted.</p>
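<p>The same rule can be expressed as an S3 lifecycle configuration, roughly like
this <em>(the prefix matches the backup path from earlier; adjust it to your
bucket layout)</em>:</p>

```json
{
  "Rules": [
    {
      "ID": "expire-clickhouse-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "ch01/" },
      "Transitions": [
        { "Days": 1, "StorageClass": "GLACIER_IR" }
      ],
      "Expiration": { "Days": 45 }
    }
  ]
}
```

<p><code class="language-plaintext highlighter-rouge">GLACIER_IR</code> is the storage class for Glacier Instant Retrieval.</p>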
<hr />
<p>There you have it, a simple and cheap backup solution for your ClickHouse
database. I hope you found this useful. Feel free to comment under this post if
you have any questions, or send me a message to discuss your ClickHouse needs.</p>
<h1><a href="https://www.gkbrk.com/2023/04/poor-mans-appimage/">Shell scripts as a poor man's AppImage</a></h1>
<p><em>2023-04-28, Gokberk Yaltirakli</em></p>
<p>I love applications that can be packaged as a single executable file. A lot of
languages can create files like that now, including <a href="https://musl.libc.org/">C</a>,
C++, Rust, Go, and even <a href="https://learn.microsoft.com/en-us/dotnet/core/deploying/single-file/overview">C#</a>. But not every language can compile to a
single executable like that, and even the ones that do might need some extra
files bundled with the executable.</p>
<blockquote>
<p>There is a lot of overlap between this topic and <a href="https://en.wikipedia.org/wiki/Self-extracting_archive">self-extracting
archives</a>. Check those
out for ideas if you are going to roll your own solution.</p>
</blockquote>
<h2 id="appimages">AppImages</h2>
<p>One popular method I see these days is AppImages. <a href="https://docs.appimage.org/introduction/index.html">AppImage</a> is a format for
bundling Linux applications, along with their dependencies, in order to create
applications that work without installation or <em>root</em>, and the whole thing gets
packaged in a single file.</p>
<p>They are extremely convenient, and they work pretty well. We’ve deployed
AppImages to production and used them to bundle tools and dependencies without
messing with developer or server systems.</p>
<p>An AppImage is basically a directory that contains all the files for an
application. This directory is turned into a compressed disk image with
squashfs. A small runtime that is prepended to the squashfs image handles
executing the application. This is done by mounting the image as a temporary
mount point, and then executing the application inside it. This runtime also
handles things like extracting the disk image to execute it without mounting.</p>
<p>In the time we used AppImages, we ran into two small problems.</p>
<p>The first one is the dependence on <a href="https://github.com/libfuse/libfuse">libfuse</a>. Running an AppImage normally,
without extracting it first with <code class="language-plaintext highlighter-rouge">--appimage-extract</code>, requires <em>FUSE</em> and
<em>libfuse</em>. A huge appeal of AppImages for us was being able to deploy
applications without any superuser privileges and without modifying the system
in any way. This dependency on <em>libfuse</em> means either we need to modify the
system to install stuff, or we need to extract the image and get rid of the
“single-file executable” benefits.</p>
<p>The other downside is also related to mounts. In order to make startups faster,
and not require extra disk space or RAM when executing the application,
AppImages create a temporary mount instead of extracting themselves every time
you execute them. This is a good way to get free performance.</p>
<p>But we ran into some commercial system monitoring tools that alert sysadmins
whenever a new mount is created. This is meant for keeping an eye on the disks:
making sure everything is in /etc/fstab and will be mounted again after a
reboot, and so on. But this tooling doesn’t understand temporary mounts and
treats every mount as a physical drive, so regularly executing AppImages
produces a stream of useless alerts. It’s not something we can disable, and
it’s not a bug we can fix ourselves. And frankly, mounting things feels too much
like modifying the system, even if FUSE lets us do it without extra permissions.</p>
<p>Those two minor issues inspired me to come up with an alternative solution.</p>
<h2 id="an-ugly-andor-great-solution">An ugly and/or great solution</h2>
<p>I wrote a Shell script generator in Python. It takes the application to bundle
as a directory, just like AppImages. The output is a single executable that runs
the application, just like AppImages. But the internals? That’s very much unlike
AppImages.</p>
<p>We first take a statically-compiled binary of the <a href="https://github.com/facebook/zstd">zstd</a> compressor. We encode
this binary as base64, and embed this into our Shell script as a heredoc. The
script un-base64’s that into a temporary file, and makes it executable.</p>
<p>We then <em>tar</em> our application folder, pipe that through <em>zstd</em> to compress it,
encode it as base64, and embed it into our Shell script. The script pipes that
through a base64 decoder, decompresses it with the <em>zstd</em> binary we unpacked
earlier, and un-tars it into another temporary directory.</p>
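<p>The whole pack/unpack cycle fits in a few lines of shell. Here is a
self-contained sketch of it, with <em>gzip</em> standing in for the statically-linked
<em>zstd</em> binary so it runs anywhere:</p>

```shell
#!/bin/sh
# Round-trip sketch of the packing scheme: tar -> compress -> base64, and back.
set -e
src=$(mktemp -d)
out=$(mktemp -d)

echo 'hello from the app' > "$src/app.txt"

# Pack: tar the app directory, compress it, and base64-encode the result.
# The generator embeds this blob in the launcher script as a heredoc.
blob=$(tar -C "$src" -cf - . | gzip -c | base64)

# Unpack: this is what the generated launcher does at startup.
printf '%s\n' "$blob" | base64 -d | gzip -dc | tar -C "$out" -xf -

# Run the "application"; a real launcher would exec the app's entry point
# and clean up the temp directories with a trap on EXIT.
cat "$out/app.txt"
rm -rf "$src"
```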
<p>After this, the application is executed normally from this temp directory. Once
our shell script exits, all the temp files are deleted from the system.</p>
<p>All of this works, and we’ve deployed applications with this strategy. The only
downside seems to be a small latency when executing the application, and this is
due to the decompression and extraction of the embedded tar file.</p>
<h2 id="future-work">Future work</h2>
<p>This project was both fun and useful in real life. If I come back to it in the
future, I’m planning to investigate some of the things below.</p>
<ul>
<li>Instead of base64, embed binary data after the script and unpack with <a href="https://pubs.opengroup.org/onlinepubs/9699919799/utilities/dd.html">dd</a>.</li>
<li>Instead of a shell script with embedded data, make a statically compiled
executable that can be prepended to archives.</li>
<li>Instead of extracting a zstd binary and using it to decompress data, compile
the decompressor into the “runtime” as a static library.</li>
<li>Allow stuff like <code class="language-plaintext highlighter-rouge">--appimage-extract</code> to extract application files somewhere
without executing them.</li>
</ul>
<h1><a href="https://www.gkbrk.com/2022/11/turkey-earthquake-data/">Earthquake data for Turkey</a></h1>
<p><em>2022-11-26, Gokberk Yaltirakli</em></p>
<p>Istanbul residents recently had a scare when a magnitude 6 earthquake hit the
nearby city of Duzce. When we felt the tremors, I wanted to confirm that it
was actually an earthquake, and wanted to check where the epicenter was.</p>
<p>I went to the website of the national disaster management agency, AFAD, and
found that it was overloaded and could not be accessed. I then tried the
website of an earthquake observatory, Kandilli, and found that it was
unreachable too.</p>
<p>At the same time, we managed to get some information via Twitter, and later
learned more details with various earthquake apps. In a future event, I don’t
want to be searching for random websites and apps, so I decided to create a
consolidated data source that I can quickly access. I’m sharing it here in
case it’s useful to anyone else.</p>
<h2 id="afad-data">AFAD data</h2>
<p>AFAD, the national disaster management agency, has a website that provides
information about the latest earthquakes. It uses an HTTP endpoint that
returns a JSON object with the latest earthquake data. Here’s how it works.</p>
<p>The endpoint is <code class="language-plaintext highlighter-rouge">https://deprem.afad.gov.tr/EventData/GetEventsByFilter</code>. To
get data out, you need to provide a filter. The filter is a JSON object that
specifies the start and end dates of the data you want. Here’s an example.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="n">requests</span>
<span class="kn">import</span> <span class="n">datetime</span>
<span class="n">URL</span> <span class="o">=</span> <span class="sh">"</span><span class="s">https://deprem.afad.gov.tr/EventData/GetEventsByFilter</span><span class="sh">"</span>
<span class="n">end_date</span> <span class="o">=</span> <span class="n">datetime</span><span class="p">.</span><span class="n">datetime</span><span class="p">.</span><span class="nf">now</span><span class="p">()</span>
<span class="n">start_date</span> <span class="o">=</span> <span class="n">end_date</span> <span class="o">-</span> <span class="n">datetime</span><span class="p">.</span><span class="nf">timedelta</span><span class="p">(</span><span class="n">days</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">event_filter</span> <span class="o">=</span> <span class="p">{</span>
<span class="sh">"</span><span class="s">EventSearchFilterList</span><span class="sh">"</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span><span class="sh">"</span><span class="s">FilterType</span><span class="sh">"</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="sh">"</span><span class="s">Value</span><span class="sh">"</span><span class="p">:</span> <span class="n">start_date</span><span class="p">.</span><span class="nf">isoformat</span><span class="p">()},</span>
<span class="p">{</span><span class="sh">"</span><span class="s">FilterType</span><span class="sh">"</span><span class="p">:</span> <span class="mi">9</span><span class="p">,</span> <span class="sh">"</span><span class="s">Value</span><span class="sh">"</span><span class="p">:</span> <span class="n">end_date</span><span class="p">.</span><span class="nf">isoformat</span><span class="p">()},</span>
<span class="p">],</span>
<span class="sh">"</span><span class="s">Skip</span><span class="sh">"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span>
<span class="sh">"</span><span class="s">Take</span><span class="sh">"</span><span class="p">:</span> <span class="mi">100</span><span class="p">,</span>
<span class="sh">"</span><span class="s">SortDescriptor</span><span class="sh">"</span><span class="p">:</span> <span class="p">{</span><span class="sh">"</span><span class="s">field</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">eventDate</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">dir</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">desc</span><span class="sh">"</span><span class="p">},</span>
<span class="p">}</span>
<span class="n">resp</span> <span class="o">=</span> <span class="n">requests</span><span class="p">.</span><span class="nf">post</span><span class="p">(</span><span class="n">URL</span><span class="p">,</span> <span class="n">json</span><span class="o">=</span><span class="n">event_filter</span><span class="p">)</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">resp</span><span class="p">.</span><span class="nf">json</span><span class="p">()</span>
<span class="nf">print</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
</code></pre></div></div>
<h2 id="emsc-lastquake-app">EMSC LastQuake app</h2>
<p>The European Mediterranean Seismological Centre (EMSC) has an Android app called
“LastQuake”. It has a backend that returns earthquake data in GeoJSON format.</p>
<p><code class="language-plaintext highlighter-rouge">https://www.emsc-csem.org/service/api/1.6/get.geojson?type=full</code> returns all
earthquakes while
<code class="language-plaintext highlighter-rouge">https://www.emsc-csem.org/service/api/1.6/get.geojson?type=risk</code> returns only
the ones that are considered “significant”.</p>
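<p>The responses are plain GeoJSON, so the standard library is enough to pick
them apart. The property names below (<code class="language-plaintext highlighter-rouge">mag</code>, <code class="language-plaintext highlighter-rouge">flynn_region</code>, <code class="language-plaintext highlighter-rouge">time</code>) are assumptions
based on typical EMSC responses; check a live response before relying on them.
A hardcoded sample stands in for the HTTP call:</p>

```python
import json

# Trimmed-down sample in the shape the EMSC GeoJSON endpoints return.
# Property names are assumptions; verify against a live response.
sample = json.loads("""
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [31.0, 40.8, 10.0]},
      "properties": {"mag": 6.0, "flynn_region": "WESTERN TURKEY",
                     "time": "2022-11-23T01:08:15Z"}
    }
  ]
}
""")

for feature in sample["features"]:
    props = feature["properties"]
    # GeoJSON coordinates are ordered [longitude, latitude, (depth)].
    lon, lat = feature["geometry"]["coordinates"][:2]
    print(f'{props["time"]} M{props["mag"]} {props["flynn_region"]} ({lat}, {lon})')
```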
<h2 id="kandilli-data">Kandilli data</h2>
<p>The Kandilli Observatory and Earthquake Research Institute also has a website,
possibly the most popular page that everyone uses to check for earthquakes.</p>
<p>The URL is <code class="language-plaintext highlighter-rouge">https://www.koeri.boun.edu.tr/scripts/lst0.asp</code>. It returns an HTML
page with the latest earthquake data. The data is formatted as plain text, and
contained in a <code class="language-plaintext highlighter-rouge">&lt;pre&gt;</code> tag.</p>
<p>Another URL that seems to return the same data is
<code class="language-plaintext highlighter-rouge">http://www.koeri.boun.edu.tr/scripts/lst1.asp</code>.</p>
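<p>Since the data sits inside a single <code class="language-plaintext highlighter-rouge">&lt;pre&gt;</code> tag, a small regex is enough to
pull it out. A hardcoded snippet stands in for the real response so the sketch
is self-contained <em>(the column layout is a guess; verify it against a live
page)</em>:</p>

```python
import re

# Stand-in for requests.get("https://www.koeri.boun.edu.tr/scripts/lst0.asp").text
html = """
<html><body><pre>
2022.11.23 01:08:15  40.8320  30.9930  10.0  -.-  6.0  -.-  DUZCE
</pre></body></html>
"""

# Grab everything between the <pre> tags and split it into non-empty lines.
match = re.search(r"<pre>(.*?)</pre>", html, flags=re.DOTALL)
lines = [line for line in match.group(1).splitlines() if line.strip()]
for line in lines:
    print(line.split())
```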
<h2 id="kandilli-mobile-app">Kandilli mobile app</h2>
<p>The same observatory also has a mobile app that provides earthquake data. The
app is basically a webview that loads a mobile-friendly version of the website.</p>
<p>It can be accessed at <code class="language-plaintext highlighter-rouge">https://m.koeri.boun.edu.tr/dbs3/deprem-liste.asp</code>.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="n">requests</span>
<span class="kn">import</span> <span class="n">re</span>
<span class="n">URL</span> <span class="o">=</span> <span class="sh">"</span><span class="s">https://m.koeri.boun.edu.tr/dbs3/deprem-liste.asp</span><span class="sh">"</span>
<span class="n">html</span> <span class="o">=</span> <span class="n">requests</span><span class="p">.</span><span class="nf">get</span><span class="p">(</span><span class="n">URL</span><span class="p">).</span><span class="n">text</span>
<span class="c1"># Remove HTML comments
</span><span class="n">html</span> <span class="o">=</span> <span class="n">re</span><span class="p">.</span><span class="nf">sub</span><span class="p">(</span><span class="sa">r</span><span class="sh">"</span><span class="s">&lt;!--.*?--&gt;</span><span class="sh">"</span><span class="p">,</span> <span class="sh">""</span><span class="p">,</span> <span class="n">html</span><span class="p">,</span> <span class="n">flags</span><span class="o">=</span><span class="n">re</span><span class="p">.</span><span class="n">DOTALL</span><span class="p">)</span>
<span class="n">pattern</span> <span class="o">=</span> <span class="sa">r</span><span class="sh">"</span><span class="s">deprem_detay\(</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">,</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">,</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">,</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">,</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">,</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">,</span><span class="sh">'</span><span class="s">(.*?)</span><span class="sh">'</span><span class="s">\);</span><span class="sh">"</span>
<span class="n">matches</span> <span class="o">=</span> <span class="n">re</span><span class="p">.</span><span class="nf">findall</span><span class="p">(</span><span class="n">pattern</span><span class="p">,</span> <span class="n">html</span><span class="p">)</span>
<span class="k">for</span> <span class="n">match</span> <span class="ow">in</span> <span class="n">matches</span><span class="p">:</span>
<span class="nf">print</span><span class="p">(</span><span class="n">match</span><span class="p">)</span>
</code></pre></div></div>
<h2 id="earthquake-network-application">Earthquake Network application</h2>
<p>I found that a lot of people were using an Android app called “Earthquake
Network”. It seems to have a PHP backend that returns the earthquake data in
JSON format. Here are some interesting endpoints.</p>
<p>There is an endpoint that returns all earthquakes greater than a given
magnitude at <code class="language-plaintext highlighter-rouge">http://srv.earthquakenetwork.it/distquake_download_automatic19.php?mag=2.0&pro=all</code>.</p>
<p>The <code class="language-plaintext highlighter-rouge">mag</code> parameter specifies the minimum magnitude, and the <code class="language-plaintext highlighter-rouge">pro</code> parameter
specifies the organization that reported the earthquake. The value <code class="language-plaintext highlighter-rouge">all</code> means
all organizations. To get the maximum number of earthquakes, you can set the
magnitude to 0 and the organization to <code class="language-plaintext highlighter-rouge">all</code>.</p>
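<p>Building the URL from those parameters is straightforward, for example
<em>(the parameter names come from the observations above, not from any official
API documentation)</em>:</p>

```python
from urllib.parse import urlencode

BASE = "http://srv.earthquakenetwork.it/distquake_download_automatic19.php"

def quake_url(min_magnitude=0.0, provider="all"):
    """Build the endpoint URL for a minimum magnitude and reporting organization."""
    return f"{BASE}?{urlencode({'mag': min_magnitude, 'pro': provider})}"

print(quake_url(2.0, "usgs"))
```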
<p>Some values for the <code class="language-plaintext highlighter-rouge">pro</code> parameter are <em>all</em>, <em>bdtim</em>, <em>csi</em>, <em>csn</em>, <em>emsc</em>,
<em>funvisis</em>, <em>geonet</em>, <em>ign</em>, <em>ineter</em>, <em>ingv</em>, <em>inpres</em>, <em>jma</em>, <em>ncs</em>,
<em>phivolcs</em>, <em>rsn</em>, <em>rspr</em>, <em>sgc</em>, <em>ssn</em>, <em>uasd</em> and <em>usgs</em>.</p>
<p>Another interesting feature of the application is the live chat. It seems
valuable to get updates from people in the area, so this endpoint is also
interesting. The URL is <code class="language-plaintext highlighter-rouge">http://srv.earthquakenetwork.it/distquake_download_chat5.php</code>.</p>
<p>It returns a JSON list of chat messages. The endpoint takes two parameters,
<code class="language-plaintext highlighter-rouge">idmin</code> and <code class="language-plaintext highlighter-rouge">postfix</code>. The <code class="language-plaintext highlighter-rouge">idmin</code> parameter specifies the minimum ID of the
messages you want, and can be used to prevent downloading the same messages
multiple times. The <code class="language-plaintext highlighter-rouge">postfix</code> parameter is like a room ID, and can be used to
get messages from a specific region. <code class="language-plaintext highlighter-rouge">_tr_gen</code> is the general chat for Turkey.</p>
<h2 id="conclusion">Conclusion</h2>
<p>I am planning to use these data sources to create a consolidated “status page”,
like a personal dashboard that I can use to check for earthquakes. I hope
others find this useful as well.</p>
<p>Please remember that people use these websites and apps during emergencies, so
don’t overload them. If you are going to scrape data, be considerate and don’t
make too many requests.</p>
<p>In fact, if you are making a user-facing application, put a cache in front of
the data sources. This way, you can reduce the load on the data sources, and
also provide a better user experience by reducing the latency of your
application. If the websites I initially used had a cache, I would have been
able to access them during the earthquake.</p>
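<p>A cache like that doesn’t need much machinery. Here is a minimal sketch of a
time-based cache wrapper <em>(the class and its names are made up for
illustration)</em>:</p>

```python
import time

class TTLCache:
    """Tiny time-based cache so repeated dashboard requests don't hit upstream."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry timestamp, value)

    def get_or_fetch(self, key, fetch):
        """Return the cached value for key, calling fetch() only when stale."""
        now = time.time()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]
        value = fetch()
        self._store[key] = (now + self.ttl, value)
        return value


calls = []

def fetch_quakes():
    # Stand-in for a real HTTP request to one of the sources above.
    calls.append(1)
    return [{"magnitude": 6.0, "region": "Duzce"}]

cache = TTLCache(ttl_seconds=60)
first = cache.get_or_fetch("afad", fetch_quakes)
second = cache.get_or_fetch("afad", fetch_quakes)  # served from cache, no new request
```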
<h1><a href="https://www.gkbrk.com/2022/11/a-brief-overview-of-mastodon/">A Brief Overview of Mastodon</a></h1>
<p><em>2022-11-19, Gokberk Yaltirakli</em></p>
<p>Recently on both my Twitter and Mastodon feeds, I’ve been seeing a lot of people
talking about a Twitter to Mastodon migration. I’ve been on the Fediverse, and
similar networks, for some time now, but I haven’t written too much about it.</p>
<p>I thought I’d take a moment to write a brief overview of Mastodon, in case
anyone hears about it for the first time and wants to know more.</p>
<p>With Elon Musk’s purchase of Twitter, and the rule changes that have followed,
Twitter has seen a lot of speculation about its future. There are a lot of
people on both sides of the fence, but one thing is clear: For better or worse,
Twitter is changing.</p>
<p>Many users that have left Twitter are switching to Mastodon. It is
open-source, decentralized social media software that allows users to
communicate with each other. It is now growing in popularity as many people use
it as an alternative to Twitter. This article will give an overview of Mastodon
and explain what it is and how it’s different from Twitter.</p>
<h1 id="what-is-mastodon">What is Mastodon?</h1>
<p>Mastodon is an open-source social media platform that is similar to Twitter. It
allows users to join a federated network called the Fediverse. The Fediverse is
a network of independent servers that are connected to each other. This allows
users to communicate with each other even if they are using different servers
and different applications.</p>
<p>There are many applications that can be used to access the Fediverse, but
Mastodon seems to be the popular choice during the Twitter exodus. It is most
likely due to being used previously by many people as a Twitter alternative
whenever people were unhappy with Twitter’s policies.</p>
<p>Mastodon was created by Eugen Rochko in 2016. At some point, the developers
created a non-profit organization in Germany called Mastodon gGmbH.</p>
<p>The software is open-source and is licensed under the GNU AGPLv3 license. This
means that anyone can use the software for free, modify it as they wish, and
redistribute it to others. This is a massive difference to Twitter, where
users are at the mercy of the company’s policies.</p>
<p>People are allowed to contribute to the code and fix bugs if they find any. They
can also extend the platform with new features and translate the interface into
different languages.</p>
<p>Any user can run their own version of Mastodon or host their own Mastodon
server. Each server has its own set of rules and regulations, which apply only
to the people who use that server.</p>
<p>Mastodon can be used through mobile phone apps and web browsers, and is gaining
popularity daily.</p>
<h1 id="how-is-mastodon-different-from-twitter">How is Mastodon different from Twitter?</h1>
<p>Mastodon and Twitter are both free social media platforms which share many
similarities. Both platforms allow users to tweet (<em>toot</em> or <em>post</em> in
Mastodon), follow other users, like posts and retweet (<em>boost</em> in Mastodon)
posts made by other users. So for the core functionality, they are very similar.</p>
<p>Twitter is a single social network: people sign up on Twitter and share
content only on Twitter. If you are on Twitter and want to communicate with
your friend, they must also be on Twitter. If they are not, you cannot
interact with them.</p>
<p>Whereas on Mastodon, you can communicate with your friend even if they are on a
different server. You can follow people on different servers, and even different
applications. For example, you can follow someone on Pixelfed, which is like
Instagram for the Fediverse, and they can follow your Mastodon account.</p>
<p>On Twitter, the company has the power to change the rules and regulations at any
time. They can decide to change their spam policy, or alter their algorithm to
show more or less of a certain type of content. On Mastodon, the server owners
decide all of this. If you don’t like the rules on one server, you can switch to
another server that has different rules, or you can host your own server.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Mastodon shows up first on the list if you search for Twitter alternatives,
but it has plenty of differences and unique features that let it stand out as
its own thing. So, use Mastodon not as a replacement for Twitter but as a new
social app.</p>
<p>If it seems like Mastodon is not for you, there are plenty of other Fediverse
apps that you can try. Thanks to the power of federation, you can use any of
these apps and still communicate with your friends on Mastodon.</p>