<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Built With Vibes]]></title><description><![CDATA[AI is changing how we build things. It empowers people to build stuff they want with caffeine and vibes. This publication covers exactly that.]]></description><link>https://blog.lakshyabuilds.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1756046410145/cdf7df94-1924-46a0-a728-fb6efdc2b09a.png</url><title>Built With Vibes</title><link>https://blog.lakshyabuilds.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 22:33:40 GMT</lastBuildDate><atom:link href="https://blog.lakshyabuilds.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Partitioning a Table in PostgreSQL - Pros and Cons Explained]]></title><description><![CDATA[Partition is a very sensitive topic for some people. Like the 1947 India-Pak partition, which led to so many families being devastated. But some partitions are not that bad. For example, partitioning your relational table. 
This blog talks about the good...]]></description><link>https://blog.lakshyabuilds.com/partitioning-a-table-in-postgresql-pros-and-cons-explained</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/partitioning-a-table-in-postgresql-pros-and-cons-explained</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Database Partitioning]]></category><category><![CDATA[pg_partman]]></category><category><![CDATA[psql]]></category><category><![CDATA[pg_cron]]></category><category><![CDATA[indexing]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Mon, 19 Jan 2026 13:30:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768762303965/c33af65f-4633-44bb-a696-86a807535850.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Partition is a very sensitive topic for some people. Like the 1947 India-Pak partition, which led to so many families being devastated. But some partitions are not that bad. For example, partitioning your relational table. This blog talks about the good partitions and how you can use them to supercharge your database operations.</p>
<h2 id="heading-what-is-meant-by-partitioning">What Is Meant by Partitioning?</h2>
<p>The idea behind partitioning a table is very simple - divide and conquer. Say you have a very big table containing information about all the orders placed in the last 5 years. Querying it to identify a customer’s recent orders involves a select query that uses an index built over the entire table’s data. Due to the sheer size of the table, the index itself may be quite big, running into gigabytes of storage space. Compare that to searching a table that only contains orders from the last three months. Because there are fewer records, the search space is significantly smaller than in the table with all the records. This is the core idea behind partitioning a table. Instead of one big table, there exist multiple smaller tables, each containing a subset of the original data.</p>
<p>By definition, partitioning is a physical data organization strategy where a single logical table is split into multiple child tables based on a key - the partition key.</p>
<h2 id="heading-so-many-tables-how-are-read-amp-writes-handled">So Many Tables! How Are Reads &amp; Writes Handled?</h2>
<p>With so many small tables, how do you decide which data goes into which table during insertion, and how do you search across multiple small tables? The short answer is - <strong>you don’t</strong> (decide). PSQL automatically routes your inserts to the correct partition based on the partitioning key or column. Similarly, data fetching depends on your query. The idea is to identify the partitions your data may lie in and scan only those. If the partition column is part of your query, the query executor will not search partitions that cannot contain your data. It will prune / exclude those partitions from the search, leading to a more selective and optimized scan across only the required partitions. In the worst case, the planner cannot prune any partitions and behaves similar to a UNION ALL scan. In your context, nothing changes. You will still insert and fetch data in the same manner you did before partitioning the table.</p>
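<p>As a minimal sketch - assuming an <code>orders</code> table range-partitioned by <code>order_date</code> with daily partitions, as created later in this post - inserts always target the parent table, and pruning can be verified with EXPLAIN:</p>
<pre><code class="lang-pgsql">-- Insert into the parent table; PSQL routes the row
-- to the matching partition automatically
INSERT INTO orders (order_date, customer_id, amount)
VALUES ('2026-01-16', 236, 499.00);

-- The plan should show a scan on the single matching
-- partition only; all other partitions are pruned
EXPLAIN SELECT * FROM orders WHERE order_date = '2026-01-16';
</code></pre>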
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you haven’t already, subscribe to the newsletter and never miss out on our simplified tech articles!</div>
</div>

<h2 id="heading-why-to-partition-our-table">Why Partition (Our Table)?</h2>
<p>Speed. Efficiency. Storage optimization. These are the top three reasons I would consider partitioning my table. Queries that include the partition key can execute significantly faster, as discussed above, because of partition pruning. Due to faster query execution, you save on the average CPU and memory usage of your database. Additionally, if needed, you can drop or archive old partitions easily. This helps you save on database storage costs as well, and is better than identifying and deleting records selectively.</p>
<pre><code class="lang-pgsql"><span class="hljs-comment">-- Query used to identify impact of partitioning</span>
<span class="hljs-keyword">EXPLAIN</span> <span class="hljs-keyword">ANALYZE</span> 
<span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> orders 
<span class="hljs-keyword">WHERE</span> customer_id = <span class="hljs-number">236</span> 
<span class="hljs-keyword">AND</span> order_date = <span class="hljs-string">'2026-01-17'</span>;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768759309190/66478825-8a8d-494e-a01e-35bb47307478.png" alt class="image--center mx-auto" /></p>
<p>Notice how the query execution time reduces by more than 80% after partitioning our table.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768759328118/98b66e4d-66d8-447b-bdea-0af51018cd57.png" alt class="image--center mx-auto" /></p>
<p>One may argue that the same benefits can be realized by just archiving the old table and creating a new one in its place. While that is true, this approach won’t work where you frequently need to read data from the archived table to execute queries. For example, a table powering the customer support dashboard: support agents may want to see past customer interactions to resolve queries effectively.</p>
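<p>Unlike archiving a whole table, dropping or detaching an old partition is a single DDL statement. A minimal sketch (the partition name here is illustrative, following the daily naming convention shown later in this post):</p>
<pre><code class="lang-pgsql">-- Detach first if you want to archive the partition's data elsewhere
ALTER TABLE orders DETACH PARTITION orders_p20260116;

-- Or drop the partition outright once its data is no longer needed
DROP TABLE orders_p20260116;
</code></pre>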
<h2 id="heading-when-should-you-not-partition">When Should You Not Partition?</h2>
<p>If you have a write-heavy table, where write performance matters and reads are rare, partitioning may not be helpful. In the case of a medium-sized table with a moderate number of records, you can instead focus on revisiting your current indexes to optimize read performance rather than partitioning your table. Partitioning comes with the added overhead of creating and maintaining partitions. You can refer to this <a target="_blank" href="https://blog.lakshyabuilds.com/read-this-before-creating-your-next-index-in-psql">article</a> if you want to know more about how to unlock maximum performance from your table indexes.</p>
<p>Another hurdle while implementing partitioning is the tricky situation with unique constraints. In PSQL, any unique constraint on a partitioned table must include the partition key. For example, in a payments table partitioned monthly by creation date where the transaction ID needs to be unique, a unique constraint on (trx ID, creation date) would only ensure that the trx ID is unique within each partition, i.e., within every month’s data. Technically, this means one can insert duplicate trx IDs with two different dates. This may lead to serious validation issues in case invalid data were to be captured in the database. This limitation is not an accident but a design choice in PSQL, made to preserve partition isolation and not hamper write performance. An industry-accepted workaround is to create a separate, non-partitioned table recording such unique entries to ensure the system’s integrity. Once the entry is created in this table, it can be safely inserted into the original partitioned table.</p>
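<p>A minimal sketch of that workaround (the table and column names are illustrative): a small non-partitioned guard table carries the global uniqueness guarantee, while the partitioned table keeps the payload.</p>
<pre><code class="lang-pgsql">-- Non-partitioned guard table: enforces global uniqueness of trx_id
CREATE TABLE payment_trx_ids (
    trx_id TEXT PRIMARY KEY
);

-- Insert here first; a duplicate trx_id fails at this step,
-- before anything reaches the partitioned payments table
INSERT INTO payment_trx_ids (trx_id) VALUES ('TRX-001');
</code></pre>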
<h2 id="heading-how-to-partition">How To Partition?</h2>
<p>While there are multiple ways to create and maintain partitions, these two are my current favorites:</p>
<ol>
<li><p><strong>Raw Dog Way</strong></p>
<ol>
<li><p>Create a partitioned table.</p>
</li>
<li><pre><code class="lang-pgsql"> <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> orders (
     id           <span class="hljs-type">BIGSERIAL</span>,
     order_date   <span class="hljs-type">DATE</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">NULL</span>,
     customer_id  <span class="hljs-type">BIGINT</span>,
     amount       <span class="hljs-type">NUMERIC</span>(<span class="hljs-number">10</span>,<span class="hljs-number">2</span>),
     <span class="hljs-keyword">PRIMARY KEY</span> (id, order_date)
 ) <span class="hljs-keyword">PARTITION BY RANGE</span> (order_date);
</code></pre>
</li>
<li><p>Run an automation job at regular intervals to check for existing partitions.</p>
</li>
<li><p>Create new partitions and attach them to the parent table.</p>
</li>
<li><pre><code class="lang-pgsql"> <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> orders_p20260116
 <span class="hljs-keyword">PARTITION</span> <span class="hljs-keyword">OF</span> orders
 <span class="hljs-keyword">FOR</span> <span class="hljs-keyword">VALUES</span> <span class="hljs-keyword">FROM</span> (<span class="hljs-string">'2026-01-16'</span>) <span class="hljs-keyword">TO</span> (<span class="hljs-string">'2026-01-17'</span>);
</code></pre>
</li>
<li><p>If not needed, drop old partitions.</p>
</li>
</ol>
</li>
<li><p><strong>The Abstracted Extension Way</strong></p>
<ol>
<li><p>Install pg_partman - a PSQL extension for partition management.</p>
</li>
<li><p>Create a partitioned table. [same as shown in the above approach]</p>
</li>
<li><p>Register your table with pg_partman and define partition intervals.</p>
</li>
<li><pre><code class="lang-pgsql"> <span class="hljs-keyword">SELECT</span> <span class="hljs-built_in">public</span>.create_parent(
     p_parent_table := <span class="hljs-string">'public.orders'</span>,
     p_control      := <span class="hljs-string">'order_date'</span>,
     p_interval     := <span class="hljs-string">'1 month'</span>
 );

 <span class="hljs-comment">-- This will create partitions of the orders table with 1-month intervals</span>
</code></pre>
</li>
<li><p>Use an automation job to run the pg_partman maintenance job at regular intervals.</p>
</li>
<li><p>The maintenance job takes care of creating new and deleting old partitions on its own.</p>
</li>
</ol>
</li>
</ol>
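<p>For the automation step above, one option is to schedule pg_partman’s maintenance with pg_cron. A sketch, assuming both extensions are installed and pg_partman lives in a schema named <code>partman</code> (the job name and hourly schedule are illustrative):</p>
<pre><code class="lang-pgsql">-- Run pg_partman's maintenance at the start of every hour
SELECT cron.schedule(
    'partman-maintenance',
    '0 * * * *',
    $$CALL partman.run_maintenance_proc()$$
);
</code></pre>
<p>The maintenance run then creates upcoming partitions and, if retention is configured, detaches or drops expired ones.</p>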
<h2 id="heading-final-remarks">Final Remarks</h2>
<p>Partitioning is a good strategy. It helps when your queries naturally filter by the partition key. However it comes with its own limitations regarding enforcing uniqueness at a table level and the overhead of maintaining partitions. Therefore, it is very important to identify your needs and setup before implementing it. Partitioning is a scalpel, not a hammer. Use it where it cuts deep, not everywhere it can.</p>
<p>If you liked this blog, you will surely love the upcoming blog where we discuss using <strong>pg_partman</strong> in detail with step-by-step instructions on how to install and configure it on your own. We will also cover the usage of <strong>pg_cron</strong> to simplify the overall process of partition management. Stay tuned as the blog drops early next month. Thanks for reading this article. Namaste!</p>
]]></content:encoded></item><item><title><![CDATA[Read this Before Creating Your Next Index in PSQL]]></title><description><![CDATA[If you are a beginner in your developer journey tasked with optimizing a database query, you must have read about applying indexes. But not all indexes bring performance gains. Some can even degrade it. Read this blog before you create your next ind...]]></description><link>https://blog.lakshyabuilds.com/read-this-before-creating-your-next-index-in-psql</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/read-this-before-creating-your-next-index-in-psql</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[indexing]]></category><category><![CDATA[Database Optimization,]]></category><category><![CDATA[Composite Index]]></category><category><![CDATA[Indexing strategy]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[psql]]></category><category><![CDATA[postgres]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Wed, 31 Dec 2025 00:49:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767141848296/a0be8d6d-5c30-4bdf-ab09-094627a253f2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you are a beginner in your developer journey tasked with optimizing a database query, you must have read about applying indexes. But not all indexes bring performance gains. Some can even degrade it. Read this blog before you create your next index.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767141232189/6dad9d08-7c36-4917-b933-9ce641ce7cff.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-indexing-works-in-psql">How Indexing Works in PSQL</h2>
<h3 id="heading-query-planner">Query Planner</h3>
<p>Whenever you run a query on your database, a query planner analyzes the query and creates a plan to execute it. This plan optimizes for performance. It focuses on the following aspects: the number of workers to use, scan strategy (sequential or indexed), and whether to use in-memory or file storage sorting. It also estimates the time to execute and records (tuples) that will be fetched during the query. All this is estimated by analyzing historical data trends (PG statistics). By adding “explain” before your query, you can access the above-mentioned query plan.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">EXPLAIN</span> <span class="hljs-keyword">ANALYZE</span>
<span class="hljs-keyword">SELECT</span> count(*) <span class="hljs-keyword">FROM</span> invoices 
<span class="hljs-keyword">WHERE</span> billing_country = <span class="hljs-string">'india'</span>;
</code></pre>
<p>Query Response:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Aggregate (cost=10000000012.38..10000000012.39 rows=1 width=8) (actual time=0.100..0.100 rows=1 loops=1)</td></tr>
</thead>
<tbody>
<tr>
<td>-&gt; Seq Scan on invoices (cost=10000000000.00..10000000012.38 rows=1 width=0) (actual time=0.100..0.100 rows=0 loops=1)</td></tr>
<tr>
<td>Filter: ((billing_country)::text = 'india'::text)</td></tr>
<tr>
<td>Rows Removed by Filter: 10</td></tr>
<tr>
<td>Planning Time: 0.200 ms</td></tr>
<tr>
<td>Execution Time: 0.100 ms</td></tr>
</tbody>
</table>
</div><p><a target="_blank" href="https://playcode.io/sql-playground">link</a> for accessing the above mentioned playground database.</p>
<p>Before creating any index, you should read the query plan of the commonly used queries that will be affected by the index. Since this plan is based on historical data, it is important that your metrics are updated continuously. This is handled by running “vacuum analyze” on your table.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">vacuum</span> <span class="hljs-keyword">analyze</span> invoices
</code></pre>
<h3 id="heading-sequential-scan-vs-indexed-scan">Sequential Scan vs Indexed Scan</h3>
<p>The cost of executing a query is directly proportional to the number of tuples or DB pages it fetches. Say the cost of fetching one page via a sequential scan is X units. Then, fetching 1 lakh (100,000) pages will cost 100,000 × X units. These pages are fetched sequentially, resulting in a lower cost for accessing continuous records.</p>
<p>Similarly, when utilizing an index, the cost of fetching a page from a table can be 4X units. This is more than the cost of accessing a page in a sequential scan, because your index stores tuple references: to access the data, the query executor needs to fetch pages from the actual disk storage, and this access can happen in random order. The DB query engine therefore adds a punishing cost for this random behavior. As a result, the cost per page can be up to 4 times the cost of accessing records sequentially.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767138234771/412d233e-24b1-48a8-8f4f-c361ddcc9512.png" alt class="image--center mx-auto" /></p>
<p>So, it all boils down to the number of records (database pages) fetched. If, by using an index, the query planner estimates it will have to fetch about 40,000 pages, the cost will be roughly 40,000 × 4X = 160,000X units. This is higher than a sequential scan fetching 1 lakh (100,000) pages at 100,000 × X = 100,000X units. Hence, the query planner may prefer not to use the index. This is an important aspect in deciding whether adding an index will help in your case or not.</p>
<p>There’s one important consideration. When using high-speed storage hardware like SSDs for your database’s physical storage, the penalty for accessing entries in random order may not be 4 times the cost of accessing them sequentially. The relevant cost parameter can then be tuned to match your specific hardware.</p>
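<p>In PostgreSQL, that penalty is governed by the <code>random_page_cost</code> planner setting (default 4.0, relative to <code>seq_page_cost</code> = 1.0). A sketch of lowering it for SSD-backed storage - the value 1.1 is a common starting point, not a universal recommendation:</p>
<pre><code class="lang-pgsql">-- Inspect the current cost settings
SHOW random_page_cost;
SHOW seq_page_cost;

-- Reduce the random-access penalty for fast SSD storage
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
</code></pre>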
<h2 id="heading-selectivity-of-your-index">Selectivity of Your Index</h2>
<p>In the above section, we discussed how the total number of records to be fetched has a direct impact on your overall query performance. To promote the usage of your index, it is important for your query to return fewer records, i.e., to be more selective. To identify this information, you can refer to the cost estimates in your query planner results, which show the estimated number of records to be fetched.</p>
<p>Additionally, the query planner estimates the record count by reading values from the historical data we discussed above. PG statistics contain information about the data distribution of all your tables.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> pg_stats
<span class="hljs-keyword">WHERE</span> tablename = <span class="hljs-string">'invoices'</span>
</code></pre>
<p>Query Response:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>TABLENAME</strong></td><td><strong>ATTNAME</strong></td><td><strong>MOST_COMMON_VALS</strong></td><td><strong>MOST_COMMON_FREQS</strong></td><td><strong>CORRELATION</strong></td><td><strong>HISTOGRAM_BOUNDS</strong></td></tr>
</thead>
<tbody>
<tr>
<td>invoices</td><td>billing_address</td><td>NULL</td><td>NULL</td><td>NULL</td><td>NULL</td></tr>
<tr>
<td>invoices</td><td>invoice_id</td><td><em>NULL</em></td><td><em>NULL</em></td><td>1</td><td>{1,2,3,4,5,6,7,8,9,10}</td></tr>
<tr>
<td>invoices</td><td>customer_id</td><td>{1,2,3,4,5}</td><td>0.2,0.2,0.2,0.2,0.2</td><td>0.6364</td><td><em>NULL</em></td></tr>
<tr>
<td>invoices</td><td>invoice_date</td><td><em>NULL</em></td><td><em>NULL</em></td><td>1</td><td>{"2021-01-01 00:00:00","2021-01-02 00:00:00","2021-01-03 00:00:00","2021-01-06 00:00:00","2021-01-11 00:00:00","2021-02-01 00:00:00","2021-02-02 00:00:00","2021-02-08 00:00:00","2021-02-11 00:00:00","2021-02-19 00:00:00"}</td></tr>
<tr>
<td>invoices</td><td>billing_city</td><td>{Montreal,Oslo,Prague,"Sao Paulo",Stuttgart}</td><td>0.2,0.2,0.2,0.2,0.2</td><td>-0.0909</td><td><em>NULL</em></td></tr>
<tr>
<td>invoices</td><td>billing_country</td><td>{Brazil,Canada,"Czech Republic",Germany,Norway}</td><td>0.2,0.2,0.2,0.2,0.2</td><td>0.3939</td><td><em>NULL</em></td></tr>
<tr>
<td>invoices</td><td>total</td><td>{1.98,3.96,5.94}</td><td>0.3,0.2,0.2</td><td>0.1152</td><td>{0.99,8.91,13.86}</td></tr>
</tbody>
</table>
</div><p>Here is a brief explanation about some of the fields shown above:</p>
<ol>
<li><p><strong>Most_common_vals</strong> - some of the values that appear frequently in a column.</p>
</li>
<li><p><strong>Most_common_freqs</strong> - for each of the frequently occurring values, this shows its frequency as a fraction. So to estimate how many records have a billing country of Brazil, multiply the total number of records in the table by the frequency of Brazil, i.e., 0.2.</p>
</li>
<li><p><strong>Correlation</strong> - it describes how correlated the values are with each other. A correlation value nearing 1 or -1 demonstrates a strong correlation. This reduces random access cost when fetching records in a range. (more on this later)</p>
</li>
<li><p><strong>Histogram_bounds</strong> - for range-based queries (e.g., date &gt; yesterday), the planner estimates the record count from the fraction of histogram buckets that the range covers.</p>
</li>
</ol>
<p>I have used the above command numerous times to identify the distinct values and their frequencies. If you are querying a single column with very few distinct values, or on a value that appears very frequently (high most_common_freqs), chances are the query planner will favor a sequential scan over an index scan on a table with a large number of records.</p>
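<p>A narrower version of that check can use the <code>n_distinct</code> column that pg_stats also exposes:</p>
<pre><code class="lang-pgsql">-- Distinct-value estimate and common-value frequencies for one column
SELECT attname, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'invoices'
  AND attname = 'billing_country';
</code></pre>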
<h2 id="heading-composite-index">Composite Index</h2>
<p>A composite index is made on two or more columns, and the ordering of the columns matters greatly. Say you have an index on columns A, B, and C. When querying on only A, or on A and B, your index is used. However, when querying on only C, or on A and C, there is a chance the index is not used. This is because index entries are stored sorted by value A, then value B, and then value C. So querying directly on value C via an index built on A, B, and C is not optimal in most cases. Hence, it is advised to order the columns in your index as per your query pattern. While some people suggest ordering on the basis of selectivity (more selective values earlier), there’s one more piece of advice you should consider.</p>
<h2 id="heading-equality-over-range">Equality Over Range</h2>
<p>If your query filters on column B with an equality operator ( = ) and then on column C using a range filter ( &gt;= ), then column B should come first in the index ordering. This is because, with an index over (B, C), the query planner can find the first matching record and then perform a sequential scan to satisfy the range filter. This reduces the number of tuples fetched in random order from the heap. An index with the reverse column order would not allow this. Hence, it is advised to also consider the comparison operators in your query while deciding the ordering of columns in your index.</p>
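<p>An illustrative sketch using the invoices table from earlier (the column choice and index name are hypothetical):</p>
<pre><code class="lang-pgsql">-- Equality column first, range column second
CREATE INDEX idx_country_date
ON invoices (billing_country, invoice_date);

-- The planner can jump straight to billing_country = 'Brazil'
-- and then walk the invoice_date entries sequentially within it
SELECT * FROM invoices
WHERE billing_country = 'Brazil'
  AND invoice_date &gt;= '2021-01-01';
</code></pre>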
<h2 id="heading-index-only-scan">Index Only Scan</h2>
<p>The query executor has to fetch tuples from the heap, since the index generally stores a reference to your original tuple along with the data used to create the index. In case your query only needs the data stored in the index, it can result in an index-only scan, removing the need to fetch records from the heap. To fully utilize the performance gain from an index-only scan, you can store additional data in your index without indexing those columns. For example, if your table has 7 columns and your top query fetches data from only 3 of them while filtering on just 1 column, you can create an index on that column and store the data for the two additional columns in the same index using the INCLUDE clause.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> idx_billing_country <span class="hljs-keyword">ON</span> invoices (billing_country) <span class="hljs-keyword">INCLUDE</span> (payment_method)
</code></pre>
<p>The benefits of this approach are reduced data-fetching time and the performance gains of an index-only scan. You could achieve similar gains by creating a composite index, but that is not advised when your query pattern does not filter on all those columns in a single query.</p>
<p>The downside of this approach is the increased index size due to the storage of additional data in your index.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>PostgreSQL is a vast database where the devil is in the details. The above article was curated from my personal experience of using Postgres at production scale, along with deep dives into various published articles and documentation. The official PostgreSQL <a target="_blank" href="https://www.postgresql.org/docs/">docs</a> are a very good resource for someone looking to understand how indexing works in detail. Additionally, if you are interested in some important PSQL commands that can come in handy during DB outages, you can consider reading this <a target="_blank" href="https://blog.lakshyabuilds.com/mastering-psql-7-essential-commands-for-database-efficiency">article</a>. Thank you for reading! Namaste.</p>
]]></content:encoded></item><item><title><![CDATA[5 Things About Kafka Consumers That Beginners Should Know]]></title><description><![CDATA[Kafka is an event streaming library used for asynchronous communication between microservices. It is known for its ability to handle high throughput of events with ease. In the story of Kafka, there are two main actors - producers and consumers. Produ...]]></description><link>https://blog.lakshyabuilds.com/5-things-about-kafka-consumers-that-beginners-should-know</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/5-things-about-kafka-consumers-that-beginners-should-know</guid><category><![CDATA[kafka]]></category><category><![CDATA[Apache Kafka]]></category><category><![CDATA[event streaming]]></category><category><![CDATA[SQS]]></category><category><![CDATA[kafka consumers]]></category><category><![CDATA[asynchronous programming]]></category><category><![CDATA[event-driven-architecture]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Wed, 24 Dec 2025 22:44:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766616149819/670f42e6-c67c-4be2-9981-1ef2a2c78a0a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kafka is an event streaming library used for asynchronous communication between microservices. It is known for its ability to handle high throughput of events with ease. In the story of Kafka, there are two main actors - producers and consumers. Producers are simple: publish a message, then go to sleep. Consumers are like the books of Franz Kafka - they look simple on a first read, yet are so complicated inside. There are some important consumer configurations that every developer should be aware of.</p>
<h2 id="heading-max-poll-interval">Max Poll Interval</h2>
<p>Consumers are members of a group, managed by a <strong>Group Coordinator</strong> (which lives on the Kafka broker). The coordinator has too many consumers to look after. So it prioritizes work (event consumption) over everything and loves micromanaging. No, I am not talking about your boss. It expects every consumer to keep checking in and asking for more work (polling) at regular intervals.</p>
<p>If a consumer is processing a heavy task and doesn't ask for more work within a specified time, the coordinator assumes the consumer has died or is stuck in an infinite loop. This threshold is known as <code>max.poll.interval.ms</code>. By definition, it is the maximum duration between two subsequent polling calls of a consumer before the consumer is kicked out of the group and the remaining group members rebalance.</p>
<p><img src="https://i.imgflip.com/afsdca.jpg" alt class="image--center mx-auto" /></p>
<p>Why wouldn’t a consumer poll for messages? Either the consumer is dead, or it has not finished processing the last batch of polled messages. In the latter case, your messages have the potential to be consumed twice. Hence it is important to keep <code>max.poll.interval.ms</code> large enough to give your consumer adequate time to process its messages successfully. A time interval shorter than the average time needed to process an event will result in continuous rebalancing of your consumers without any event being processed.</p>
<h2 id="heading-max-records">Max Records</h2>
<p>When consumers poll for messages, they can specify how many events they want to consume at a time. For example, say you have 20 incoming messages and 10 consumers. If <code>max.poll.records</code> is set to 2, each consumer will consume two messages before making its next polling call. A large <code>max.poll.records</code> means fewer polling calls, thus saving on network calls to fetch data. However, you need to ensure <code>max.poll.interval.ms</code> is configured accordingly, keeping in mind the <code>max.poll.records</code> set for the consumer.</p>
<h2 id="heading-fetch-max-wait">Fetch Max Wait</h2>
<p>While <code>max.poll.records</code> caps how many records a polling call can return at a time, it may happen that the pending messages in the queue add up to less than the configured <code>fetch.min.bytes</code>. In this case, two things can be done - start consuming what is already in the queue, or wait until <code>fetch.min.bytes</code> is satisfied and then start consuming. The maximum duration a consumer can wait before the polling call returns is known as <code>fetch.max.wait.ms</code>. A wait time of zero means the broker will return immediately without waiting for more data to accumulate. The amount of data returned will be whatever is available, capped by <code>max.poll.records</code>.</p>
<p>Depending upon your incoming traffic pattern, you can configure this parameter along with the <code>fetch.min.bytes</code> parameter to reduce frequent polling calls. The downside is the added latency due to the time spent waiting for new records.</p>
<h2 id="heading-heartbeat-interval">Heartbeat Interval</h2>
<p>Just like humans, a consumer’s liveness is captured by its heartbeats. These are small “I am alive” pings that consumers keep sending to the group coordinator (broker side) to let it know they exist. This helps the broker keep track of which consumers are working fine. <code>heartbeat.interval.ms</code> specifies the time interval between any two consecutive heartbeat calls. Sending many such calls within a short duration can unnecessarily spam the broker.</p>
<p><img src="https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fi.pinimg.com%2F736x%2F37%2F51%2F5c%2F37515c644387bde3661a30c611b650e6.jpg&amp;f=1&amp;nofb=1&amp;ipt=8311dbff551c0a5162cf917baf83f243d85358cd6f0337a9f9be363bc4d2d73b" alt="Abhi hum zinda hai - Indian Meme Templates" /></p>
<p>(POV: consumers while sending heartbeats at regular intervals)</p>
<h2 id="heading-session-timeout">Session Timeout</h2>
<p>The Kafka broker is a chill guy. It trusts the consumers to keep sending “I am alive” signals at regular intervals. But once they stop sending them for some time, the group coordinator assumes they are dead. It removes them from the group and triggers a rebalance. The time it waits before assuming a consumer is stuck or hung is defined by <code>session.timeout.ms</code>. Depending on your use case, you can define <code>session.timeout.ms</code> and <code>heartbeat.interval.ms</code> in such a manner that the broker waits for at least 3 heartbeat intervals before marking a consumer inactive. This ensures temporary network fluctuations (if any) are handled gracefully and do not trigger any unnecessary rebalancing.</p>
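<p>Putting the five settings together, here is a sketch of a consumer configuration in properties form. The values are illustrative starting points, not recommendations - tune them to your workload:</p>
<pre><code class="lang-properties"># Illustrative consumer configuration - example values only

# Max time to process one polled batch before the consumer is evicted
max.poll.interval.ms=300000

# Upper bound on records returned by a single poll
max.poll.records=500

# Ask the broker to wait for at least this much data per fetch...
fetch.min.bytes=1

# ...but never longer than this before returning what is available
fetch.max.wait.ms=500

# "I am alive" ping cadence
heartbeat.interval.ms=15000

# Evict after roughly 3 missed heartbeats (3 x 15 s)
session.timeout.ms=45000
</code></pre>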
<h2 id="heading-bonus-question">Bonus Question:</h2>
<p>Say your consumer is processing messages from 1 to 50. A new member joins your consumer group triggering a rebalance. The broker asks your consumer to stop processing and return the partition. What will happen to the messages you were consuming?</p>
<h2 id="heading-ending">Ending</h2>
<p>Just like Franz Kafka’s <em>Metamorphosis</em>, your Kafka consumer also has the potential to transform into a bug if you have not configured it properly. The only way to become better is to make mistakes and learn from them. I hope you enjoyed reading this blog. Namaste!</p>
]]></content:encoded></item><item><title><![CDATA[Mastering PSQL: 7 Essential Commands for Database Efficiency]]></title><description><![CDATA[Postgres or PSQL Database is a type of relational database widely used across the industry. Various big tech corporations across the globe use it to store mission-critical data. The language it understands is SQL. People are already familiar with man...]]></description><link>https://blog.lakshyabuilds.com/mastering-psql-7-essential-commands-for-database-efficiency</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/mastering-psql-7-essential-commands-for-database-efficiency</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[psql]]></category><category><![CDATA[Relational Database]]></category><category><![CDATA[Databases]]></category><category><![CDATA[SQL]]></category><category><![CDATA[tips]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Mon, 22 Dec 2025 19:22:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766431222671/f8064e86-5d10-4c86-9158-c4f59d052c7e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Postgres or PSQL Database is a type of relational database widely used across the industry. Various big tech corporations across the globe use it to store mission-critical data. The language it understands is SQL. People are already familiar with many DML and DDL commands for interacting with the database. However, there are some important commands that are often overlooked by beginners. Knowing them can make you confident while working with a PSQL database.</p>
<h2 id="heading-explain-analyze">Explain Analyze</h2>
<p>This command is the database world’s equivalent of “think before you speak”. We often struggle with slow-running queries in our database. To understand why they are slow, you need to understand how the database engine actually executes your query. In brief, it creates a plan to execute your query. This plan includes information about how many rows the planner thinks it will have to scan and the indexes (if any) it will be using. Note that while plain <code>EXPLAIN</code> only shows the estimated plan, <code>EXPLAIN ANALYZE</code> actually executes the query, so be careful when running it on data-modifying statements.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">EXPLAIN</span> <span class="hljs-keyword">ANALYZE</span> 
<span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> <span class="hljs-built_in">table_name</span> 
<span class="hljs-keyword">WHERE</span> col_name = <span class="hljs-string">'value'</span>;
</code></pre>
<pre><code class="lang-plaintext">QUERY PLAN
---------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..11614.43 rows=1 width=244) (actual time=0.345..185.120 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   -&gt;  Parallel Seq Scan on users  (cost=0.00..10614.33 rows=1 width=244) (actual time=120.450..180.200 rows=1 loops=3)
         Filter: ((email)::text = 'john.doe@example.com'::text)
         Rows Removed by Filter: 333333
 Planning Time: 0.120 ms
 Execution Time: 185.250 ms
</code></pre>
<p>As you can see in the above output, executing this select query results in a sequential scan over the whole table. You can compare the planning and execution times to identify scope for improvement.</p>
<h2 id="heading-index-usage">Index Usage</h2>
<p>Creating indexes is crucial for ensuring fast read performance. However, it's also important to regularly review the indexes you've created and remove any that aren't being used. To find indexes that are rarely used in your table, use the following command:</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">SELECT</span>
    schemaname,
    relname <span class="hljs-keyword">AS</span> <span class="hljs-built_in">table_name</span>,
    indexrelname <span class="hljs-keyword">AS</span> index_name,
    idx_scan <span class="hljs-keyword">AS</span> number_of_scans,
    idx_tup_read <span class="hljs-keyword">AS</span> tuples_read,
    idx_tup_fetch <span class="hljs-keyword">AS</span> tuples_fetched
<span class="hljs-keyword">FROM</span>
    pg_stat_user_indexes
<span class="hljs-keyword">WHERE</span>
     relname = <span class="hljs-string">'your_table_name'</span>
<span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span>
    idx_scan <span class="hljs-keyword">DESC</span>;
</code></pre>
<p>The number_of_scans column in the above result shows how many times the query planner used your index to execute queries. Indexes with a zero or very low scan count contribute negligibly to read performance while still adding overhead to every write. Dropping them can be a good option.</p>
<h2 id="heading-data-distribution-inside-a-table">Data Distribution Inside a Table</h2>
<p>Ever faced a situation where your created index wasn't being used? This next command is all you need. An important factor often overlooked when creating a new index is the data distribution within a table. For columns with very few distinct values, adding an index doesn't improve performance in large tables. Use the following command to visualize data distribution inside your table.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">SELECT</span>
    attname <span class="hljs-keyword">AS</span> <span class="hljs-built_in">column_name</span>,
    n_distinct,
    most_common_vals <span class="hljs-keyword">AS</span> top_values,
    most_common_freqs <span class="hljs-keyword">AS</span> frequencies
<span class="hljs-keyword">FROM</span>
    pg_stats
<span class="hljs-keyword">WHERE</span>
    tablename = <span class="hljs-string">'your_table_name'</span>;
</code></pre>
<p>The query planner uses the result of the above query to estimate how many rows it will have to fetch to execute the query. At a high level, it multiplies the frequency of the value by the total number of rows in the table. This estimated row count then decides whether an index scan or a sequential scan is performed.</p>
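<p>Here is a back-of-the-envelope sketch of that estimate (the row count and frequencies below are made-up values, not from a real table):</p>
<pre><code class="lang-python"># Toy version of the planner's selectivity estimate. Real frequencies
# come from pg_stats.most_common_freqs; these numbers are made up.
total_rows = 1_000_000

def estimated_rows(frequency: float) -> int:
    """Rows the planner expects to match: value frequency x row count."""
    return int(frequency * total_rows)

# 95% of rows share one value -> the index saves almost nothing, so the
# planner will usually pick a sequential scan.
print(estimated_rows(0.95))    # 950000
# A rare value -> only ~100 rows expected, an index scan is a clear win.
print(estimated_rows(0.0001))  # 100
</code></pre>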
<h2 id="heading-vacuum-for-cleaning-up-dead-rows">Vacuum - For Cleaning Up Dead Rows</h2>
<p>This is one of the most useful commands. Every time you update or delete a row, Postgres does not immediately remove the old row version from physical storage. It just marks it as dead. This makes update/delete operations incredibly fast. The dead tuples are then removed periodically by a separate process known as vacuum. You can either run it manually or rely on your database running it at automatic intervals. Its regular execution ensures the space held by dead tuples becomes reusable. Running this command manually is especially important after performing bulk update/delete operations.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">VACUUM</span> your_table_name;
</code></pre>
<p>Note: while plain VACUUM does not take locks that block reads or writes on the table, it can noticeably increase CPU and I/O usage. Therefore, try running it when the DB is not at its peak usage.</p>
<h2 id="heading-monitoring-connections">Monitoring Connections</h2>
<p>To run any query, your application must connect to the database. Depending on your database settings, there is a maximum number of connections you can have at one time. Usually, your application will close the connection after running the query. However, sometimes due to a problem in your code, connections may stay open, causing connection leaks. It's crucial to find and fix these issues before they overwhelm your database. Use the following command to check the connection status of your database at any time.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">SELECT</span>
    pid,
    usename <span class="hljs-keyword">AS</span> username,
    datname <span class="hljs-keyword">AS</span> database_name,
    client_addr <span class="hljs-keyword">AS</span> client_ip,
    application_name,
    state,
    now() - query_start <span class="hljs-keyword">AS</span> duration,
    query
<span class="hljs-keyword">FROM</span>
    pg_stat_activity
<span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span>
    duration <span class="hljs-keyword">DESC</span>;
</code></pre>
<p>Where multiple applications share the same monolithic database, you can use the above query to identify which application is holding the most active connections.</p>
<h2 id="heading-long-executing-queries">Long Executing Queries</h2>
<p>Your DB is choking. CPU utilization is very high. Freeable memory is depleting very fast. You don’t know what to do. Relax, run this command.</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">SELECT</span>
    pid,
    usename <span class="hljs-keyword">AS</span> <span class="hljs-keyword">user</span>,
    pg_blocking_pids(pid) <span class="hljs-keyword">as</span> blocked_by,
    now() - query_start <span class="hljs-keyword">AS</span> duration,
    state,
    query
<span class="hljs-keyword">FROM</span>
    pg_stat_activity
<span class="hljs-keyword">WHERE</span>
    state = <span class="hljs-string">'active'</span>
    <span class="hljs-keyword">AND</span> (now() - query_start) &gt; <span class="hljs-type">interval</span> <span class="hljs-string">'1 minute'</span>
<span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span>
    duration <span class="hljs-keyword">DESC</span>;
</code></pre>
<p>It will display all the queries that have been running for over a minute. Killing such queries directly can give breathing space to your choked database.</p>
<h2 id="heading-kill-process">Kill Process</h2>
<p>I may be exaggerating a bit, but this final command is the Brahmastra. In production, if you ever see a long-running query hurting the database, the first thing you should do is kill that process manually. You can identify the PID of the long-running process using the previous command. Once you have it, run the commands below to actually kill it.</p>
<pre><code class="lang-pgsql"><span class="hljs-comment">-- Soft Kill (Cancel): sends a polite signal to stop the query but keep the connection active.</span>
<span class="hljs-keyword">SELECT</span> pg_cancel_backend(process_id); 

<span class="hljs-comment">-- Hard Kill (Terminate): closes the entire connection (use if Cancel doesn't work).</span>
<span class="hljs-keyword">SELECT</span> pg_terminate_backend(process_id);
</code></pre>
<h2 id="heading-the-end">The End</h2>
<p>That's it for now. PSQL is a great database with many built-in features. You can really dive deep into understanding the query planner and how it estimates the cost of running a query. This knowledge helps you become more confident in debugging production issues. I'm also learning, so if I missed anything, feel free to leave a comment below. Thanks for reading. Namaste!</p>
]]></content:encoded></item><item><title><![CDATA[How I Reduced Infra Costs by 50% Using GitHub Actions]]></title><description><![CDATA[Building a full stack web app is fun. Deploying it? Not so much. And what good are those projects that never make it out of localhost. While frontend deployment for small side projects is pretty much sorted, thanks to generous free tiers of vercel, m...]]></description><link>https://blog.lakshyabuilds.com/how-i-reduced-infra-costs-by-50-using-github-actions</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/how-i-reduced-infra-costs-by-50-using-github-actions</guid><category><![CDATA[serverless]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[Azure Functions]]></category><category><![CDATA[serverless computing]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 21 Dec 2025 21:45:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766353223391/5290ae8d-6426-4f8a-91ee-05066d699693.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building a full stack web app is fun. Deploying it? Not so much. And what good are those projects that never make it out of localhost. While frontend deployment for small side projects is pretty much sorted, thanks to generous free tiers of vercel, managing backend deployments are still complicated. Providers like render, and railway have small free tier limits or cold start issues affecting latencies. Deployment costs on cloud providers like amazon’s AWS or Microsoft’s Azure starts from around $10 monthly. Github Actions - a serverless solution - can bring those costs down to almost zero if it fits in your use case. This blog covers how I refactored my project from a web app to a serverless function, saving massively on deployment costs.</p>
<h2 id="heading-before-we-begin">Before We Begin</h2>
<p>I recently built a personal journaling self-help tool - <a target="_blank" href="https://jurnai.site">JurnAI</a>. Its core functionality is a journal fetching and processing pipeline that runs every morning at a fixed time. It processes journals from the previous night and triggers mails to users once the content is processed. Besides this, the web server also supported the new user signup flow. The backend consisted of a Python app built on <a target="_blank" href="https://fastapi.tiangolo.com/">FastAPI</a>. The frontend app was connected to this server for handling user signups. The daily journal processing also ran on this server, triggered by a cron job every morning. There were some additional cron-based flows running at regular intervals, e.g. deactivating inactive users and sending reminder mails to churning users. In short, apart from signups, no flow required the server to be active all the time. However, when deploying this web app on the cloud, we need to account for all the flows when deciding how much compute is needed. For example, we need a server that stays active 24x7 for user signups, but it also needs to be powerful enough (memory and CPU) to handle the daily journal processing, which involves concurrent processing for all the active users. Here is a snapshot of the old backend structure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766345752762/c99f99f5-d4e0-472e-b43e-84958cc16afb.png" alt class="image--center mx-auto" /></p>
<p>As a result, my estimated cost for running this server came to around $12 per month on Azure. Thanks to my leftover student credits, I did not care much. However, once the credits depleted and my project went down, I came to realize the cost of my running server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766346256580/e0193190-5cc4-46e0-bfa2-47cbc31bb716.png" alt class="image--center mx-auto" /></p>
<p>I wanted to keep running this project [for myself]. So, I decided to explore options to keep it running at minimal cost. Creating new accounts for free credits is not worth the effort, and unethical in some sense. As I dug deeper, I came to know about serverless deployments. So the problem is: I have built a web app with non-uniform incoming traffic and resource usage. My signup flow requires the server to be active all the time, while my daily processing pipeline needs a slightly more powerful machine for a fixed duration to handle the computation.</p>
<h2 id="heading-serverless-stops-you-from-becoming-money-less">Serverless - Stops You From Becoming Money Less</h2>
<p>What if I could deploy my web app in two places? The first server would solely be responsible for handling signups. It would stay active 24x7 but could be a very small machine, since signups don’t consume much compute for me. The second server would handle all the data processing pipelines, on a slightly more powerful machine to account for the processing needed. Additionally, since I don’t need this machine to be active all the time, I can just stop the instance when not in use. As instances are charged per hour, the cost savings can be huge. For example, running a t3.micro instance on AWS costs $0.0112 per hour. Running it 24 hours a day costs ~<strong>$8.18</strong> monthly, while running the same instance 1 hour daily costs ~<strong>$0.34</strong> per month - a massive <strong>96%</strong> saving in monthly cost. Now obviously, it is not practically feasible to start and stop instances manually every day. This is where I decided to use GitHub Actions - an automation offering that can run scheduled workflows.</p>
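<p>The savings math above can be reproduced in a few lines (using the quoted t3.micro rate and AWS’s convention of roughly 730 billable hours per month):</p>
<pre><code class="lang-python"># Reproducing the savings math: t3.micro at $0.0112/hr, ~730 hours/month
# (so running 1 hour/day is roughly 730/24 hours per month).
hourly_rate = 0.0112
hours_per_month = 730

always_on = hourly_rate * hours_per_month              # ~$8.18
one_hour_daily = hourly_rate * (hours_per_month / 24)  # ~$0.34
savings = 1 - one_hour_daily / always_on               # ~96%

print(f"${always_on:.2f} vs ${one_hour_daily:.2f} -> {savings:.0%} saved")
</code></pre>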
<p><em>Note: While there are ways to automate the instance to start and stop, it comes with added costs and various other nuances which are not being discussed in this blog.</em></p>
<h2 id="heading-moving-to-github-actions">Moving to Github Actions</h2>
<p>I cannot run my code on a serverless infra the same way it was running on my server. In simple words, we can’t deploy web apps directly. The whole concept of serverless revolves around functions. Think of your code in terms of functions. You deploy all the code required to run those functions and then create a script [workflow] to trigger them. So technically speaking, I don’t need to run my FastAPI app. Instead, I can extract the logic into wrapper functions and call those functions directly. Confusing? Let’s understand with an example.</p>
<p>Say you have a web app. In that you have an endpoint responsible for deactivating inactive users. The endpoint calls an internal method that handles the business logic. In a traditional setting, to trigger this flow, you need to first run the python app, then hit the endpoint.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask, jsonify
<span class="hljs-keyword">from</span> shared.database <span class="hljs-keyword">import</span> init_db
<span class="hljs-keyword">from</span> shared.business_logic <span class="hljs-keyword">import</span> deactivate_inactive_users

app = Flask(__name__)

<span class="hljs-comment"># 1. Initialize DB when the web server starts</span>
init_db()

<span class="hljs-meta">@app.route('/admin/deactivate-users', methods=['POST'])</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">api_deactivate_users</span>():</span>
    <span class="hljs-string">"""
    Triggered by an Admin clicking a button or an external API call.
    """</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-comment"># Call the shared business logic</span>
        result = deactivate_inactive_users()
        <span class="hljs-keyword">return</span> jsonify(result), <span class="hljs-number">200</span>
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">return</span> jsonify({<span class="hljs-string">"error"</span>: str(e)}), <span class="hljs-number">500</span>

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    app.run(port=<span class="hljs-number">5000</span>)
</code></pre>
<p>In a serverless context, you can think of this endpoint as a function. To get the same functionality, you call the <code>deactivate_inactive_users()</code> logic directly from a standalone script. Both the endpoint and your script still call the same business logic; the only thing that has changed is how you invoke it.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> sys
<span class="hljs-keyword">import</span> os

<span class="hljs-comment"># Add the parent directory to sys.path so we can import 'shared'</span>
sys.path.append(os.path.join(os.path.dirname(__file__), <span class="hljs-string">'..'</span>))

<span class="hljs-keyword">from</span> shared.database <span class="hljs-keyword">import</span> init_db
<span class="hljs-keyword">from</span> shared.business_logic <span class="hljs-keyword">import</span> deactivate_inactive_users

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    print(<span class="hljs-string">"🚀 Starting Batch Job: User Deactivation"</span>)

    <span class="hljs-comment"># STEP 1: Manual Initialization (The part the Web App usually handles)</span>
    <span class="hljs-comment"># We must explicitly load env vars and connect to DB here.</span>
    <span class="hljs-keyword">try</span>:
        init_db()
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"❌ Critical Error: Could not connect to DB. <span class="hljs-subst">{e}</span>"</span>)
        sys.exit(<span class="hljs-number">1</span>)

    <span class="hljs-comment"># STEP 2: Execute the Business Logic</span>
    <span class="hljs-keyword">try</span>:
        result = deactivate_inactive_users(days_inactive=<span class="hljs-number">30</span>)
        print(<span class="hljs-string">f"🏁 Job Finished Successfully: <span class="hljs-subst">{result}</span>"</span>)
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"❌ Error during execution: <span class="hljs-subst">{e}</span>"</span>)
        sys.exit(<span class="hljs-number">1</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    main()
</code></pre>
<p>Since you are not starting the web app, you need to take care of the setup the server normally handles - initializing the database and loggers - separately. Again, you can just bundle this logic in a single script and call it before executing your main function. Not a big issue.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># scheduled trigger (the cron time below is illustrative; GitHub cron runs in UTC)</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">schedule:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">cron:</span> <span class="hljs-string">"0 8 * * *"</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">cleanup:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">Code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">Deactivation</span> <span class="hljs-string">Script</span>
        <span class="hljs-attr">env:</span>
          <span class="hljs-attr">DATABASE_URL:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DATABASE_URL</span> <span class="hljs-string">}}</span>
        <span class="hljs-comment"># Simply run the script, no need to start the flask server</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">python</span> <span class="hljs-string">scripts/run_cleanup.py</span>
</code></pre>
<p>After making the move to serverless, I refactored my web app. While my signup flow is still called from the web server context, all my processing pipelines are encompassed in a single script. This script is executed via the configured GitHub Actions workflows.</p>
<pre><code class="lang-plaintext">/my-project
├── app.py                  # The Web Server (FastAPI)
├── scripts/
│   └── run_cleanup.py      # The Standalone Script (For GitHub Actions)
└── shared/
    ├── __init__.py
    ├── database.py         # DB Connection logic
    └── business_logic.py   # The actual "Deactivate Users" logic
</code></pre>
<h2 id="heading-how-much-scale-can-it-handle">How much scale can it handle?</h2>
<p>Your server used to run 24x7, giving you complete visibility into resource utilization. How much will it cost to run now? What if serverless is more expensive in the long run? If you are thinking along these lines, then you are already my friend. Let’s do a cost analysis. Serverless infrastructure charges you for the total minutes of compute you use. In my current setup, the daily content processing pipeline runs for about 20 seconds on average to process about 100 concurrent users every day. So handling around 10k users would require around 2k seconds daily, or ~1000 minutes monthly. On a standard GitHub-hosted runner (ubuntu, 2-core linux machine), that costs around <strong>$6</strong>. If you have a public repository, GitHub Actions is free of cost, with the only limitation that you can’t run jobs longer than 6 hours. So the entire cost comes down to literally <strong>zero</strong> for your side projects. In case you don’t want to make your code public, private repositories also get around 2k minutes of free usage monthly. Compare that to <strong>$12</strong> of fixed monthly server costs.</p>
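<p>The runner-minutes estimate above checks out with quick arithmetic:</p>
<pre><code class="lang-python"># Checking the runner-minutes estimate: ~20 s of processing per 100
# concurrent users, scaled to 10k users, over a 30-day month.
seconds_per_100_users = 20
users = 10_000

daily_seconds = seconds_per_100_users * (users / 100)  # 2000 s/day
monthly_minutes = daily_seconds * 30 / 60              # minutes per month

print(monthly_minutes)  # 1000.0
</code></pre>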
<p><em>Note: With the above usage, you may also fall under free tier of AWS Lambda as well.</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766351472984/376ec85a-8cdb-4749-aeef-f175e2a61cca.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-but-what-about-signup-flow">But What About Signup Flow?</h2>
<p>GitHub Actions does not support invocation via HTTP triggers. However, other offerings like AWS Lambda and Microsoft Azure Functions support them. This is because Actions is meant to run a project’s automation tasks like builds and test suites, and is not built for true serverless computing. Lambda and Functions, however, are truly serverless: you can invoke these serverless functions by calling a specific endpoint, similar to how you call an API on your hosted server. But should you? Maybe not. Serverless functions have a cold-start issue - the time it takes to warm up before actually executing your function. This latency is manageable for background tasks. However, for client interactions like signups, using this does not make much sense to me. So, you will still need a small lightweight server to handle client-facing flows. But since your heavy processing is now on serverless infra, you can consider opting for a lightweight machine. For example, Railway’s free tier of $1 in monthly usage may be enough, or you can take their hobby plan ($5 monthly) and share that limit between multiple projects.</p>
<h2 id="heading-should-you-move-your-infra-to-serverless">Should You Move Your Infra To Serverless?</h2>
<p>Serverless is not some discount coupon that you can apply to your backend infrastructure to cut deployment costs. It is a smart, cost-efficient use of infrastructure for use cases with fixed usage patterns tied to specific time intervals every day. If it does not make sense for your use case - don’t.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766351961544/c9741f43-76a7-40d2-875f-b8551f194ffa.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-problems-i-faced-while-using-github-actions">Problems I faced while using Github Actions</h2>
<p>It was not a smooth ride. There are some scenarios that you should be aware of before using github actions for your workflows.</p>
<h3 id="heading-inconsistent-timings">Inconsistent Timings</h3>
<p>My workflow was supposed to run at sharp 8 AM every morning. What I have observed on standard GitHub runners is that there is always some delay in running my workflows - sometimes around 10 minutes, other days up to 60 minutes. This can happen during high-traffic periods when multiple workflows compete for execution slots. But this delay is consistent for some reason. For example, one of my workflows runs with a delay of 10 minutes consistently, so you can tweak your scheduled timings accordingly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766352151543/941ac8a7-a17e-4fab-8d51-0c681f22e137.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-cold-start-latency">Cold Start Latency</h3>
<p>Cold starts are a problem with every serverless solution. For Lambda, it is a nominal 1-2 seconds of delay, whereas for GitHub Actions on standard hosted runners it can be as high as 20-30 seconds to set up the initial containers. That said, I did not face any major issue with GitHub Actions in this regard.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766352317320/b057f63f-5428-4e28-87eb-6e6921b83883.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-final-verdict">Final Verdict</h2>
<p>As an engineer, never stop exploring. A few decades ago, people used to run and maintain their own physical servers. Then came cloud providers, simplifying deployments. With the rise of companies like <a target="_blank" href="https://vercel.com">vercel</a>, <a target="_blank" href="https://railway.com/">railway</a> and <a target="_blank" href="https://render.com/">render</a>, deployments went literally one code push away. Serverless compute will contribute in its own way by powering agentic asynchronous workflows. Optimize for learning, not for costs, if you are building projects. However, there’s no doubt that these serverless solutions significantly bring down overall deployment costs for low to moderate traffic projects, if used correctly. That’s it for this blog. I hope you enjoyed reading this deep dive into my infra-adventures. Do you want me to cover a step-by-step tutorial on hosting a project on GitHub Actions? Let me know in the comments below. Until next time! Namaste.</p>
]]></content:encoded></item><item><title><![CDATA[Read This Before Buying a Domain from Cloudflare]]></title><description><![CDATA[If you don’t have a personal domain in 2025, you are NGMI. When you deploy your portfolio / personal project on a personalized domain, it sends a strong public signal - that this person is not kidding. Due to the increased demand, there are a lot of ...]]></description><link>https://blog.lakshyabuilds.com/read-this-before-buying-a-domain-from-cloudflare</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/read-this-before-buying-a-domain-from-cloudflare</guid><category><![CDATA[domain]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[SEO]]></category><category><![CDATA[dns]]></category><category><![CDATA[cheapest domain registrar]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 21 Dec 2025 05:30:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766268680873/a7a670d4-c164-4f5c-9073-876debb31062.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you don’t have a <a target="_blank" href="https://lakshyabuilds.com">personal</a> domain in 2025, you are NGMI. When you deploy your portfolio / personal <a target="_blank" href="https://jurnai.site">project</a> on a personalized domain, it sends a strong public signal - that this person is not kidding. Due to the increased demand, there are a lot of domain name registrars in the market today. One of them is cloudflare - the company that took down half of global internet on 18th November 2025. Why would a security / request proxying service provider enter the business of selling domains? Let’s find out.</p>
<h2 id="heading-benefits-of-buying-from-cloudflare">Benefits of Buying from Cloudflare</h2>
<p>There are a couple of benefits to buying a domain directly from Cloudflare.</p>
<h3 id="heading-zero-markup">Zero markup</h3>
<p>They don’t charge any premium on domains. Cloudflare charges exactly what it has to pay to the registry - without adding its own margin. They don’t intend to profit directly from selling domains. The price difference is significant if you compare annual renewals: many other registrars offer a discounted first-year plan but charge a renewal premium from year two onwards. So, if you are planning to keep the domain for a long time, buying it from Cloudflare may be worth it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766264820542/cf7418a3-6d9c-4187-923e-767197fef3d9.png" alt="Price Comparison - Cloudflare Renewal and GoDaddy Renewal for same domain" class="image--center mx-auto" /></p>
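<p>To make the renewal math concrete, here is a quick back-of-the-envelope comparison. All prices below are hypothetical placeholders - check current rates for your TLD before buying:</p>

```python
# Hypothetical prices in USD per year - substitute real quotes for your TLD.
wholesale = 10.44          # flat registry price, what Cloudflare charges every year
promo_first_year = 1.99    # typical discounted first year at another registrar
renewal_premium = 19.99    # that registrar's renewal price from year two onwards

years = 3
cloudflare_total = round(wholesale * years, 2)
other_total = round(promo_first_year + renewal_premium * (years - 1), 2)
print(cloudflare_total, other_total)  # 31.32 41.97
```

<p>With these placeholder numbers, the discounted registrar wins in year one but loses over any multi-year horizon.</p>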
<h3 id="heading-cloudflare-coverage-from-day-one">Cloudflare Coverage from Day One</h3>
<p>Everything hosted on the domain gets Cloudflare’s request reverse proxying [the orange cloud] and edge-level caching from day one. The dashboard for updating DNS records is also very clean and simple to use. I like Cloudflare’s web traffic analysis feature as well: you can see the geographical distribution of all incoming traffic without adding a single line of code to your website. These features have limited availability on Cloudflare’s free tier, but it is generous enough to get you started.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766265043756/1e804037-6365-404c-bc7b-396f661f6322.png" alt="A screengrab of the cloudflare traffic analysis dashboard" class="image--center mx-auto" /></p>
<h2 id="heading-disadvantages-of-buying-from-cloudflare">Disadvantages of Buying from Cloudflare</h2>
<p>Before making the final purchase, here are some important caveats you should know about buying a domain from Cloudflare.</p>
<h3 id="heading-nameservers-lock-in">Nameservers Lock-In</h3>
<p>When you purchase a domain, you get something called nameservers. They are used to manage your domain’s DNS records. Normally, you have the option to transfer your domain’s management to any other service provider by pointing to their nameservers, letting you manage DNS records with your preferred provider. Unfortunately, this is not supported for domains bought from Cloudflare. They enforce this to keep you locked in to their platform. This may also be one of the reasons they don’t charge a markup during domain registration - they hope to profit from you via their other offerings. You can, however, transfer your domain to another registrar, but that requires buying an additional one-year renewal, defeating the whole edge of Cloudflare’s “no-margin” renewals.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766266766981/28845f23-d165-456e-9fd4-89923524483f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-lack-of-transparency-in-upfront-payment">Lack of Transparency in Upfront Payment</h3>
<p>This might be a personal opinion, but I felt the entire process of buying the domain was not fully transparent. From the dashboard up to the final payment gateway page, it showed me a price exclusive of taxes. After executing the transaction, my bank account got charged with an added tax. This breakup was only visible in the final tax invoice received after payment. Ideally, it should be displayed upfront so buyers can make a better-informed choice. In my case the tax was around 18%, which hits hard if you are on a tight budget.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766266984972/c073fc4d-ea82-48fa-981b-45e21a2319f6.png" alt="(no visibility of tax to be charged in the total amount shown on dashboard)" class="image--center mx-auto" /></p>
<h3 id="heading-no-local-currency-support">No Local Currency Support</h3>
<p>On the dashboard, all prices are shown in US dollars. If you are buying from India with a debit card, your card gets charged in INR. I could not figure out which exchange rate was applied - even after looking at my end-of-month account statements, the exact rate used to charge my card was unclear. This was also the first time I had bought something in dollars with my Indian debit card, so I may be lacking proper knowledge of how to access this information. In comparison, when buying a domain from a provider with a base in India [GoDaddy], the payment process is seamless. So this can be a bit of a situation to deal with for some users.</p>
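<p>If you want to reconstruct the rate yourself, divide the INR amount your bank debited by the USD price shown at checkout. The figures below are hypothetical placeholders:</p>

```python
# Hypothetical figures - substitute the USD price from the Cloudflare checkout
# page and the INR amount actually debited from your bank statement.
usd_billed = 10.44
inr_charged = 925.00

effective_rate = inr_charged / usd_billed
print(f"Effective USD->INR rate: {effective_rate:.2f}")
# Any gap between this and the day's mid-market rate is the card
# network's forex markup plus your bank's fees (and tax, if bundled).
```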
<h2 id="heading-how-to-buy-a-domain-from-cloudflare">How to Buy a Domain from Cloudflare</h2>
<p>Step 1: Log in to the Cloudflare dashboard</p>
<p>Step 2: Go to the Domains page using the side menu</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766267558043/eb11495d-232d-4b22-be28-e6421e069e24.png" alt="Cloudflare Domain Dashboard" class="image--center mx-auto" /></p>
<p>Step 3: Click on “Buy a domain” button and search for your desired domain</p>
<p>Step 4: After finalizing the domain, proceed to checkout. You need to enter some personal information to complete the purchase</p>
<p>Step 5: Ka-ching! The domain is now yours.</p>
<p>[Tip: keep auto-renewal off to prevent unwanted charges for forgotten domains.]</p>
<h2 id="heading-final-verdict">Final Verdict</h2>
<p>Should you buy a domain from Cloudflare? Yes, go with it if you are looking for multi-year ownership. In case you are looking to keep a domain for less than a year, don’t forget to search for deals at other registrars like GoDaddy and Namecheap - you may find a better price. This was me sharing my experience of buying a domain from Cloudflare. I hope it helps you make an informed decision. In case you have any insights to share regarding the currency exchange charge - do let me know. Thanks for reading!</p>
]]></content:encoded></item><item><title><![CDATA[How I Built a SaaS in a Weekend with AI as my Co-founder]]></title><description><![CDATA[In this age of digital connectivity, one can be digitally connected with people around the globe, yet feel all alone. Every night when the room went silent, the voices in my head became the loudest. I wanted to feel heard. Writing my thoughts felt li...]]></description><link>https://blog.lakshyabuilds.com/how-i-built-a-saas-in-a-weekend-with-ai-as-my-co-founder</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/how-i-built-a-saas-in-a-weekend-with-ai-as-my-co-founder</guid><category><![CDATA[AI]]></category><category><![CDATA[vibe coding]]></category><category><![CDATA[SaaS]]></category><category><![CDATA[Build In Public]]></category><category><![CDATA[AI coding]]></category><category><![CDATA[AISaaS]]></category><category><![CDATA[llm]]></category><category><![CDATA[gemini]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sat, 23 Aug 2025 20:42:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755978448139/27c35ce4-3ed9-4bac-837a-4f7c92c783a3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this age of digital connectivity, one can be digitally connected with people around the globe, yet feel all alone. Every night when the room went silent, the voices in my head became the loudest. I wanted to feel heard. Writing my thoughts felt like dropping bottles with a message inside in vast oceans. I keep sending them, but no one responds. This is the story of how I built JurnAI, a platform designed to help people feel heard using AI, driven by my passion for building products.</p>
<p>TL;DR This blog covers how I built a SaaS product, from ideation to production, explaining how I incorporated AI in my workflows and the honest limitations I encountered along the way.</p>
<h1 id="heading-understanding-the-why-of-building">Understanding the Why of Building</h1>
<p>At times, I felt lonely sharing the problems in my personal life. One may suggest sharing them with friends or family, but what if I tell you they all have their own problems to deal with? In such a situation, daily journaling came like a burst of rain on the deserted land of my thoughts. Suddenly, I started feeling a lot lighter. The voices found an opening and slowly started to vacate my mind. But not for long. At some point, when shit hit the ceiling, daily journaling started feeling pointless. I wanted to feel heard. Writing my thoughts down in a diary felt like dropping message-in-a-bottle notes into a vast ocean: I kept sending them, but no one ever responded. This is when it hit me: what if I could build something to solve this problem? A friend who would listen to my thoughts and offer me blunt, honest advice every morning. <a target="_blank" href="https://jurnai.site">JurnAI</a> was born from the idea of helping others like me feel heard, supported, and loved. Ironically, the solution to my digital loneliness was a digital assistant. A few years ago, building something like this would have been a dream, requiring a great deal of effort. Thanks to AI, I developed the first prototype over a weekend.</p>
<p>In the following sections, I'll break down the exact 8-step process I followed, the AI tools I used at each stage, and the honest limitations I encountered along the way.</p>
<h1 id="heading-step-1-idea-refinement">Step 1: Idea Refinement</h1>
<p>I started with an end goal - to make people’s mornings happier and make them feel loved, using their journal entries from the night before. I had little clarity about how I was going to do it. I’m a software engineer by profession, so I was inclined towards a web-app-based solution. To make sense of my idea, I turned to Perplexity, using its deep research mode to learn more and explore possible solutions. To my surprise, it did a good job of articulating the steps I needed to follow to achieve my goal. Perplexity researched online journaling, how people use it in their day-to-day lives, and what makes it hard for people to keep doing it. It then suggested how I could structure my web app, down to the tech stack I could use. In corporate terms, this is equivalent to a PRD, or Product Requirements Document: it covers the requirements, broken down into smaller tasks describing what is needed from the end product - essentially the functional requirements.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755979050964/4c28615a-40d2-4255-9227-341bcc0f44ad.png" alt="Screenshot of a webpage titled &quot;Brainstorming Session with Perplexity&quot; showing a recommended free tool stack for developers. It includes services like Vercel, Railway, Neon PostgreSQL, Supabase Auth, and Mailgun, with details on free tier benefits and integration. The background is a gradient of red to yellow." class="image--center mx-auto" /></p>
<p><strong>Pros</strong>: Using deep research mode saved the time it would have taken to explore existing solutions, identify the gaps, and research what exactly needed to be built.<br /><strong>Cons</strong>: The suggested solutions may not match what you need, so expect a bit of to and fro.</p>
<p>Finally, since I would be working with AI, I asked Perplexity to document the requirements in a markdown file, which is very helpful when working with AI agents to build the actual product.</p>
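<p>My actual requirements file isn’t reproduced here, but the kind of skeleton such a deep-research session produces looks roughly like this (all section names and items are illustrative):</p>

```markdown
# Requirements - JurnAI (illustrative skeleton)

## Problem Statement
People journal at night but never feel heard.

## Functional Requirements
- FR1: User connects an existing journal source (e.g., Notion)
- FR2: System reads the previous night's entry once per day
- FR3: System emails a personalised, supportive morning message

## Non-Functional Requirements
- Entries are read on demand, not stored long-term

## Suggested Stack
- Frontend: Next.js / Backend: FastAPI / DB: PostgreSQL
```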
<h1 id="heading-step-2-tech-exploration">Step 2: Tech Exploration</h1>
<p>To build the project, I used the Gemini CLI tool. It works within your terminal with the entire context of your current project directory. I prefer it over Cursor because Gemini’s free tier limits are very generous and the response quality is decent, often on par with other leading LLMs. The goal was to first identify the various approaches I could take to build the project. In software terms, this is the “exploration phase”, where an engineer explores multiple ways to build a solution that meets the requirements. While there are multiple ways to reach the same solution, you should select the one that aligns with your skill set. To begin, I loaded the earlier downloaded <code>Requirements.MD</code> file into my Gemini agent’s memory and started giving it instructions to plan a potential solution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755979467467/cd3ce942-0cf5-4131-a398-5591760daf94.png" alt="Screenshot of the Gemini CLI interface on a colorful gradient background. The header reads &quot;Gemini CLI - LLM in your Terminal.&quot; The interface provides tips for getting started and a prompt for typing messages or file paths. An update notification is displayed." class="image--center mx-auto" /></p>
<p>Using AI without defining appropriate constraints is like throwing a dart blindfolded and hoping it hits the dartboard. While you may not know everything from the beginning, telling the AI as much as you do know still helps. For example, when discussing the journaling tool, I specified my preferred tech stack for frontend and backend development, along with my preference for a relational database over a non-relational one. This makes the output more deterministic.</p>
<p>The entire conversation felt like a discussion with a fellow software engineer: what we would need to make the project deployable, which third-party dependencies to pull in, and so on. I deliberately spent longer discussing approaches instead of jumping straight into coding, to prevent issues at a later stage.</p>
<p>Another critical aspect is not trusting AI blindly. Whenever it suggested something I was not aware of, I first explored it myself before giving it permission to go ahead. For example, for building this tool, I could follow two approaches:</p>
<ol>
<li><p>Build a self-hosted journaling platform and then generate insights on top of journal entries created in my app, requiring the overhead of maintaining and storing user entries.</p>
</li>
<li><p>Build a public doc integration [like Notion], allowing users to connect their existing Notion account and securely read their entries.</p>
</li>
</ol>
<p>The second approach reduces the overhead of maintaining journal entries on my end. When Gemini suggested using a Notion integration, I was unsure it even existed, so I did a small POC to ensure the integration served my purpose.</p>
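<p>A POC along those lines can stay small. The sketch below assumes the Notion API’s block JSON shape; the actual fetch (shown only in comments) would need the <code>notion-client</code> package and an integration token, so just the pure text-extraction step runs here.</p>

```python
def extract_plain_text(blocks):
    """Flatten Notion paragraph blocks into plain text for an AI prompt."""
    lines = []
    for block in blocks:
        if block.get("type") != "paragraph":
            continue  # skip headings, dividers, etc. in this minimal sketch
        rich_text = block["paragraph"].get("rich_text", [])
        lines.append("".join(rt.get("plain_text", "") for rt in rich_text))
    return "\n".join(lines)

# With a real integration, fetching the blocks looks roughly like:
#   from notion_client import Client
#   notion = Client(auth=NOTION_TOKEN)   # token from your Notion integration
#   blocks = notion.blocks.children.list(page_id)["results"]

sample = [
    {"type": "paragraph",
     "paragraph": {"rich_text": [{"plain_text": "Today was rough."}]}},
]
print(extract_plain_text(sample))  # Today was rough.
```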
<p>Once I was satisfied with the suggested approach, I used AI to break the work into subtasks, clearly defining the requirements and the approach to follow for each feature in a markdown file. I split this into two parts - frontend and backend.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755979777811/6e945d1f-b930-4844-bd87-99750dfada5a.png" alt="A screenshot of a project structure for &quot;emotional-support-app.&quot; It includes a &quot;backend&quot; directory with Python files for a FastAPI application and a &quot;frontend&quot; directory with directories and a JSON file for a React application. There is also a README file." class="image--center mx-auto" /></p>
<p>This way, I could spin up two agents, each working on completely different tasks. It lets you pass domain-specific instructions to each agent, which improves the output. Tackling multiple things in the same session is generally not advised because it overloads the agent’s context. Just like humans, they also struggle with multitasking, I guess :)</p>
<h1 id="heading-step-3-building-the-prototype-backend">Step 3: Building the Prototype - Backend</h1>
<p>Since the backend interests me more than the frontend, I started with implementing the backend part of the project. I cloned a template FastAPI project and loaded the <code>Requirements.MD</code> into my Gemini CLI agent. Then, like a senior engineer, I drafted a crisp, clear prompt explaining what needed to be done and how. Doing all of it at once would have been disastrous, so I started with small tasks and gradually built up the entire project.</p>
<p>We started by integrating the Notion SDK and setting up a dummy route to test it. I sipped my coffee as the agent kept working in the background. I am not a fan of dangerous mode and prefer reviewing every code change. Although it takes more time, it lets me supervise the agent’s work in more detail, ensuring it does not go off track. Once the integration was done, we gradually moved on to setting up the database.</p>
<p>Vibe coding is nice. But having prior experience makes the outcome more predictable. The AI doesn’t always suggest the best solution. In my case, while setting up the database, Gemini’s first attempt used Python DB drivers, raw-querying the database via cursors. This approach has been discouraged since the introduction of ORMs (Object Relational Mappers), which abstract away the query-writing process. My prior experience let me notice this inconsistency and suggest using Tortoise instead. Once corrected, it migrated all the existing flows within a couple of minutes - something that would have taken me hours.</p>
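<p>To make the raw-driver vs ORM contrast concrete, here is the cursor style in miniature, with Python’s stdlib <code>sqlite3</code> standing in for a Postgres driver. The equivalent Tortoise model is only sketched in comments, since Tortoise is a third-party dependency and the field names here are illustrative.</p>

```python
import sqlite3

# Raw-driver style: hand-written SQL pushed through a cursor.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
cur.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
conn.commit()
cur.execute("SELECT email FROM users WHERE id = ?", (1,))
print(cur.fetchone()[0])  # a@example.com

# The Tortoise ORM equivalent is roughly (illustrative, not run here):
#   from tortoise import fields, models
#   class User(models.Model):
#       id = fields.IntField(pk=True)
#       email = fields.CharField(max_length=255)
#   user = await User.create(email="a@example.com")
# The ORM writes the SQL for you and keeps models and schema in one place.
```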
<p>Before I hit the daily quota limits for the Gemini model, we had set up half of the project and tested the critical flows: fetching Notion data and generating AI insights from the fetched content.</p>
<h1 id="heading-step-4-building-the-website">Step 4: Building the website</h1>
<p>Back in college, I worked on the frontend a lot, designing beautiful web pages and building them with all those animations and gradients. But over the past year, I had lost touch with frontend work. I also always felt there was a lot of repetitive work involved in building UI components. The boom of AI tools in this space was promising enough to give them a try.</p>
<p><a target="_blank" href="https://v0.app/community/saas-landing-page-fnLkUW05eg3"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755980126993/04699abc-42b5-437a-bfab-1ed2a0bfcbef.png" alt="A screengrab showing the templates section on the v0 website" class="image--center mx-auto" /></a></p>
<p>I asked a couple of my experienced frontend friends, and v0 came up as a common suggestion for building landing pages. They were not wrong: v0 already had a large number of community-contributed templates, more than enough for my use case. I selected one and refined it using the v0 online editor, prompting it with what I was trying to build along with a color theme I had in mind.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755980402567/91d1c3da-084f-477b-b180-b1cc4d64cbd9.png" alt="Mockup of a website with a purple gradient background, featuring a journaling app called &quot;MindfulPages.&quot; The site highlights features like AI-powered journaling, motivational messages, and connectivity with Notion." class="image--center mx-auto" /></p>
<p>Within a few minutes, the first draft was ready. It looked promising, but it wasn’t there yet - roughly a 60% match with what I had in mind. Following up with v0 stopped yielding improvements after a point, so I cloned the website in its current state locally and continued building it with Gemini.</p>
<p>As far as I’m concerned, Gemini did a great job understanding the project and implementing the changes I had in mind. I ran into an issue caused by a Node version mismatch while running the Next.js website locally, and Gemini aptly recognised it and suggested a fix. I was genuinely amazed that all my UI suggestions were just a prompt away.</p>
<p>Suggestions like removing some sections, making others responsive for phones, and tinkering with background gradients were all done within a couple of minutes. This let me focus on the content I wanted to put in, rather than spending my energy on designing the website. Apart from designing components, the Gemini agent also acted as my personal SEO expert: suggesting SEO-focused meta tags, helping add OpenGraph details, and adding a sitemap file. It also helped me refine the content, fixing grammatical mistakes and, at times, suggesting better words to convey the same thing. It was like having a trio of frontend developer, content writer, and SEO expert by my side, helping me close important tasks with a couple of prompts.</p>
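<p>The exact tags Gemini added aren’t shown here, but a typical SEO/OpenGraph block for a landing page’s <code>&lt;head&gt;</code> looks something like this (titles, descriptions, and URLs are placeholders):</p>

```html
<title>JurnAI - a friend who reads your journal</title>
<meta name="description" content="Placeholder description shown in search snippets." />
<!-- OpenGraph tags control how the link unfurls on social platforms -->
<meta property="og:title" content="JurnAI" />
<meta property="og:description" content="Placeholder description." />
<meta property="og:image" content="https://example.com/og-image.png" />
<meta property="og:url" content="https://example.com/" />
<meta name="twitter:card" content="summary_large_image" />
```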
<p><strong>Pros</strong> (of using v0): A good-enough template to build upon, and a much shorter turnaround time for building a landing page.<br /><strong>Cons</strong>: Saturated designs - it is difficult to stand out when everyone is using the same approach.</p>
<h1 id="heading-step-5-deployment-where-ai-reached-its-limits">Step 5: Deployment - Where AI Reached Its Limits</h1>
<p>After a couple of iterations, I had the frontend and backend integrated and working together locally. Now it was time to deploy everything and test the entire flow end-to-end. Here, I was not able to utilize AI to its fullest. I had to hop between multiple service providers, create accounts, and configure them to read from my code repository. Every provider has its own steps, some of them time-consuming. One-click deployments are cool, but they can be expensive and may not offer all the features you need. Bare-metal server providers may be cheaper, but they come with a steep learning curve. I preferred a hybrid approach. I deployed the frontend on Vercel; the site was up and running in a few minutes. Deploying the backend was trickier. In my experience, deploying a Python application has always been a pain, especially figuring out a compatible startup command. I went with Azure App Service, since I had worked with it before and was familiar with the Azure dashboard. The integrated Copilot in the Azure dashboard was of no use to me: it could not correctly identify the issues I ran into while deploying. Online tools like GPT were not up to the mark either. In one scenario, I needed to retain my Linux web app logs to analyze its behaviour. The Azure dashboard, by default, shows live log trails. When I asked GPT, it said retaining logs for a Linux app was not possible. However, a Google search led me to a blog by the Azure team on downloading a daily dump of my container logs - exactly what I needed. The takeaway: AI is still weak when you have to work with the internal tools of third-party service providers, because its knowledge in that context is limited. After setting up the continuous deployment pipelines, I went ahead and purchased a domain and configured the required DNS records. Here, I used AI as my knowledge partner to understand the significance of different DNS record types and the role of nameserver providers. If not for AI, I may not have put in the effort to search and learn more. AI eases these impromptu learning sessions.</p>
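<p>For reference, a typical record set for a Vercel frontend with an Azure backend looks roughly like this - the hostnames and targets below are illustrative placeholders, not my actual records:</p>

```text
; Illustrative DNS records (zone-file style)
example.com.       A      76.76.21.21               ; apex -> Vercel
www.example.com.   CNAME  cname.vercel-dns.com.     ; www -> Vercel
api.example.com.   CNAME  myapp.azurewebsites.net.  ; backend on Azure App Service
example.com.       TXT    "ownership-verification"  ; domain verification token
```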
<h1 id="heading-step-6-marketing-your-saas">Step 6: Marketing Your SaaS</h1>
<p>Once I was confident the product worked as expected, it was time to market it to the world and onboard real users. If you have read this far, you already know how strong (or weak) my content game is. This time, I switched to GPT-4, explained my project, and drafted a marketing strategy. Here is how I executed it:</p>
<ol>
<li><p>Long Format <a target="_blank" href="https://www.linkedin.com/posts/lakshya-gupta-01_aitooling-vybecoded-ai-activity-7356390628928229377-HvJd?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAADVjXm0BWMM3SfQd5Gx4tcbghdq_55jsvfw">posts</a></p>
<ol>
<li><p>Ideal for: <a target="_blank" href="https://www.linkedin.com/posts/lakshya-gupta-01_aitooling-vybecoded-ai-activity-7356390628928229377-HvJd?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAADVjXm0BWMM3SfQd5Gx4tcbghdq_55jsvfw">LinkedIn</a>, Reddit, <a target="_blank" href="https://news.ycombinator.com/item?id=44771343">Hacker News</a></p>
</li>
<li><p>Tips: Draft an initial message and then refine it using AI. Don’t forget the human touch. It is very easy to identify AI slop, especially if you are posting on Reddit.</p>
</li>
</ol>
</li>
<li><p>Short AI-generated ads/Videos</p>
<ol>
<li><p>Ideal for: Instagram, X, Short Attention Span Platforms</p>
</li>
<li><p>Tips: Veo3, along with basic video editing, does a pretty good job. Low effort, high output.</p>
</li>
</ol>
</li>
</ol>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/void_stack/status/1952775979983655222">https://twitter.com/void_stack/status/1952775979983655222</a></div>
<p> </p>
<ol start="3">
<li><p>Brand Story</p>
<ol>
<li><p>Ideal for: <a target="_blank" href="https://www.producthunt.com/products/jurnai">Product Hunt</a>, <a target="_blank" href="https://peerlist.io/void_stack/project/jurnai--a-virtual-friend">Peerlist</a>, Community Targeted Platforms</p>
</li>
<li><p>Tips: Don’t use too much AI here. Take some time to figure out what your product is about and identify its USP. Use AI to deep-research your competitors or back your claims with figures. These platforms are the best place to get genuine feedback about your project and onboard early adopters.</p>
</li>
</ol>
</li>
<li><p>Shit Posting</p>
<ol>
<li><p>Ideal for: X, Reddit</p>
</li>
<li><p>Tips: AI can never match humans at shit posting, so post as you feel like. Reddit helped me the most in driving initial traffic to my website, but the conversion rate was trash. However, blunt criticism in some Reddit groups actually helped me improve my product before scaling it further.</p>
</li>
</ol>
</li>
</ol>
<p>All in all, when marketing, use AI as a sidekick to brainstorm ideas. Copy-pasting AI-generated content does more harm than good. I may be wrong, but AI-generated content is very easy to spot, and people don’t interact much with such polished content. Instead, consider keeping it raw.</p>
<h1 id="heading-step-7-maintaining-the-momentum">Step 7: Maintaining the Momentum</h1>
<p>Being a solo developer, building a project is a lot of fun, but keeping the momentum going takes more effort. It requires thinking from the user’s perspective to keep adding new features, and an eye for good coding practices to identify engineering optimizations that make the system more resilient. After doing all this, you watch site visits tank and doubt whether all the effort is worth it, or whether you should focus on building distribution instead. I was in this exact situation. To overcome it, I started maintaining a personal project tracker, dividing each requirement into three sections: Product, Engineering, and Marketing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755981387664/a9dc7c7e-5364-4461-a7c9-e91cf79262a9.png" alt class="image--center mx-auto" /></p>
<p>Every time something new popped into my head, I updated this list. I also kept a section containing the most important items to execute immediately; items from the other three sections progressively moved into it. This ensured equal, balanced growth of my product in all directions.</p>
<p>With the help of AI, I only had to come up with ideas, since executing them did not take much time. It was because of AI that I had enough time to organise my requirements and work on them in a structured manner. Otherwise, I would have been so occupied with fixing things that by the time I was free, I would have lost interest in adding any new features.</p>
<h1 id="heading-step-8-the-end">Step 8: The End</h1>
<p>With this, I had built my first <a target="_blank" href="https://jurnai.site">SaaS product</a>, which people across the globe now use daily to make their mornings happier. This journey proved to me that AI isn't a replacement for the developer; it's a force multiplier. It's the tireless junior dev, the brainstorming partner, and the marketing assistant that allows a single person to achieve what once took a team. Isn’t it crazy that you can test your wildest ideas in such a short span of time without depending on anyone else?</p>
<p>While I loved the overall experience of building with AI, the critical part is maintaining a balance and using it efficiently. I have seen criticism that vibe coding is only good for building MVPs, not serious projects. But I think an experienced person can build wonders with AI at an exponentially faster speed.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/theandreboso/status/1954823581273141417">https://twitter.com/theandreboso/status/1954823581273141417</a></div>
<p> </p>
<p><strong>What's your take? How are you using AI in your workflow?</strong> <strong>Let me know in the comments.</strong> If you are reading this, please don’t forget to drop a like. It gives me motivation to put in more effort and continue sharing stuff with you all.</p>
]]></content:encoded></item><item><title><![CDATA[Building My First AI Tool: Technical Deep Dive and Launch Guide]]></title><description><![CDATA[Do you plan to take your weekend project out of localhost and want people around the world to use it? Then this blog is all you need to understand how to make your project - launch ready. In this blog, I will cover how I made JurnAI - an AI powered s...]]></description><link>https://blog.lakshyabuilds.com/building-my-first-ai-tool-technical-deep-dive-and-launch-guide</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/building-my-first-ai-tool-technical-deep-dive-and-launch-guide</guid><category><![CDATA[AI]]></category><category><![CDATA[SaaS]]></category><category><![CDATA[System Design]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[AI Tool ]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[ratelimit]]></category><category><![CDATA[software development]]></category><category><![CDATA[software security]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 17 Aug 2025 04:30:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755381625694/b1103f85-33ba-49d0-994e-e2b83066b0c1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Do you plan to take your weekend project out of localhost and want people around the world to use it? Then this blog is all you need to understand how to make your project - launch ready. In this blog, I will cover how I made JurnAI - an AI powered self-help tool, exploring everything from project setup to infra and deployment. At last, I will also share some critical security features that you should add before making your project public.</p>
<h1 id="heading-introduction">Introduction</h1>
<p>Before beginning, I want to take a moment to tell you about JurnAI. It is a virtual-friend-like AI assistant that you talk to via your personal diaries. You can vent your feelings or rant about your day securely in your Notion diary, and it will send you a personalised, heartwarming email based on what you wrote last night. It is not just another AI assistant, but a friend who understands you and who you can talk to. Interested? Try it from <a target="_blank" href="https://jurnai.site">here</a>.</p>
<p>With more than 20k impressions across social media and notable mentions on <a target="_blank" href="https://www.producthunt.com/products/jurnai">Product Hunt</a> and Hacker News, JurnAI received a significant amount of traffic very quickly. Here’s an overview of the tech that powers it behind the scenes.</p>
<h1 id="heading-technical-overview">Technical Overview</h1>
<p>At a very high level, it is an AI agent integrated into your Notion workspace. Going deeper, there is a client interface to onboard users, a backend server to handle the daily email pipeline, and most importantly, a scalable setup to cater to increasing load.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755380146011/01100f95-f0cc-4b60-92d0-5fdbf0b97ecf.png" alt="Technical high level overview of how JurnAI works" class="image--center mx-auto" /></p>
<p>This is a high-level overview of how JurnAI works behind the scenes. Multiple parts come together to deliver a smooth end-to-end user experience. Before we dive into some critical user flows, here is an overview of the tools and platforms used while building this project. If you are planning to build a public-ready full-stack project, you can use this as a reference.</p>
<ol>
<li><p><strong>Frontend</strong> - Next.js and ShadCN</p>
</li>
<li><p><strong>Backend:</strong></p>
<ol>
<li><p>Server - A FastAPI application</p>
</li>
<li><p>Database - PostgreSQL for storing user info; Redis for implementing the rate-limiting strategy.</p>
</li>
</ol>
</li>
<li><p><strong>Infrastructure:</strong></p>
<ol>
<li><p>Frontend - Vercel.</p>
</li>
<li><p>Server - Azure Web App and <a target="_blank" href="https://railway.com/">Railway</a>.</p>
</li>
<li><p>Database - <a target="_blank" href="https://neon.com/">Neon</a> for PostgreSQL and Railway for Redis.</p>
</li>
<li><p>Security - Cloudflare, for DDoS protection.</p>
</li>
</ol>
</li>
<li><p><strong>AI Tooling:</strong></p>
<ol>
<li><p>Google GenAI Python SDK</p>
</li>
<li><p>Model - Gemini 2.5 Pro</p>
</li>
</ol>
</li>
<li><p><strong>Automation</strong> - <a target="_blank" href="https://cron-job.org/en/">Cron Job</a></p>
</li>
<li><p><strong>Emailing Client</strong> - <a target="_blank" href="https://www.mailgun.com/">Mailgun</a></p>
</li>
<li><p><strong>Domain Provider</strong> - GoDaddy</p>
</li>
<li><p><strong>Analytics</strong> - Google Analytics</p>
</li>
<li><p><strong>Other Tools</strong> - Notion Python SDK</p>
</li>
</ol>
<p>The sections below focus on the technical aspects of some very interesting problems I ran into while building JurnAI and how I solved them. Keep reading to understand how things work under the hood for one of the most loved self-help assistants.</p>
<h1 id="heading-a-secure-user-onboarding-flow">A Secure User Onboarding Flow</h1>
<p>For the project to work, I needed temporary read-only access to a user’s diary entries from the previous night. Notion makes this process a breeze, thanks to its public integrations. A user grants access to their workspace via a one-time authorization and can revoke that access at any time, making this the go-to option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755380464203/082a3547-3181-4e98-bd79-17751a613b8c.png" alt="Flowchart illustrating the user onboarding process for JurnAI. It shows the user accessing the JurnAI website, which interfaces with Notion Integration for authorization, a backend server for authentication and data fetching, and a database for integration token retrieval." class="image--center mx-auto" /></p>
<p>While onboarding a new user, the process is as follows:</p>
<ol>
<li><p>User clicks on the Connect with Notion button on our website.</p>
</li>
<li><p>The Notion authorization page opens in a new tab. Once the user completes authorization, they are redirected to the main site with a temporary auth code as a query param.</p>
</li>
<li><p>The server receives this code and makes a request to Notion via their SDK to generate an access token. Upon successful authentication, Notion returns basic user details, such as the email address, along with the access token. The token is then encrypted and stored securely in our database.</p>
</li>
<li><p>We can then query the user’s diary entries by passing this access token along with their Notion database ID. This ensures we can access only the content the user has explicitly shared.</p>
</li>
</ol>
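<p>Step 3 above boils down to a standard OAuth code-for-token exchange. As a rough sketch (the credentials and redirect URI below are placeholders, and the endpoint is Notion’s documented <code>/v1/oauth/token</code> route), the request can be built like this:</p>
<pre><code class="lang-python">import base64
import json

NOTION_TOKEN_URL = "https://api.notion.com/v1/oauth/token"

def build_token_request(code, client_id, client_secret, redirect_uri):
    """Build the request that exchanges the one-time code for an access token."""
    # Notion expects HTTP Basic auth with the integration's client credentials.
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    })
    return NOTION_TOKEN_URL, headers, body
</code></pre>
<p>In the real flow, the token from the response would then be encrypted (for example, with a symmetric key held in your secret store) before being written to the database, as described above.</p>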
<p>A critical aspect of this interaction was keeping the client informed. The whole process of authenticating with Notion and then creating database entries on our end can take some time. To address this, I added a loader animation on the client side, keeping the user updated about the progress.</p>
<p>Additionally, I found it important to ensure that system-level errors from failed authentication are not exposed to the user. For example, an invalid code caused Notion authentication to fail. In such cases, a generic server error asking the user to retry was returned instead, keeping the server internals hidden.</p>
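<p>A minimal sketch of this pattern (not JurnAI’s actual code): log the real exception server-side with a short correlation ID, and hand the client only a generic payload.</p>
<pre><code class="lang-python">import logging
import uuid

logger = logging.getLogger("onboarding")

def user_facing_error(exc):
    """Log the real failure server-side; expose only a generic message."""
    error_id = uuid.uuid4().hex[:8]  # lets you match a user report to server logs
    logger.error("onboarding failed [%s]: %r", error_id, exc)
    return {
        "error": "Something went wrong while connecting to Notion. Please try again.",
        "error_id": error_id,
    }
</code></pre>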
<h1 id="heading-automating-a-daily-content-generation-pipeline">Automating a Daily Content Generation Pipeline</h1>
<p>At night, you write your diary entry. In the morning, at exactly 8 AM, you receive a personalized AI-generated mail based on what you wrote. Now imagine doing this at scale for thousands of users.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755380634009/01302554-6548-48ad-a923-68fbb01edc1e.png" alt="Diagram of a feedback generation pipeline showing a sequence of processes involving a Cronjob, backend server, database, Notion integration, LLM, mail client, and user. Steps include scheduling tasks, fetching tokens, retrieving content, generating feedback, drafting mail, and sending mail, culminating in the user receiving the mail." class="image--center mx-auto" /></p>
<p>This was a hefty task, both in terms of compute and resource utilization, because of the multiple steps involved. This is how I implemented it.</p>
<ol>
<li><p>Scan the database for active users. For each user, the following steps need to be executed.</p>
<ol>
<li><p>Fetch the latest diary entry generated in the last 24 hours.</p>
</li>
<li><p>Generate AI reflections based on this entry via integrated GenAI service.</p>
</li>
<li><p>Draft a mail template using the above response.</p>
</li>
<li><p>Trigger Mail via the mailing client.</p>
</li>
</ol>
</li>
<li><p>AI content generation is time-consuming, taking around 15 seconds on average to generate a single message. Due to Python's Global Interpreter Lock (GIL), achieving true parallelism for CPU-bound tasks is complex. However, since our pipeline is I/O-bound (waiting on Notion, AI models, and email services), we can use concurrency to great effect.</p>
</li>
<li><p>To optimize the above flow at scale, I divided the list of available users into smaller batches. For each batch, an asynchronous task is scheduled per user, and these tasks are executed concurrently. Once a batch is completed, execution moves on to the next batch.</p>
</li>
</ol>
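<p>The batching in step 3 can be sketched with plain <code>asyncio</code> (the batch size and the per-user work below are illustrative stand-ins for the real fetch-generate-mail steps):</p>
<pre><code class="lang-python">import asyncio

async def process_user(user):
    """Placeholder for the real steps: fetch entry, generate reflection, send mail."""
    await asyncio.sleep(0.01)  # stands in for the I/O-bound work
    return f"mailed {user}"

async def run_pipeline(users, batch_size=10):
    results = []
    for start in range(0, len(users), batch_size):
        batch = users[start:start + batch_size]
        # Schedule the whole batch concurrently, then wait for it to finish
        # before moving on to the next batch.
        results.extend(await asyncio.gather(*(process_user(u) for u in batch)))
    return results

sent = asyncio.run(run_pipeline([f"user{i}" for i in range(25)], batch_size=10))
print(len(sent))  # 25
</code></pre>
<p>Because <code>asyncio.gather</code> runs a whole batch concurrently, each batch costs roughly as much wall-clock time as its slowest user rather than the sum of all of them, while the batch boundary caps how many external calls are in flight at once.</p>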
<p>This approach struck the perfect balance between efficiency and resource optimization. I could process a large number of users together without stressing my servers or getting rate-limited by the external services. Since we are dealing with multiple I/O calls, it was important to use asynchronous calls everywhere to prevent thread blocking. Do you have a better approach for handling this tricky situation? I am all ears. Feel free to comment or reach out to me.</p>
<h1 id="heading-creating-a-cron-job-for-true-automation">Creating a Cron Job for True Automation</h1>
<p>The final piece of the jigsaw puzzle was running the above pipeline automatically at a scheduled time. For this, I created an endpoint which, when hit, triggers the workflow in the background, and a cron job that runs at a fixed time and hits that endpoint. Think of a cron job as a bot that performs the action you specify at the time you specify.</p>
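<p>If you ever self-host the scheduler instead of using cron-job.org, the same idea is a single crontab entry (the endpoint path and token below are hypothetical):</p>
<pre><code class="lang-bash"># minute hour day-of-month month day-of-week  command
# Every day at 08:00 server time, hit the trigger endpoint.
0 8 * * * curl -fsS -H "Authorization: Bearer MY_SECRET" https://api.jurnai.site/internal/daily-pipeline
</code></pre>
<p>Keep in mind that cron uses the server’s timezone, so “8 AM” should be defined relative to your users, not your host.</p>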
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755380914157/ae44c087-cfcb-4675-931a-35e98f398bc4.png" alt="A cartoon robot is sitting at a desk in front of a computer. In the first panel, it's relaxed, playing solitaire and holding a coffee cup, with the clock showing 9:00. In the second panel, the robot looks panicked as the clock shows 3:10, and a &quot;send mail&quot; prompt appears on the screen over the solitaire game." class="image--center mx-auto" /></p>
<p>Additionally, we can also create an Azure Function App for performing the above action. If you want to know how they work, you can check out this <a target="_blank" href="https://void-ness.hashnode.dev/build-your-next-viral-project-serverless-using-azure-function-and-copilot">blog</a>.</p>
<h1 id="heading-multi-server-setup-to-increase-resilience">Multi Server Setup to Increase Resilience</h1>
<p>I deployed my server-side code on two different service providers. I know it sounds a bit unconventional, but it was a strategic decision to balance cost, performance, and resilience.</p>
<p>My server handled two critical operations - user onboarding and the daily content generation pipeline - each with different SLA requirements. The user onboarding flow was not resource-intensive, but it needed to be up all the time. The daily content generation pipeline, on the other hand, was a compute-heavy job that ran in the background at a fixed time. To meet these varying demands, I decided to maintain two replicas of my server - one deployed on Azure and the other on Railway.</p>
<p>This allowed me to configure compute per requirement. For example, it made sense to spin down my background server when inactive, leading at most to some cold-start delay, which saved compute credits. At the same time, I could keep my foreground server active all the time but scale it down to lower computational requirements. Since both servers run from the same source code, this did not complicate my development experience. The dual-server setup also provided a backup option in case either one suffered a production outage.</p>
<h1 id="heading-key-things-before-launching-your-project">Key Things Before Launching Your Project</h1>
<p>Imagine you make your project public and it goes viral. This is your moment to shine, and suddenly your server usage spikes. Your credits start running out exponentially. Unfortunately, your potential users are left with a product that does not work. Don’t let your momentum fade away because you were too lazy to implement basic security features. Continue reading to avoid making the mistakes I did.</p>
<ol>
<li><p><strong>CORS (Cross Origin Resource Sharing)</strong></p>
<p> Browsers enforce this policy: scripts served from unrecognized origins cannot read responses from your server. It can be configured easily in your server code. Having this in place ensures that only your website can call the server from a browser, reducing the chances of server spamming. Note that CORS is enforced by browsers only; it does not stop non-browser clients such as curl scripts or bots.</p>
</li>
<li><p><strong>Cloudflare Edge Level Protection</strong></p>
<p> When I made my project public, the last thing I expected was a bot attack. My server was hit with thousands of requests within a few seconds. Though nothing of value was lost, it did increase compute usage to a minor extent. This could have been prevented easily by moving the domain behind Cloudflare. Once behind it, all requests are routed via Cloudflare’s secure edge servers. Cloudflare has built-in support for limiting the maximum number of requests in a time frame, and it also allows blocking traffic from suspicious IPs. All of this is handled directly by Cloudflare, making the process seamless for you. I cannot stress this enough: please audit your app before making it public, to ensure no “AI vibe” makes it into the hands of bad actors.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755381427615/4694b83f-0575-4a1b-930a-1fab6aaa3ad9.png" alt="Graph showing threat data over the past 30 days. Total threats: 296. Top country: France. Top threat type: Bad browser. Peaks on August 31 and September 5 indicate higher threat activity." class="image--center mx-auto" /></p>
</li>
<li><p><strong>Application Level Rate Limiting</strong></p>
<p> Cloudflare handles rate limiting at the edge, that is, before requests even reach your server. In case you cannot set that up, consider adding application-level rate limiting to critical routes. FastAPI does not ship a rate limiter of its own, but third-party libraries such as slowapi integrate with it easily. To consistently track and limit incoming traffic, we need a fast store like Redis, which keeps a count of the requests made by a particular client in a given timeframe. Because Redis is an in-memory database, retrieval is fast enough to avoid any notable increase in overall request latency.</p>
</li>
</ol>
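<p>To make the rate-limiting idea concrete, here is the fixed-window counter pattern in plain Python. This is a single-process sketch: in production, the dictionary would be replaced by Redis (an <code>INCR</code> plus <code>EXPIRE</code> per window) so that all server workers share the same counts. The limits below are arbitrary.</p>
<pre><code class="lang-python">import time

class FixedWindowRateLimiter:
    """In-memory stand-in for the Redis INCR + EXPIRE pattern."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (client_key, window_number) maps to request count

    def allow(self, client_key, now=None):
        now = time.time() if now is None else now
        # All requests in the same window share one counter bucket.
        bucket = (client_key, int(now // self.window))
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
print([limiter.allow("1.2.3.4", now=5) for _ in range(4)])  # [True, True, True, False]
</code></pre>
<p>Libraries such as slowapi package this pattern up behind a route decorator, but it is worth knowing the few lines it boils down to.</p>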
<h1 id="heading-one-small-step-many-benefits-integrating-analytics">One Small Step, Many Benefits - Integrating Analytics</h1>
<p>When taking the leap of faith from localhost to making your project public, consider integrating analytics into your website. This helps you monitor user behaviour and incoming traffic, and better understand what works and what does not. I personally prefer Google Analytics because of the easy setup and the support for creating custom events.</p>
<p>If you are looking for a simpler way to monitor incoming traffic, you can also use Vercel’s built-in analytics. It requires a one-time setup and charges based on your usage.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>That’s it for now. We've covered the core technologies behind JurnAI, navigated some challenging engineering hurdles, and outlined the essential security checks that can make or break a public launch. I hope this article helps you before your next public launch. Till then, feel free to check out JurnAI via this <a target="_blank" href="https://jurnai.site">link</a>.</p>
<p>If you have any thoughts or questions, please connect with me on <strong>X</strong>. And if you found this article useful, don't forget to leave a like! Finally, let's make this a two-way conversation. <strong>What's your #1 tip for developers preparing to go public with a project?</strong></p>
]]></content:encoded></item><item><title><![CDATA[Build Your Next Viral Project Serverless using Azure Function and CoPilot]]></title><description><![CDATA[A wise person once said, “Don’t do the hard work, do the smart work.” Imagine you are an aspiring blog writer. To showcase your work, you have two options. You can either set up your own blogging infrastructure by self-hosting your website and mainta...]]></description><link>https://blog.lakshyabuilds.com/build-your-next-viral-project-serverless-using-azure-function-and-copilot</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/build-your-next-viral-project-serverless-using-azure-function-and-copilot</guid><category><![CDATA[serverless computing]]></category><category><![CDATA[Azure Functions]]></category><category><![CDATA[github copilot]]></category><category><![CDATA[Cloud Development ]]></category><category><![CDATA[Backend Development]]></category><category><![CDATA[scalability]]></category><category><![CDATA[Cost efficiency]]></category><category><![CDATA[image generation]]></category><category><![CDATA[Tech Tutorial]]></category><category><![CDATA[cloud architecture]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Fri, 13 Jun 2025 15:30:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749827812206/4b90c887-2c4e-4842-8b33-746e2b0f2af0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A wise person once said, “Don’t do the hard work, do the smart work.” Imagine you are an aspiring blog writer. To showcase your work, you have two options. You can either set up your own blogging infrastructure by self-hosting your website and maintaining everything end-to-end, or you can consider using existing blog hosting websites such as Hashnode. It is easy to work with, you pay a small fee based on your usage, and it is usually less hassle to begin with. This is similar to how serverless compute works. 
Instead of self-hosting your entire backend infrastructure, you can host them on serverless computes, popularly known as functions, for small repetitive tasks.</p>
<h2 id="heading-what-do-i-mean-by-serverless">What do I Mean by Serverless?</h2>
<p>The most popular way of deploying your backend application is by using the offerings of cloud providers such as Azure and AWS. They provide dedicated servers with Linux and Windows runtimes to host a variety of services written in popular programming languages. They give users the flexibility to choose their required storage, memory, and processing power. However, they are costly and mostly charged at a fixed per-hour rate. Serverless, on the other hand, charges users based on their actual usage, i.e., in compute seconds used. This lets you scale as per your needs without worrying about costs incurred during non-peak times.</p>
<h2 id="heading-servers-with-benefits">Servers with Benefits?</h2>
<ol>
<li><p><strong>Cost Savings</strong> - For small-duration, repetitive tasks that are unevenly spread across the day, opting for a serverless infrastructure can help save costs in terms of the infrastructure required to host it compared to deploying on a dedicated server.</p>
</li>
<li><p><strong>Scalability</strong> - With flexible consumption plans, users need not worry about scaling their functions app. The servers auto-scale depending on the traffic, again only costing you for the compute seconds used.</p>
</li>
<li><p><strong>Less Hassle</strong> - Since you don’t need to set up the entire servers on your own, it helps in getting rid of managing operations end-to-end. The deployments are blazing fast for REST application-based servers.</p>
</li>
<li><p><strong>High Availability</strong> - Some cloud providers like Azure provide functionality like "always on," which prevents your function from sleeping, thus preventing the latency arising from cold-start.</p>
</li>
</ol>
<p>Different cloud-service providers have their own versions of serverless offerings. Some of the popular ones are AWS’s Lambda functions, Microsoft Azure’s Functions App (with flexible consumption plan), and Google Cloud Platform's Cloud Functions. For the purpose of this tutorial, we will try building a simple image generation service over Azure’s Functions App. You may go ahead with any other service provider offering too. I went ahead with Azure because my student credits were about to expire xD.</p>
<h2 id="heading-chaliye-shuru-karte-hai">Chaliye Shuru karte hai!</h2>
<p>So, today I will try building an Azure Functions App which will be HTTP triggered. So every time someone clicks on the link, the function will be triggered. The function is pretty simple. It takes the user’s bank balance as input from the query parameters. Based on the balance, it gives advice to the user on whether they should resign or not. Instead of returning bland advice, we will add a pinch of humor. Our Honest Billa (cat) will spit facts that are hard to digest in a PNG format. Pretty simple, right?</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Here is what my current system setup looks like. If you have something different, please find alternatives to the tools listed below. They are just a GPT query away.</p>
<p>System OS - Windows 11<br />IDE - Visual Studio Code<br />Prerequisites - Python v3.11 and Node.js v23</p>
<h3 id="heading-installing-the-required-packages">Installing the Required Packages</h3>
<p>First of all, we need to install some libraries which will come in handy during local development.</p>
<pre><code class="lang-bash">npm i -g azure-functions-core-tools@4 --unsafe-perm <span class="hljs-literal">true</span>
</code></pre>
<p>This is the core library required to scaffold a local Azure Functions App and also run it locally. To make things easy for us, we can install the Azure Functions VS Code <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions"><strong>extension</strong></a>. With this extension, creating new functions and deploying them to Azure are just a set of a few clicks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749811134565/b7ea7e29-9192-432a-83e5-ce8d87706ee9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-scaffolding-the-initial-project">Scaffolding the Initial Project</h3>
<p>Once you have the extension installed, log in to your Azure account to access all the resources you have already created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749827707126/0478bd0a-c0b8-49cf-be55-6762a7fd5ec3.png" alt class="image--center mx-auto" /></p>
<p>Then, go to the workspace tab at the bottom of the extension. From there, you can scaffold a new local function project. I chose Python as the project's language and the HTTP trigger template, and set the authorization level to anonymous so the function's link is accessible to anyone without extra credentials.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749811600633/a3d3a283-7938-4a76-b328-5042fa93e388.png" alt /></p>
<h3 id="heading-developing-the-core-logic-of-function-with-copilot">Developing the Core Logic of Function with CoPilot</h3>
<p>With the initial scaffolding done, it’s time to bring out our secret weapon - GitHub CoPilot. With the power of GPT-4, it will be able to build the initial draft of our application within a few prompts.</p>
<p>We will be editing the <code>function_app.py</code> file. It contains the core logic of our function. Since we are going to generate meme images and return them as a response, we need to store some static content. For that, we will create a separate <code>static</code> folder for the images.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749812728826/c022d06c-ebb1-4c20-8a55-ed1ee337d270.png" alt /></p>
<p>After a few iterations, we have an initial draft of the function ready to be tested. We will run it locally via the Azure function extension.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749812956361/76f0a3ae-c934-4d2d-9793-7c24e45d3a31.png" alt /></p>
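<p>To give a flavour of what CoPilot produced, here is a pared-down sketch of the decision logic only. The thresholds and Billa’s one-liners below are my own illustration, not the repository’s actual strings; the real function additionally reads the <code>balance</code> query param, renders the chosen line onto the static cat image with Pillow, and returns it as a PNG response.</p>
<pre><code class="lang-python">def choose_advice(balance: float) -> str:
    """Honest Billa's verdict for a given bank balance (thresholds are made up)."""
    if balance < 10_000:
        return "Resign? You cannot even afford the farewell party. Back to work."
    if balance < 1_000_000:
        return "Hold on a little longer, unless instant noodles are a lifestyle choice."
    return "Go ahead and resign. Even the cat approves."
</code></pre>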
<h3 id="heading-executing-the-function-locally">Executing the Function Locally</h3>
<p>To run this function locally, go to the run and debug section in VS Code. From there, choose the <code>Attach to Python functions</code> option in the dropdown and click on the small green triangle button. And voila, your function will be ready to test locally!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749827741967/14adfcd3-2f81-484e-9539-901625b38d92.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749825341955/d6fd9ede-9b68-4717-a7b2-08df8787d7da.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-deploying-to-azure-function-app-in-few-clicks">Deploying to Azure Function App in Few Clicks</h3>
<p>Once you have verified the basic flow, it is time to deploy this over Azure. For that, ensure that an Azure Functions App exists in your connected Azure account. You can create it either via the Azure web dashboard or by using the VS Code functions extension. Here is a link to the official article from the Microsoft dev team explaining the process in <a target="_blank" href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-app-portal?tabs=core-tools&amp;pivots=flex-consumption-plan"><strong>detail</strong></a>.</p>
<p>With the app created successfully, deploying it to Azure is just a few clicks away. Click on the Deploy to Azure option and follow the on-screen instructions. It will warn that any existing data deployed on the Azure Functions App will be overwritten; for a fresh app, you can safely proceed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749827760316/b7e6b772-211a-449c-9f12-22707e1a7cd3.png" alt class="image--center mx-auto" /></p>
<p>And you're done. Now go to the Azure dashboard and open your function app from the Azure resource center. From there, you will find the URL where your function app is hosted, allowing you to trigger it by making the appropriate request.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749814240017/d428a8b5-e9e8-4aa1-ba23-a965693a52ac.png" alt class="image--center mx-auto" /></p>
<p>LLM not working as expected for you? No worries, here is a <a target="_blank" href="https://github.com/void-ness/ShouldYouResign">link</a> to my GitHub code repository. Just clone it and deploy it to your own Azure function app. Feel free to improve it as per your need. Also, if you want to try my deployed function app, you can check it using this <a target="_blank" href="https://shouldyouresign.azurewebsites.net/api/if?balance=100">link</a>. It will be up and running (unless my student credits have expired).</p>
<h2 id="heading-what-could-be-improved"><strong>What Could Be Improved?</strong></h2>
<p>Functions apps are suitable for lightweight applications that can be executed quickly, such as CSV processing or triggering mail alerts. However, they are not meant for heavy computing tasks like real-time video generation due to their limited resources. Generally, any serverless resource should not be used for heavy computing as it defeats their purpose. In our small project, we uploaded a couple of static images and fonts with our application. Since the size is small, it won’t be a big issue, but ideally, these resources should be stored in a blob storage, independent of our function app, and accessed via a typical CDN setup. However, this project is for educational purposes, so we don’t need to think that hard.</p>
<h2 id="heading-why-did-i-do-this"><strong>Why Did I Do This?</strong></h2>
<p>You might be wondering about the point of this project. Who would make an HTTP request every time for this silly thing? It depends on how you integrate it into your workflow. For example, you can add this to your personal finance app. So every time a user considers an impulsive purchase, you can fetch their balance in real-time and show them this result. One can integrate this into their chatbot with a similar use case. The opportunities are endless, limited only by our vision and imagination. The main advantage that Azure Functions App offers over traditional REST API projects is the ease of setting up the project and deploying it with better cost control.</p>
<p>Today, we covered what serverless resources are and how they can power your lightweight backend applications. We also built a small serverless function, deployed it using Azure, and tested it end-to-end with the help of LLMs and useful VS Code extensions. I hope you learned something new today. I would love to know what you will build as your next serverless app. If you face any difficulty, feel free to connect with me.</p>
]]></content:encoded></item><item><title><![CDATA[Unit Testing in Python Simplified with PyTest and Sanic]]></title><description><![CDATA[Speed and reliability are two of the most sought-after skills when hiring software engineers for fast-paced teams. Engineers are expected to deliver quickly and get the code right on the first try. In such an agile environment, quality control measures oft...]]></description><link>https://blog.lakshyabuilds.com/unit-testing-in-python-simplified</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/unit-testing-in-python-simplified</guid><category><![CDATA[python sanic]]></category><category><![CDATA[async testing]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Python]]></category><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[unit testing]]></category><category><![CDATA[pytest]]></category><category><![CDATA[python-testing]]></category><category><![CDATA[Integration Testing]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Mon, 05 May 2025 04:40:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746391041348/6294496e-126a-452e-869d-b68365f7066b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Speed and reliability are two of the most sought-after skills when hiring software engineers for fast-paced teams. Engineers are expected to deliver quickly and get the code right on the first try. In such an agile environment, quality control measures often take a backseat. So, how can you, as an engineer, balance speed with quality? The answer is Test Driven Development (TDD). Unit testing helps discover potential bugs early and gives you the confidence to ship new features. In today’s blog, I will guide you through unit testing in Python with easy-to-follow code samples. 
Make sure to read until the end, as we will discuss the role of unit testing in catching bugs early.</p>
<h2 id="heading-why-is-testing-needed">Why is Testing Needed?</h2>
<p>Quality control measures like black box testing and integration testing are crucial for catching critical bugs that impact user flows. However, they can be time-consuming and often get rushed when deadlines are tight. This happens more frequently with lean, high-performing teams. So, how do you ensure overall system stability without relying solely on your QA team? Developer tests are the answer. Think of them as tests written at the code level. With modern testing tools, they are quick to run and analyze. Instead of manually testing all the flows, you can run these test suites locally to ensure no major flows are breaking. These tests focus on specific pieces of code and are relatively easy to write. The mental shift towards writing tests before actual code is known as test-driven development.</p>
<h2 id="heading-testing-in-python-with-sanic-and-why-it-can-be-challenging">Testing in Python with Sanic and Why It Can Be Challenging</h2>
<p>Sanic is an asynchronous framework in Python for building fast and highly scalable systems. However, there is limited documentation on testing your Sanic application. While the official Sanic documentation provides some examples, they lack depth and diverse coding examples. Additionally, resources for testing async Sanic apps are even scarcer. In this blog, we will cover testing an eCommerce application using Sanic and PyTest.</p>
<h2 id="heading-setting-up-the-app">Setting Up the App</h2>
<p>For the purpose of this blog, imagine we have an eCommerce application that sells bamboo t-shirts. Let's focus on the code responsible for placing orders.</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OrderPlacementManager</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, user, cart, amount_payable</span>):</span>
        self.user = user
        self.cart = cart
        self.amount_payable = amount_payable

    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">place_order</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-string">"""
        Place an order by validating the cart, processing payment, and creating the order.
        """</span>
        self.validate_cart()
        payment_mode = self.get_payment_mode()

        order_id = <span class="hljs-keyword">await</span> self.generate_order_id()
        order = <span class="hljs-keyword">await</span> self.create_order(
            order_id=order_id,
            user=self.user,
            cart=self.cart,
            amount_payable=self.amount_payable,
            payment_mode=payment_mode
        )

        <span class="hljs-keyword">return</span> order

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">validate_cart</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-string">"""
        Validate the cart to ensure it is not empty and contains valid items.
        """</span>
        <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> self.cart <span class="hljs-keyword">or</span> len(self.cart) == <span class="hljs-number">0</span>:
            <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"Cart is empty. Cannot place an order."</span>)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_payment_mode</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-string">"""
        Get the payment mode for the order.
        """</span>
        <span class="hljs-keyword">return</span> <span class="hljs-string">"COD"</span> <span class="hljs-keyword">if</span> self.amount_payable &gt; <span class="hljs-number">0</span> <span class="hljs-keyword">else</span> <span class="hljs-string">"ONLINE"</span>
</code></pre>
<p>The <code>OrderPlacementManager</code> class is responsible for placing orders. Any unexpected change in this class can directly impact order creation. Now, let's write unit test cases for the <code>place_order</code> function using the <code>pytest-asyncio</code> library. If you don’t have it installed, you can do so with the following command:</p>
<pre><code class="lang-bash">pip install pytest-asyncio
</code></pre>
<h2 id="heading-setting-up-the-testing-files">Setting Up the Testing Files</h2>
<p>We begin by importing the required libraries and setting up the test directory. For PyTest to discover a test file, its name should begin with the <code>test_</code> prefix, for example, <code>test_order_creation.py</code>. A similar convention applies to test classes (names starting with <code>Test</code>) and test functions (names starting with <code>test_</code>). Here’s a sample file:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pytest <span class="hljs-comment"># the testing library</span>
<span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> AsyncMock, Mock
<span class="hljs-keyword">from</span> app.managers.order_placement_manager <span class="hljs-keyword">import</span> OrderPlacementManager <span class="hljs-comment"># the manager to test</span>

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestOrderPlacementManager</span>:</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_validate_cart_with_empty_cart</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[], amount_payable=<span class="hljs-number">100.0</span>)
        <span class="hljs-keyword">with</span> pytest.raises(ValueError, match=<span class="hljs-string">"Cart is empty. Cannot place an order."</span>):
            manager.validate_cart()

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_validate_cart_with_non_empty_cart</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>], amount_payable=<span class="hljs-number">100.0</span>)
        <span class="hljs-comment"># Should not raise any exception</span>
        manager.validate_cart()
</code></pre>
<p>In the first test case, we check if the cart validation function raises an error for empty input. By using <code>pytest.raises</code>, we expect the function <code>manager.validate_cart()</code> to raise a <code>ValueError</code>, and we validate that part. In the second test case, we ensure the validation function does not throw any error for proper input.</p>
<h2 id="heading-running-the-tests">Running the Tests</h2>
<p>To run these tests, use the following command in your terminal. PyTest automatically discovers files whose names start with the <code>test_</code> prefix and executes the test functions inside them.</p>
<pre><code class="lang-bash">pytest
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746307639509/07aeba85-d2f9-499a-8b27-ce69d13dde5b.png" alt class="image--center mx-auto" /></p>
<p>As we can see, both of our test cases passed successfully. Now, let's expand our scope to test the payment mode function.</p>
<pre><code class="lang-python">    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">place_order</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-string">"""
        Place an order by validating the cart, processing payment, and creating the order.
        """</span>
        self.validate_cart()
        payment_mode = self.get_payment_mode()

        <span class="hljs-keyword">if</span> payment_mode == <span class="hljs-string">"COD"</span>:
            payment_status = <span class="hljs-string">"pending"</span>
        <span class="hljs-keyword">else</span>:
            <span class="hljs-keyword">await</span> self.payment_processor.process_payment(self.amount_payable)
            payment_status = <span class="hljs-string">"completed"</span>

        <span class="hljs-comment"># existing implementation</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_payment_mode</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-string">"""
        Get the payment mode for the order.
        """</span>
        <span class="hljs-keyword">return</span> <span class="hljs-string">"COD"</span> <span class="hljs-keyword">if</span> self.amount_payable &gt; <span class="hljs-number">0</span> <span class="hljs-keyword">else</span> <span class="hljs-string">"ONLINE"</span>
</code></pre>
<p>We have introduced a payment processor to handle online payments. To distinguish between online and Cash-on-Delivery payments, it is important for the <code>get_payment_mode</code> function to work as expected. Let’s add some unit test cases for the same:</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestOrderPlacementManager</span>:</span>    

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_get_payment_mode_cod</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>], amount_payable=<span class="hljs-number">100.0</span>)
        <span class="hljs-keyword">assert</span> manager.get_payment_mode() == <span class="hljs-string">"COD"</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_get_payment_mode_online</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>], amount_payable=<span class="hljs-number">0.0</span>)
        <span class="hljs-keyword">assert</span> manager.get_payment_mode() == <span class="hljs-string">"ONLINE"</span>
</code></pre>
<p>Notice how we pass dummy values as arguments when initializing the manager. These are referred to as mock data, used to test the core logic of the function. Depending on the complexity of the function, it can be hardcoded or generated dynamically for testing. Now, let us see if the newly added test cases are running as expected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746308497117/b753fd77-ccfa-46a2-876b-81bb2c0d98f4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-test-mocks">Test Mocks</h2>
<p>Great, let’s move on to the next step: testing the order placement function as a whole. This function is critical and depends on other modules like the <code>PaymentProcessor</code>. While unit testing, you may want to focus on a specific piece of code. You can achieve this by mocking the response of such function calls. This is where Mock comes into play. Here’s an example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> AsyncMock
<span class="hljs-keyword">import</span> pytest

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestOrderPlacementManager</span>:</span>

<span class="hljs-meta">    @pytest.mark.asyncio</span>
    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_place_order</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>], amount_payable=<span class="hljs-number">100.0</span>)

        <span class="hljs-comment"># Mock generate_order_id and create_order</span>
        manager.generate_order_id = AsyncMock(return_value=<span class="hljs-string">"ORDER123"</span>)
        manager.create_order = AsyncMock(return_value={<span class="hljs-string">"order_id"</span>: <span class="hljs-string">"ORDER123"</span>, <span class="hljs-string">"status"</span>: <span class="hljs-string">"created"</span>})

        <span class="hljs-comment"># Call place_order</span>
        order = <span class="hljs-keyword">await</span> manager.place_order()

        manager.generate_order_id.assert_called_once()
        manager.create_order.assert_called_once_with(
            order_id=<span class="hljs-string">"ORDER123"</span>,
            user=<span class="hljs-string">"test_user"</span>,
            cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>],
            amount_payable=<span class="hljs-number">100.0</span>,
            payment_mode=<span class="hljs-string">"COD"</span>
        )
</code></pre>
<p>We use the <code>@pytest.mark.asyncio</code> decorator on this test function because, unlike other test cases, this one tests an asynchronous function. This decorator allows PyTest to set up an event loop and execute this asynchronous function for us.</p>
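<p>As a side note not covered above (this assumes a recent pytest-asyncio version): pytest-asyncio runs in “strict” mode by default, where every async test needs the explicit marker. Versions 0.17 and later let you switch to “auto” mode in <code>pytest.ini</code> so the decorator can be omitted:</p>

```ini
# pytest.ini (assumption: pytest-asyncio >= 0.17, which introduced asyncio_mode)
[pytest]
asyncio_mode = auto
```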
<h3 id="heading-method-mocking">Method Mocking</h3>
<pre><code class="lang-python"><span class="hljs-comment"># Mock generate_order_id and create_order</span>
manager.generate_order_id = AsyncMock(return_value=<span class="hljs-string">"ORDER123"</span>)
manager.create_order = AsyncMock(return_value={<span class="hljs-string">"order_id"</span>: <span class="hljs-string">"ORDER123"</span>, <span class="hljs-string">"status"</span>: <span class="hljs-string">"created"</span>})
</code></pre>
<p>Here, we specify that whenever the <code>generate_order_id</code> function is called, it should return an awaitable that resolves to “ORDER123” instead of executing its actual implementation. Notice that we used <code>AsyncMock</code> instead of <code>Mock</code>, as <code>generate_order_id</code> is an async function.</p>
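<p>The difference is easy to see in isolation. Here is a minimal, self-contained sketch (the variable names are illustrative) showing that a <code>Mock</code> returns its value directly, while an <code>AsyncMock</code> must be awaited:</p>

```python
import asyncio
from unittest.mock import AsyncMock, Mock

sync_mock = Mock(return_value="ORDER123")
async_mock = AsyncMock(return_value="ORDER123")

# A Mock call returns the configured value immediately
direct = sync_mock()

# An AsyncMock call returns a coroutine; the value appears only after awaiting
resolved = asyncio.run(async_mock())

print(direct, resolved)  # ORDER123 ORDER123
```

<p>This is also why mocking an async method with a plain <code>Mock</code> breaks the code under test: the <code>await</code> fails because the mock’s return value is not awaitable.</p>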
<p>Mocking the value of a dependency allows us to test specific parts of our code. This is referred to as unit testing, as we are testing by breaking our code into small units. When we test multiple units of our code together, it is referred to as integration testing.</p>
<p>In the above example, we passed <code>return_value</code> as an argument while initializing the <code>AsyncMocks</code>. In scenarios where you want these functions to raise an error, we can use side effects. For example:</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestOrderPlacementManager</span>:</span>
<span class="hljs-meta">    @pytest.mark.asyncio</span>
    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_place_order_validation_failure</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>], amount_payable=<span class="hljs-number">100.0</span>)
        manager.validate_cart = Mock(side_effect=ValueError(<span class="hljs-string">"Cart is empty. Cannot place an order."</span>))

        <span class="hljs-comment"># Call place_order</span>
        <span class="hljs-keyword">with</span> pytest.raises(ValueError, match=<span class="hljs-string">"Cart is empty. Cannot place an order."</span>):
            <span class="hljs-keyword">await</span> manager.place_order()
</code></pre>
<h3 id="heading-mocking-using-patch">Mocking using Patch</h3>
<p>Another way of mocking the value of a method is by using the <code>@patch</code> decorator:</p>
<pre><code class="lang-python"><span class="hljs-string">"""
syntax for @patch decorator
"""</span>

<span class="hljs-meta">@patch.object(target_class, "method_name", return_value=..., side_effect=...)  # return_value and side_effect are optional</span>
</code></pre>
<pre><code class="lang-python"><span class="hljs-meta">@pytest.mark.asyncio</span>
<span class="hljs-meta">@patch.object(OrderPlacementManager, 'generate_order_id', return_value="ORDER123")</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_place_order</span>(<span class="hljs-params">self, mocked_generate_order_id</span>):</span>
    manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>], amount_payable=<span class="hljs-number">100.0</span>)
    manager.create_order = AsyncMock(return_value={})

    <span class="hljs-comment"># The patched method is only invoked once place_order runs</span>
    <span class="hljs-keyword">await</span> manager.place_order()
    mocked_generate_order_id.assert_called_once()
</code></pre>
<p>The <code>patch</code> decorator inspects the target (<code>generate_order_id</code> in our case) and automatically decides whether to create a <code>Mock</code> or an <code>AsyncMock</code>, which saves effort and speeds up test writing. The decorator injects the mock as an additional argument to the test, so we add a parameter for it in the function signature. We can then use the <code>mocked_generate_order_id</code> variable to update its return value or make assertions against it.</p>
<h2 id="heading-running-all-test-cases">Running All Test Cases</h2>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pytest
<span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> AsyncMock, Mock, patch
<span class="hljs-keyword">from</span> app.managers.order_placement_manager <span class="hljs-keyword">import</span> OrderPlacementManager

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestOrderPlacementManager</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_validate_cart_with_empty_cart</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[], amount_payable=<span class="hljs-number">100.0</span>)
        <span class="hljs-keyword">with</span> pytest.raises(ValueError, match=<span class="hljs-string">"Cart is empty. Cannot place an order."</span>):
            manager.validate_cart()

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_validate_cart_with_non_empty_cart</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>], amount_payable=<span class="hljs-number">100.0</span>)
        <span class="hljs-comment"># Should not raise any exception</span>
        manager.validate_cart()

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_get_payment_mode_cod</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>], amount_payable=<span class="hljs-number">100.0</span>)
        <span class="hljs-keyword">assert</span> manager.get_payment_mode() == <span class="hljs-string">"COD"</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_get_payment_mode_online</span>(<span class="hljs-params">self</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>], amount_payable=<span class="hljs-number">0.0</span>)
        <span class="hljs-keyword">assert</span> manager.get_payment_mode() == <span class="hljs-string">"ONLINE"</span>

<span class="hljs-meta">    @pytest.mark.asyncio</span>
<span class="hljs-meta">    @patch.object(OrderPlacementManager, 'generate_order_id', return_value="ORDER123")</span>
    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_place_order</span>(<span class="hljs-params">self, mocked_generate_order_id</span>):</span>
        manager = OrderPlacementManager(user=<span class="hljs-string">"test_user"</span>, cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>], amount_payable=<span class="hljs-number">100.0</span>)
        manager.create_order = AsyncMock(return_value=Mock())

        order = <span class="hljs-keyword">await</span> manager.place_order()

        mocked_generate_order_id.assert_called_once()
        manager.create_order.assert_called_once_with(
            order_id=<span class="hljs-string">"ORDER123"</span>,
            user=<span class="hljs-string">"test_user"</span>,
            cart=[<span class="hljs-string">"item1"</span>, <span class="hljs-string">"item2"</span>],
            amount_payable=<span class="hljs-number">100.0</span>,
            payment_mode=<span class="hljs-string">"COD"</span>
        )
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746310876555/1ef8e118-6b76-4426-a5e3-44d5cdff382c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-catching-issues-with-tests">Catching Issues with Tests</h2>
<p>What good are tests if they can’t catch issues? Let us see what happens when a new modification is made to our original order placement manager, introducing a bug.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_payment_mode</span>(<span class="hljs-params">self</span>):</span>
    <span class="hljs-string">"""
    Get the payment mode for the order.
    """</span>
    <span class="hljs-comment"># Introduced bug: Incorrect condition for determining payment mode</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">"COD"</span> <span class="hljs-keyword">if</span> self.amount_payable &gt;= <span class="hljs-number">0</span> <span class="hljs-keyword">else</span> <span class="hljs-string">"ONLINE"</span>
</code></pre>
<p>Say someone modified the <code>get_payment_mode</code> function, replacing the strict greater-than condition (<code>&gt;</code>) with greater-than-or-equal-to (<code>&gt;=</code>). Orders with a zero payable amount, which should be ONLINE, will now be marked as COD. Let’s run our test cases after this change.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746311345861/4ce2e4ad-5bd9-41db-91b2-a308b3c9ae95.png" alt class="image--center mx-auto" /></p>
<p>As we can see, one of our test cases failed, and rightfully so. This might have been missed if the developer was focused on testing whether the orders are being created or not. They may have missed checking the scenario regarding payment mode. However, having unit test cases helped flag this bug early, thus resulting in less damage.</p>
<p>For this reason, in many projects, these tests are part of the CI/CD pipeline. Every code change must pass the testing pipeline before being deployed to production. It helps flag faulty code, which often gets missed when multiple people are working on the same module.</p>
<p>Here’s a <a target="_blank" href="https://github.com/void-ness/SanicTeesTesting">link</a> to the project repository. Feel free to clone it and play around.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this blog, we covered the need for developer-driven testing. We discussed how to perform unit testing of specific modules in a Python Sanic application using PyTest. <code>AsyncMocks</code> and the <code>pytest.mark</code> decorator are some of the many utilities provided in Python for testing asynchronous code. We also looked at how unit tests can help catch bugs early during development and save a lot of testers' bandwidth.</p>
<p>I hope this blog helps fill part of the void regarding insufficient coding samples for testing in Sanic. While this blog focused on testing modules, in the next blog, we will cover testing endpoints in a Sanic application. So make sure you follow me to never miss an update. If you liked this blog, please don’t forget to like it. If you have any other concerns, feel free to drop a comment.</p>
<p>Oh, by the way, what are your thoughts about writing test cases as a developer? Do you think this is something that should be handled by a tester? What are your thoughts on the same?</p>
]]></content:encoded></item><item><title><![CDATA[Speed Matters: A Deep Dive into API Optimization]]></title><description><![CDATA[In Today’s world, time isn’t just money anymore; it’s a luxury. Everything is sold with one promise: “It’s faster.” Think about it: flights over trains, watching movies in theaters instead of waiting for OTT releases, or buying YouTube Premium to ski...]]></description><link>https://blog.lakshyabuilds.com/speed-matters-a-deep-dive-into-api-optimization</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/speed-matters-a-deep-dive-into-api-optimization</guid><category><![CDATA[Resource Caching]]></category><category><![CDATA[APIs]]></category><category><![CDATA[indexing]]></category><category><![CDATA[API Optimization]]></category><category><![CDATA[Load Balancing]]></category><category><![CDATA[caching]]></category><category><![CDATA[SEO]]></category><category><![CDATA[conversion rate optimization]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 22 Dec 2024 16:30:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734884854124/1f353dcb-e294-4627-9f33-f77242e9daa8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In Today’s world, time isn’t just money anymore; it’s a luxury. Everything is sold with one promise: “It’s faster.” Think about it: flights over trains, watching movies in theaters instead of waiting for OTT releases, or buying YouTube Premium to skip ads. We’re all chasing speed. A few years ago, deliveries took days. Then came same-day deliveries. Now, with the rise of quick commerce, we’ve moved to deliveries in mere minutes. And guess what? Nobody’s complaining about things being too fast.</p>
<p>From a software perspective, the same rules apply. Even a few milliseconds of delay in your page’s loading time can drastically impact the customer experience. Everything must be optimized for a seamless user journey. While there are countless ways to optimize software for speed, this blog will focus on APIs.</p>
<h2 id="heading-what-are-apis">What are APIs?</h2>
<p>API stands for Application Programming Interface. Simply put, it’s the interface between your application (client-side) and your program (server-side).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734883507512/f578ac01-3d00-447b-8b70-43dddf3f9c18.png" alt="An illustration showing customers at a toy shop" class="image--center mx-auto" /></p>
<p>Let’s break it down with an analogy: Imagine you’re at a toy store. You ask the shopkeeper to show you toys for your niece’s sixth birthday. The shopkeeper takes a moment to think, finds toys suitable for a six-year-old, and shows them to you. Here, the toy store’s inventory is like a database, your request is the client-side query, and the shopkeeper processes and returns the results. That’s essentially what an API does.</p>
<p>Now that we know what APIs are, let’s discuss why their execution speed is crucial.</p>
<h2 id="heading-why-execution-speed-is-a-game-changer">Why Execution Speed Is a Game-Changer</h2>
<p>Imagine you’re running late for a party. You ask the shopkeeper for a gift, but they take a few minutes to show you the options. Then, when you refine your request (“Something football-related”), it takes them even longer. This delay hampers your shopping experience.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734883585426/63ca167c-86c5-44d1-bbd2-413377025891.png" alt="An illustration showing a user frustrated because of slow internet while surfing a website" class="image--center mx-auto" /></p>
<p>Similarly, imagine a user visiting your website to search for a specific medicine. A slight delay in showing results might make them abandon your app and switch to a competitor. The situation worsens if the user has a slow or limited internet connection. Poor performance impacts the user experience and, ultimately, your <a target="_blank" href="https://portent.com/blog/analytics/research-site-speed-hurting-everyones-revenue.htm">conversion rates.</a></p>
<p>Execution speed matters. Now that we’ve identified the problem, let’s talk about the solutions.</p>
<h2 id="heading-mastering-the-art-of-speed-optimization">Mastering the Art of Speed Optimization</h2>
<p>To optimize for speed, you must first identify areas of improvement. Analyze the entire timeline of a request—from when the user clicks the search button to when the application displays the results. Let’s explore some scenarios and their solutions.</p>
<h3 id="heading-load-balancing-during-high-traffic-scenarios">Load Balancing During High Traffic Scenarios</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734883658825/aa276086-b39b-4ee9-8ed0-6d301291d10f.png" alt="A toy store crowded with angry customers demonstrating a higher than expected footfall" class="image--center mx-auto" /></p>
<p>Sometimes, occasional traffic spikes can degrade performance. Imagine a toy store used to handling five customers suddenly gets 100 during a festive sale. As the shop owner, you’d hire more shopkeepers to handle the crowd. Similarly, you can <strong>horizontally scale</strong> your application by increasing the number of servers and using tools like Nginx to distribute traffic evenly. This ensures consistent user experience even during traffic surges.</p>
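<p>To make this concrete, here is an illustrative Nginx configuration sketch, assuming three application servers behind one reverse proxy (the IP addresses and port are hypothetical). By default, Nginx distributes incoming requests across the <code>upstream</code> group in round-robin fashion:</p>

```nginx
# Hypothetical setup: three app servers behind one Nginx reverse proxy
upstream app_servers {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
    server 10.0.0.3:8000;
}

server {
    listen 80;
    location / {
        # Each incoming request is forwarded to one of the upstream servers
        proxy_pass http://app_servers;
    }
}
```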
<h3 id="heading-caching-responses-for-faster-access">Caching Responses for Faster Access</h3>
<p>What if the delay wasn’t due to a lack of shopkeepers but the inefficiency in how they fetched toys? To address this, the shopkeeper could prepare a list of the most commonly requested toys, enabling them to respond faster to customer requests. Similarly, in the world of APIs, <a target="_blank" href="https://pieces.app/blog/api-caching-techniques-for-better-performance">caching</a> serves as this quick-access list. By storing frequently requested responses temporarily, caching avoids the need to recompute or refetch data each time. For instance, if you’re fetching static information, like medicine details that don’t change often, caching can significantly reduce response times. When updates occur, the cache can be purged to ensure the data remains accurate and relevant.</p>
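<p>Here is a minimal sketch of that idea in Python (the class and function names are illustrative, not from any specific framework): a small in-memory cache with a time-to-live, wrapped around a hypothetical database lookup. Production systems typically use a dedicated store such as Redis instead.</p>

```python
import time

class TTLCache:
    """Tiny in-memory cache where each entry expires after a fixed TTL."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # stale: purge and treat as a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

def fetch_medicine_details(medicine_id, cache, db_lookup):
    """Serve from cache when possible; fall back to the database on a miss."""
    cached = cache.get(medicine_id)
    if cached is not None:
        return cached                     # cache hit: no database round-trip
    details = db_lookup(medicine_id)      # cache miss: query the database
    cache.set(medicine_id, details)
    return details
```

<p>Purging the cache on updates, as described above, maps to calling <code>set</code> again (or deleting the key) whenever the underlying record changes.</p>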
<h3 id="heading-optimizing-computationally-expensive-db-queries">Optimizing Computationally Expensive DB queries</h3>
<p>Now imagine you’ve picked a big football kit but want it gift-wrapped in a specific paper. You also want to pay by credit card because you forgot to carry enough cash and your phone’s battery is dead. The shopkeeper needs to check with the packaging team for availability and the accounting team for payment options. This back-and-forth increases wait time. What if the shopkeeper already knew whether a specific toy could be gift-wrapped and which payment modes were accepted, without having to check with the respective teams?</p>
<p>One way is to store this information with the toy itself, as a note attached to it. Alternatively, you can separate concerns: the shopkeeper only helps the customer select gifts, while packaging and payment are handled at dedicated counters.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734883800367/444fb2db-7ec0-4a0d-93c9-97bd9d472398.png" alt="Comparison of database structures: The top has three separate tables labeled &quot;items,&quot; &quot;orders,&quot; and &quot;payment_modes,&quot; with arrows indicating &quot;multiple tables.&quot; The bottom combines attributes into a single &quot;orders&quot; table, with an arrow labeled &quot;less joins.&quot;" class="image--center mx-auto" /></p>
<p>In database terms, this means reducing joins by denormalizing data wherever possible. For instance, instead of repeatedly querying a “cold-storage items” table to check if an order is temperature-sensitive, you could add a “cold-order” flag directly to the orders table. Reducing unnecessary joins significantly improves query performance. Unoptimized queries are a bottleneck in many APIs, and they are easy to overlook: joining a few more tables is simpler than asking what improvements could be made to the current database schema.</p>
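<p>The trade-off can be sketched with SQLite (the schema follows the cold-storage example above and is illustrative). The normalized layout needs a join on every read, while the denormalized layout answers the same question from a single row:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized: temperature sensitivity lives in a separate table,
# so every check requires a join
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item_id INTEGER);
    CREATE TABLE cold_storage_items (item_id INTEGER PRIMARY KEY);
    INSERT INTO orders VALUES (1, 10), (2, 20);
    INSERT INTO cold_storage_items VALUES (10);
""")
joined = conn.execute("""
    SELECT o.id, c.item_id IS NOT NULL AS is_cold
    FROM orders o
    LEFT JOIN cold_storage_items c ON o.item_id = c.item_id
    WHERE o.id = 1
""").fetchone()

# Denormalized: the flag is stored on the order row itself, no join needed
conn.executescript("""
    CREATE TABLE orders_denorm (id INTEGER PRIMARY KEY, item_id INTEGER, is_cold INTEGER);
    INSERT INTO orders_denorm VALUES (1, 10, 1), (2, 20, 0);
""")
flat = conn.execute("SELECT is_cold FROM orders_denorm WHERE id = 1").fetchone()

print(joined, flat)  # (1, 1) (1,)
```

<p>The cost is that the flag must now be kept in sync whenever the underlying data changes, which is the usual denormalization trade-off.</p>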
<h3 id="heading-the-magical-wand-of-indexing">The Magical Wand of Indexing</h3>
<p>A disorganized toy warehouse makes finding specific toys time-consuming. But if the toys are sorted by age group (e.g., Shelf A for ages 1-2, Shelf B for ages 2-3), the shopkeeper’s job becomes easier. This is indexing in databases. <strong>Indexing</strong> improves query execution time by narrowing the search scope. However, it comes with a trade-off: while read operations become faster, write operations may slow down since data must be correctly placed in the index.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734883839199/2f46b6a9-9662-4329-ac08-db4e3c0e5a57.png" alt="Two people in a toy store, looking at shelves filled with various toys, including a globe, stuffed animals, and toy vehicles. A store employee stands behind the counter." class="image--center mx-auto" /></p>
<p>Indexes are especially useful for read-heavy databases but may not be ideal for <a target="_blank" href="https://stackoverflow.com/questions/11229258/query-executes-slower-after-indexes-are-created-and-dbms-stats-compute-is-used">write-heavy systems</a>. Understanding when and how to use indexing is crucial for optimizing API performance.</p>
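<p>You can watch an index change the query plan using SQLite (the table, data, and index name are illustrative). <code>EXPLAIN QUERY PLAN</code> reports a full table scan before the index exists and an index search afterwards:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE toys (id INTEGER PRIMARY KEY, age_group INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO toys (age_group, name) VALUES (?, ?)",
    [(i % 10, f"toy-{i}") for i in range(1000)],
)

query = "SELECT name FROM toys WHERE age_group = 3"

# Before indexing: the planner has no choice but to scan every row
before = str(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# After indexing: the planner narrows the search using the index
conn.execute("CREATE INDEX idx_toys_age ON toys (age_group)")
after = str(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

print(before)
print(after)
```

<p>The same experiment hints at the write-side trade-off mentioned above: every subsequent insert must now update <code>idx_toys_age</code> as well.</p>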
<h3 id="heading-identifying-the-right-metrics">Identifying the Right Metrics</h3>
<p>So far, we have talked about the problems and identified solutions. But it is equally important to verify that those solutions actually work. There are multiple metrics for measuring the performance of an API, including:</p>
<ol>
<li><p><strong>Average Duration Time:</strong> The mean time to process a request.</p>
</li>
<li><p><strong>P50 Latency:</strong> The median latency; the time within which 50% of requests are processed.</p>
</li>
<li><p><strong>P99 Latency:</strong> The time within which 99% of requests are processed.</p>
</li>
<li><p><strong>Min/Max Latency:</strong> The shortest and longest processing times.</p>
</li>
</ol>
<p>Monitoring these metrics helps ensure our optimizations deliver the desired results.</p>
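<p>These metrics are simple to compute yourself from a batch of measured latencies. A nearest-rank sketch (the sample values are made up):</p>

```python
import math

# Hypothetical request latencies in milliseconds, e.g. from an access log.
latencies = [12, 15, 18, 22, 25, 30, 45, 60, 120, 900]

def percentile(samples, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ranked = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[k]

avg = sum(latencies) / len(latencies)
p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
print(f"avg={avg} p50={p50} p99={p99} min={min(latencies)} max={max(latencies)}")
```

<p>Notice how one slow outlier drags the average far above the median; this is exactly why P50 and P99 are reported alongside the mean.</p>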
<h3 id="heading-conclusion">Conclusion</h3>
<p>To wrap up, we’ve covered what APIs are, why their execution speed matters, and some practical ways to improve it. Remember, building efficient systems is only half the job; the other half is continuously monitoring and optimizing them.</p>
<p>I hope you found this blog insightful and engaging. For those wondering about the writing style—no, I didn’t ask an AI to “rewrite this like a five-year-old.” My goal is to simplify technical concepts with relatable analogies. Thanks for reading. See you next time!</p>
]]></content:encoded></item><item><title><![CDATA[Kafka: The WhatsApp for Microservices Communication]]></title><description><![CDATA[We all lead busy lives. Some of us are buried in office presentations, while others are tackling assignments from the comfort of our beds. Yet, we always find time to catch up with each other, thanks to quick messaging apps that keep us connected. Wi...]]></description><link>https://blog.lakshyabuilds.com/kafka-the-whatsapp-for-microservices-communication</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/kafka-the-whatsapp-for-microservices-communication</guid><category><![CDATA[kafka]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[monolithic architecture]]></category><category><![CDATA[asynchronous]]></category><category><![CDATA[Kafka-python]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Thu, 03 Oct 2024 04:40:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727904196959/472a76a4-3be3-4a70-b820-67dfe210d2cb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We all lead busy lives. Some of us are buried in office presentations, while others are tackling assignments from the comfort of our beds. Yet, we always find time to catch up with each other, thanks to quick messaging apps that keep us connected. With these apps, your favorite person living across the globe is just a message away. But have you ever thought about the microservices running 24×7 inside their containers? They work tirelessly, handling all your exceptions, logging their "feelings" which you ignore until something breaks. Once upon a time, they were a big monolithic family, but some ingenious developers, in the name of optimization, split them into smaller, isolated services—microservices. Have you ever wondered how they communicate with each other? 
Enter Kafka—the WhatsApp of microservices, enabling them to chat from the safety of their individual containers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727901782837/441cde11-14dd-472a-a56d-7cb9e7542f89.png" alt="A meme showing how cloud service providers keeps pinging microservices to know their status but don't ask them how are they feeling." class="image--center mx-auto" /></p>
<h3 id="heading-what-is-kafka">What is Kafka?</h3>
<p>Think of Kafka as a WhatsApp Community Group. While the admins have the power to publish content, the other group members are focused on consuming it. In Kafka terms, this group is known as a topic. A Kafka cluster consists of multiple topics. A Kafka producer acts like the admin, producing and publishing content. The Kafka consumer, akin to a group member, subscribes to a topic to consume the published content. The messages sent are referred to as events in Kafka terminology. The concept is straightforward—producers publish events to a Kafka topic, and consumers subscribe to a topic to consume these events. Depending on the use case, multiple actions can be taken on the received events.</p>
<p>For example, consider a WhatsApp Community Group focused on planning upcoming events in a residential society. The society chairman, acting as the admin, informs other members about upcoming events. The finance head consumes these messages to handle budget allocation. The PR head starts drafting invites for society members, while the Catering head identifies catering requirements. Although the admin published a single message, different members consumed it differently. This is how Kafka serves as a brokering interface, facilitating communication between microservices.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727901881714/9bd0e312-b29e-49cd-a913-2a85405f8bb6.png" alt="An image showing how a same message is consumed differently by different people" class="image--center mx-auto" /></p>
<p>An order confirmation event might be processed by a payment-confirmation service to store payment details, an analytics service to extract information for analytics, and a logistics service to further process the order. Kafka enables all this communication to happen in real-time, concurrently. Its capabilities allow it to handle millions of such events effortlessly and asynchronously. You might be wondering, why Kafka?</p>
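<p>The fan-out pattern described above can be modeled with a toy in-memory sketch. To be clear, this is an illustration of the publish/subscribe idea, not Kafka itself, and the service names are made up:</p>

```python
from collections import defaultdict

class MiniTopic:
    """Toy in-memory stand-in for a Kafka topic: one published event
    is delivered to every subscribed handler."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

processed = defaultdict(list)
topic = MiniTopic()

# Each "service" reacts to the same event in its own way.
topic.subscribe(lambda e: processed["payments"].append(e["order_id"]))
topic.subscribe(lambda e: processed["analytics"].append(e["order_id"]))
topic.subscribe(lambda e: processed["logistics"].append(e["order_id"]))

topic.publish({"order_id": 42, "status": "confirmed"})
print(dict(processed))  # all three services saw order 42
```

<p>Real Kafka adds what this toy lacks: durable storage of events, partitioning for parallelism, and consumer groups so each service tracks its own read position independently.</p>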
<h3 id="heading-why-kafka">Why Kafka?</h3>
<p>Why not Kafka? It passes the trinity test of microservices helper services—it is highly efficient, reliable, and scalable. It enables different services to communicate with each other. Without such a communication mechanism, microservices architecture would have been deprecated long ago. How else would you transfer data between two different services running in separate places? While making inter-service API calls is an option, it comes with the limitation of how many concurrent calls you can make when you need to communicate the same information to multiple services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727903109299/b381f7cb-84a1-4611-8ec8-214e2623d05b.png" alt="A diagram showing difference between making multiple API calls vs using Kafka for communication between microservices" class="image--center mx-auto" /></p>
<h3 id="heading-kafka-in-action">Kafka in action</h3>
<p>Here's a small code snippet showing how to create a Kafka producer and consumer using Python. The producer publishes a message to a topic, and the consumer reads the message from the topic.</p>
<p><strong>Kafka Producer</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> kafka <span class="hljs-keyword">import</span> KafkaProducer

<span class="hljs-comment"># Initialize Kafka Producer</span>
producer = KafkaProducer(bootstrap_servers=<span class="hljs-string">'localhost:9092'</span>)

<span class="hljs-comment"># Send a message to the topic 'void_ness_topic'</span>
producer.send(<span class="hljs-string">'void_ness_topic'</span>, <span class="hljs-string">b'Hello, Kafka!'</span>)

<span class="hljs-comment"># Ensure all messages are sent before closing the producer</span>
producer.flush()
producer.close()
</code></pre>
<h4 id="heading-kafka-consumer"><strong>Kafka Consumer</strong></h4>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> kafka <span class="hljs-keyword">import</span> KafkaConsumer

<span class="hljs-comment"># Initialize Kafka Consumer</span>
consumer = KafkaConsumer(<span class="hljs-string">'void_ness_topic'</span>, bootstrap_servers=<span class="hljs-string">'localhost:9092'</span>)

<span class="hljs-comment"># Read messages from the topic</span>
<span class="hljs-keyword">for</span> message <span class="hljs-keyword">in</span> consumer:
    print(<span class="hljs-string">f"Received message: <span class="hljs-subst">{message.value.decode(<span class="hljs-string">'utf-8'</span>)}</span>"</span>)
</code></pre>
<h3 id="heading-for-gui-lovers-conduktor">For GUI lovers - Conduktor</h3>
<p>I get it. The terminal is cool. But sometimes, when you're five hours into debugging and still can't find the bug, you would appreciate it if reading from or publishing to a Kafka topic were as simple as clicking a button. This is where Conduktor comes into play. In simple terms, it helps you connect to a Kafka cluster and read/write content to a topic with just a few clicks. There are two ways to run a Conduktor instance locally:</p>
<ol>
<li><p>Install the GUI as a desktop app. The download link and installation steps can be found on their <a target="_blank" href="https://conduktor.io/get-started"><strong>website</strong></a>.</p>
</li>
<li><p>Alternatively, you can run a small docker-compose file and have Conduktor up and running on a local server, then access the GUI from your browser. It is as simple as that, and it is the approach their team recommends. You will find more details about setting it up <a target="_blank" href="https://conduktor.io/get-started">here</a>.</p>
</li>
</ol>
<p><a target="_blank" href="https://www.geeksforgeeks.org/how-to-create-kafka-topics-using-conduktor-tool/"><img src="https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fmedia.geeksforgeeks.org%2Fwp-content%2Fcdn-uploads%2F20220714232812%2FKafka-Topic-11.png&amp;f=1&amp;nofb=1&amp;ipt=bfe55a19aba41e39df03bc4866eaec6f8aee56b9d914935d3c0e176c74389081&amp;ipo=images" alt="A screenshot of the Conduktor application interface. The &quot;Topics&quot; section is highlighted in green on the left sidebar. The main area displays details of two topics: &quot;my-first-topic&quot; and &quot;my-second-topic.&quot; Key metrics such as partitions, count, size, and activity are shown. A &quot;+ CREATE&quot; button is located at the top right. credits - geeksforgeeks.org" /></a></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>So this is all for now. We covered the loneliness of microservices and how they stay connected with each other with the help of Kafka. Kafka makes it possible to process millions of concurrent messages between different microservices in real-time. I hope you enjoyed reading the analogy between how Kafka operates similarly to WhatsApp community groups for us. If you did, don’t forget to like this article. Do let me know in the comments below what other tech topics you would like to read about next. Till then, have a great time playing ping-pong with microservices :)</p>
]]></content:encoded></item><item><title><![CDATA[I built a Mobile App to Track Medicines Taken]]></title><description><![CDATA[In a world where our days are filled with endless tasks and distractions, remembering to take essential medication can become a daunting challenge. For my mother, this struggle became all too real, prompting the development of DoseUp - a medicine tra...]]></description><link>https://blog.lakshyabuilds.com/i-built-a-mobile-app-to-track-medicines-taken</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/i-built-a-mobile-app-to-track-medicines-taken</guid><category><![CDATA[Medicine Tracking App]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[no-code mobile app builder]]></category><category><![CDATA[Android]]></category><category><![CDATA[Mobile apps]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 21 Apr 2024 08:35:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713645228807/bdf1b927-0aea-4759-a4a3-6c4fa741d13d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In a world where our days are filled with endless tasks and distractions, remembering to take essential medication can become a daunting challenge. For my mother, this struggle became all too real, prompting the development of DoseUp - a medicine tracking mobile app.</p>
<h3 id="heading-the-problem-of-medication-management">The Problem of Medication Management</h3>
<p>As a loving son, it pained me to see my mother struggling to keep track of her medication. For her, following her medication routine was more than a daily task; it was a matter of health and well-being. With an ever-changing list of medications to take each day, the risk of missing a dose loomed large, with potentially serious health implications. Now you may wonder, why overcomplicate things? Why can't my mom just use pen and paper to mark whether she has taken her medicines for the day? Simply put, traditional methods like a calendar or pen and paper couldn't keep up with the demands and complexity of her medication routine.</p>
<p>Fueled by the desire to ease my mother's burden and ensure her well-being, I set out to create a simple, user-friendly solution – the DoseUp mobile app. Developed over a weekend, this app was designed specifically to cater to the unique needs of my non-tech-savvy mother, providing a seamless way for her to manage and track her medication with just a single touch.</p>
<h3 id="heading-the-birth-of-doseup">The Birth of DoseUp</h3>
<p>Understanding that simplicity and ease of use were paramount, I meticulously documented the necessary features and translated them into a design that prioritized intuitive functionality. Despite having limited mobile development experience, I leveraged the power of MIT App Inventor, a no-code drag-and-drop app-building tool, to bring my vision to life. Here's how the blocks looked in the App Inventor tool:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713644557030/e61ff6f8-5c45-4968-82c8-0984f6d7e134.png" alt class="image--center mx-auto" /></p>
<p>The focus on user experience was unwavering – from enlarged font sizes and buttons for effortless navigation to a simplified design that eliminated unnecessary complexities (see home screen). Data privacy was also a top priority, leading me to opt for local storage of all medication records, ensuring the security of my mother's sensitive health information. DoseUp addresses the critical issue of data privacy by storing all information locally on the user's mobile device, safeguarding sensitive health data.</p>
<h3 id="heading-key-features-of-doseup">Key Features of DoseUp</h3>
<p>One of the key features of DoseUp is its comprehensive history function, which gives caregivers and loved ones valuable insight into medication adherence. The app also includes an options screen where users can customize their medication records, adding or removing medicines as their needs evolve. Dose statuses reset automatically every day at 12:00 AM, so tracking stays seamless from one day to the next.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713644619299/5eab8b56-50af-4578-8133-3ed6d3cfe279.png" alt class="image--center mx-auto" /></p>
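<p>For the curious, the midnight reset described above boils down to a simple date check. Here is a Python sketch of the idea (the actual app is built from App Inventor blocks, and the field names below are purely illustrative):</p>

```python
import datetime

def reset_if_new_day(store, today=None):
    """Clear every dose's 'taken' flag the first time the app runs on a
    new calendar day. 'store' is a stand-in for the app's local storage."""
    today = today or datetime.date.today().isoformat()
    if store.get("last_reset") != today:
        for med in store["medicines"]:
            med["taken"] = False
        store["last_reset"] = today
    return store

# Simulate opening the app on the next day.
store = {
    "last_reset": "2024-04-20",
    "medicines": [{"name": "Vitamin D", "taken": True}],
}
reset_if_new_day(store, today="2024-04-21")
print(store["medicines"][0]["taken"])  # False: new day, status cleared
```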
<p>I have shared the MIT app inventor project file and the android apk for DoseUp in the following github <a target="_blank" href="https://github.com/void-ness/DoseUp">repository</a>. Feel free to check it out and let me know your thoughts about the same. If there's any specific feature that you would want to see in the application, do let me know by opening up an issue ticket on the above mentioned repository or by reaching out to me via email.</p>
<h3 id="heading-future-plans-and-improvements">Future Plans and Improvements</h3>
<p>After a week of testing the application with my first user - my mother - I learned something interesting: the app was not serving its purpose to the fullest. My mom kept forgetting to mark her doses in the application, so something had to be done to remind her. Looking ahead, I am committed to enhancing DoseUp by implementing push notifications that deliver timely reminders to users. This feature will further streamline medication management and ensure that my mother, and others, stay on track with their prescribed regimen. Secondly, some medicines are taken twice a day; as of now, users need to create two separate entries for such medicines. I plan to fix this by reworking the logic and introducing a way to track the frequency at which each medicine is taken.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>As we navigate the complexities of modern life, the importance of effective medication management cannot be overstated. With DoseUp, I aim to provide a simple yet powerful tool that empowers users to take charge of their health with confidence. By leveraging technology to address real-world challenges, DoseUp paves the way for a future where managing medication is no longer a burden but a seamless part of daily life. Watch this space as we continue to innovate and improve DoseUp, ensuring that the journey to better health is just a tap away. Let DoseUp be your trusted companion in the quest for wellness and well-being—because when it comes to health, every dose counts.</p>
]]></content:encoded></item><item><title><![CDATA[I Participated in a Prompt Engineering Contest]]></title><description><![CDATA[Imagine stepping into the heart of technological innovation, where the brightest minds converge to push the boundaries of what’s possible. This is exactly what unfolded at the prestigious IIT Delhi during their annual tech fest, Tryst. Among the myri...]]></description><link>https://blog.lakshyabuilds.com/i-participated-in-a-prompt-engineering-contest</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/i-participated-in-a-prompt-engineering-contest</guid><category><![CDATA[generative ai]]></category><category><![CDATA[#PromptEngineering]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[genai]]></category><category><![CDATA[Experience ]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Fri, 12 Apr 2024 16:21:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1712938531668/663f0db4-1fb5-490b-995d-ccd8f7ef5b73.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine stepping into the heart of technological innovation, where the brightest minds converge to push the boundaries of what’s possible. This is exactly what unfolded at the prestigious IIT Delhi during their annual tech fest, <a target="_blank" href="https://www.instagram.com/tryst.iitd/">Tryst</a>. Among the myriad of events, one stood out for its unique blend of creativity and technology: the ImaGenAI contest. It wasn’t just a competition; it was a celebration of human ingenuity meeting artificial intelligence, a testament to the endless possibilities when these two forces join hands. The Goal was to use AI tools to the maximum extent while solving a problem.</p>
<h1 id="heading-round-one-the-prompt-pioneers"><strong>Round One: The Prompt Pioneers</strong></h1>
<p><img src="https://miro.medium.com/v2/resize:fit:875/1*IVX2-1Bb9_d-uY-sl0evNQ.png" alt="Images created using DreamStudio by Stability AI" /></p>
<p>The challenge was simple yet intriguing: find prompts to generate a specific image. As participants, we dove into the depths of our imagination, crafting prompts that would bring pixels to life. Our efforts paid off as we emerged victorious, scoring the highest points and securing our place in the next round. This victory was not just about out-scoring others; it was about proving to ourselves that we could translate abstract concepts into visual masterpieces.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:875/1*t4ntCnl-m4V5B-EHmnugUA.png" alt="Images created using DreamStudio by Stability AI" /></p>
<h1 id="heading-round-two-the-race-against-time"><strong>Round Two: The Race Against Time</strong></h1>
<p>With the thrill of victory still fresh, we faced the ultimate test in round two. Tasked with creating a presentation and a website using only AI tools, we tackled a complex problem related to billboard advertising. The catch? We had just one hour. As a team, we harnessed AI to brainstorm and build a solution that showcased our collective ingenuity. After spending the initial 15 minutes understanding the problem, we made full use of the remaining time by dividing the work among ourselves. While a friend and I worked on generating the presentation, the third member of our team focused on creating a compelling prototype website, again by prompting AI tools.</p>
<p>Our solution revolved around a marketplace where billboard owners could list their hoardings along with relevant statistics. Our USP was a visual, map-styled discovery platform aimed at business owners looking for billboard hoardings near their target area. The map tool would let them analyze the traffic around a billboard and the impact of nearby hoardings on their marketing campaign. With a few minutes remaining on the clock, we pieced together a project that was greater than the sum of its parts.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:875/1*wjDdpHFOpSvHFYy6v0Hy1w.png" alt="A glimpse of the event venue as we were brainstorming solutions" class="image--center mx-auto" /></p>
<h1 id="heading-a-revelation-ai-as-an-enhancer-not-a-creator"><strong>A Revelation: AI as an Enhancer, Not a Creator</strong></h1>
<p>This intense hour of collaboration led to a profound realization: AI tools are incredible enhancers, capable of elevating our work to new heights. However, they cannot replace the foundational human touch that ignites true innovation. In the high-octane world of tech innovation, AI is like the ultimate power-up. It’s that extra boost that propels your ideas into the stratosphere. But here’s the catch: it’s not the game itself.</p>
<p>During the ImaGenAI contest, this truth hit home for me. We were racing against the clock, using AI to whip up a presentation and a website about billboard advertising. The tools were slick, the process was fast, but it was our human creativity that steered the ship. Think of AI as the world’s best assistant — it can sort, calculate, and even create, but it can’t dream or understand the heart behind the hustle. It’s like having a super-smart robot in a soccer match: it can kick the ball, sure, but it can’t feel the thrill of the game or the passion of the players.</p>
<p>That’s why, even though AI can enhance the work, it can’t be the foundation. The foundation is built on late nights, crazy ideas, and that little voice that says, “What if?”. As we wrapped up our project, it was clear: AI had helped us shine, but the spark? That was all inside us.</p>
<h1 id="heading-exploring-dreams-the-iit-delhi-experience"><strong>Exploring Dreams: The IIT Delhi Experience</strong></h1>
<p><img src="https://miro.medium.com/v2/resize:fit:875/1*g7Jy6VDdKwVXb6xvOnbv9w.jpeg" alt /></p>
<p>Beyond the competition, we had the opportunity to explore the IIT Delhi campus. For many engineering aspirants, including myself, setting foot on this ground is a dream come true. The campus did not disappoint, with its sprawling layout and vibrant atmosphere. Every corner told a story of excellence and aspiration, inspiring us to dream bigger and reach further. There’s something magical about being in a place that’s a beacon of aspiration for so many, including myself. It’s where the future is shaped, where ideas aren’t just born — they’re forged into reality.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:875/1*BPj7WQvOBczJpCMOjZ8cjg.jpeg" alt /></p>
<p>It was a demonstration of a hardware product through which many physics-related experiments can be performed without the need for expensive scientific instruments.</p>
<p>Adding to the excitement was an ISRO exhibition nestled within the campus. Stumbling upon the ISRO exhibition was like finding a hidden treasure. The models of satellites and rockets were not just impressive displays; they were symbols of human ambition reaching for the stars. Standing there, surrounded by the grandeur of India’s space endeavors, I felt a connection to something greater than myself.</p>
<h1 id="heading-the-wait-for-triumph"><strong>The Wait for Triumph</strong></h1>
<p>Returning home, we eagerly awaited the results, hoping each day to see our name among the winners. But as we all know, life has its own plans. Despite not placing in the top three, the experience was far from a loss. It was a journey filled with learning, growth, and the joy of pursuing our passions. In my final year of study, visiting IIT Delhi was more than a competition; it was the fulfillment of a long-held aspiration. Standing on the grounds of my dream college, I realized that some victories are not about accolades but about the journey and the memories we create along the way. And as I walked through the halls that have shaped some of the greatest minds, I knew that this was just the beginning of my adventure.</p>
<p>As my journey of wandering through the lawns of IIT Delhi comes to a close, I can’t help but reflect on the myriad of stories that each participant must hold. What’s your story? Have you ever participated in a tech fest or used AI in a unique way? Drop your stories in the comments below and let’s inspire each other with tales of technology and triumph!</p>
]]></content:encoded></item><item><title><![CDATA[Breaking the Sound Barriers - Lighting Up the Future of Alert Systems]]></title><description><![CDATA[Digital Solutions have changed our lives for good. Be it the travel apps that let us travel from one place to another or the instant messaging and video-calling applications through which you can reach out to your friends living across seven seas. Ca...]]></description><link>https://blog.lakshyabuilds.com/breaking-the-sound-barriers-with-alertify</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/breaking-the-sound-barriers-with-alertify</guid><category><![CDATA[engineering]]></category><category><![CDATA[Accessibility]]></category><category><![CDATA[iot]]></category><category><![CDATA[technology]]></category><category><![CDATA[innovation]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 24 Mar 2024 11:34:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711238259438/ac74c9f2-d807-4936-954d-876f38250ccc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Digital Solutions have changed our lives for good. Be it the travel apps that let us travel from one place to another or the instant messaging and video-calling applications through which you can reach out to your friends living across seven seas. Can you imagine even a day without such applications? Well for some of us, it has been years waiting, to fully use these technologies on their own. All because of a gap in the designing of such applications - the accessibility gap.</p>
<p>Before we start, I have a small task for you. Look around you. Pick anything. Now try to analyze that product from the perspective of a special-needs individual (blind, deaf, mute, dyslexic, or having impaired motor skills). Can they use it the same way you can? In 90% of cases, they won't be able to. This difference arises from a lack of inclusivity and accessibility in how such solutions are designed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711236786569/3c99a0b3-2167-4ba7-b6f4-054570004ab2.png" alt="An image showing the contrast between two stairs - with the right one being inclusive in nature" class="image--center mx-auto" /></p>
<p>Let’s talk about one such solution that most of us rely on for our day-to-day work - our notifications system. What is common about an incoming call on your phone, someone ringing your doorbell, and a new message alert on your smartphone? A sound is generated to alert you about the same. The “ting-tong” that grabs your attention. However such interactions are not inclusive, keeping out the deaf and hard-of-hearing individuals. To cover this gap, I present to you Alertify - a smart alert system that transforms your smart home lighting into a visual alert system. Now, imagine your room lights turning blue to alert you about your ringing doorbell. Sounds interesting right? Read ahead as we transform this idea into a reality.</p>
<h3 id="heading-what-is-the-problem">What is the problem?</h3>
<p>Most alerting systems rely on sound to get the user's attention, be it a ringing phone or the buzzing doorbell of your home. This poses a huge problem for deaf and hard-of-hearing individuals, who end up dependent on the people around them to be alerted. To better understand the problem, imagine you are a deaf person living alone and expecting a visitor. As things stand, you have to wait by the door in advance, since you can't hear the bell ring. While a notification from your favorite social media app can wait, you can't afford to miss an alert about an incoming cyclone or flood from your local government. And haven't we all seen our grandparents struggle to hear incoming phone calls? This is the pain point I plan to solve with Alertify.</p>
<h3 id="heading-why-is-this-problem-worth-solving">Why is this problem worth solving?</h3>
<p>Notifications play a huge role in our day-to-day lives. Ever since the COVID pandemic, our lives have gone digital: confined to our homes and glued to our phones, we find it hard to stay away from our devices. For some, they are a medium to stay connected with the world; for others, a means to order food, cabs, and whatnot. All of these activities are stitched together by a common thread of push notifications delivering real-time updates.</p>
<p>This brings us back to the flaw in the design of notification systems in general. Be it our phones, laptops, or doorbells, they rely heavily on our auditory senses, notifying us with sound. For deaf and hard-of-hearing individuals, this means increased dependence on others to stay alerted, or potentially missing important notifications altogether, restricting them from leading a normal life through no fault of their own. One may think only deaf people face such problems, but the affected group also includes those with temporary hearing loss, ranging from people with ear infections to Gen Z and their music-during-work obsession. You may already be thinking of alternatives to sound-based alerts, like smartphone notification LEDs or vibration motors. Continue reading as we compare those solutions with Alertify.</p>
<h3 id="heading-the-solution">The Solution</h3>
<p>The idea was simple: convert a smart home lighting system into a visual alerting system that notifies users about incoming alerts. Since many brands already sell smart bulbs, my first intuition was to configure an existing bulb for my use case. However, these bulbs can only be controlled via their own apps or smart assistants like Alexa and Google Assistant, using Bluetooth and WiFi to communicate with the user's phone. Reverse-engineering them quickly proved difficult and added a lot of friction to building a custom application to control them, effectively preventing external developers from driving innovation and finding new use cases for these products.</p>
<p>Determined to innovate, I channeled the engineer inside me and built a prototype of my own as a proof of concept, ditching the existing smart bulbs for now. After some brainstorming, I decided to make a simple LED-blinking device that listens for incoming notifications on the connected phone and blinks a particular LED based on the type of notification received. Having tinkered with Arduino in the past, I chose an Arduino UNO board along with an HC-05 Bluetooth module for the hardware side of the project. Given my limited exposure to app development, I used MIT App Inventor to build the custom mobile application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711236874780/a6aabf86-99af-49d7-a6f3-ac6ccf68edcf.png" alt="A high level overview of the working of the prototype" class="image--center mx-auto" /></p>
<p>Sounds simple, right? What I thought would be a cool weekend project ended up taking a week to complete. With no prior experience in app development, I first set out to explore no-code app development tools. This was when I stumbled upon MIT App Inventor. The only problem was that this tool is meant to be a learning resource for teaching school students about app development, and hence has limited support for complex functionality. However, thanks to its vibrant community of developers, a lot of exciting new features are possible with the help of third-party extensions. For my project, I needed to listen to incoming notifications on the user’s phone by utilizing its notification service. While this support was not built in, a wonderful notification-listening extension by <a target="_blank" href="https://community.appinventor.mit.edu/t/notification-listener-extension-open-source/19973"><strong>Taifun</strong></a> let me move forward with my project. At this point, the app was reading incoming notifications, filtering them as per my needs, and assigning each notification a numeric priority value. This data was then sent over Bluetooth to the connected Arduino board for further processing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711236904944/a0d69346-cd75-45b3-8e54-6d258846ffc8.png" alt="An image showing the snippets of final mobile app along with notification data processing steps" class="image--center mx-auto" /></p>
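<p>The app itself is wired up with App Inventor blocks rather than typed code, but the filtering and priority-assignment step can be sketched in plain C++ for illustration. Note that the package names and priority values below are my assumptions for this sketch, not the exact ones used in the app:</p>

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical priority table: higher number = more urgent.
// In the real app, the user configures these mappings themselves.
static const std::map<std::string, int> kPriority = {
    {"com.android.dialer", 3},            // phone calls
    {"com.whatsapp", 2},                  // chat messages
    {"com.google.android.calendar", 1},   // calendar reminders
};

// Returns the priority value to send over Bluetooth, or -1 if the
// notification should be dropped (its app is not on the allowlist).
int encodeNotification(const std::string& packageName) {
    auto it = kPriority.find(packageName);
    return (it == kPriority.end()) ? -1 : it->second;
}
```

<p>The key point is that only this one small integer crosses the Bluetooth link; everything heavier stays on the phone.</p>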
<p>Programming an Arduino is a whole new world. Here you are constrained by the limited memory and computing power of the microcontroller. I had this O(n²) side of me that the Arduino’s O(1) personality didn’t really approve of. Everything had to be optimized to ensure smooth, low-latency operation, which meant keeping the on-board logic as simple as possible. With this in mind, I scrapped my initial plan of processing the data on the Arduino board and instead revamped the application-side logic to process everything on the user’s phone and send only minimal encoded data - just enough to trigger a response. In simpler words, my Arduino board need not know whether the incoming notification is a WhatsApp message or a phone call. All it needs to know is whether the incoming notification’s priority is higher than the previous one, and if so, which LED to blink. Having worked with REST APIs, where the majority of the data processing happens on the server side, it was a bit challenging for me at first to minimize the data transfer and send only pre-processed information to the Arduino (the backend here). It required a shift in my mental model.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711237080877/c6eb0958-ebbe-40e8-b573-0ae3baa26168.png" alt="A pseudocode snippet showing the old versus the new optimized logic" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711237101106/8093d29d-71f7-4b30-a9be-8b86d4ec8747.png" alt="An image showing the circuit connections of arduino along with some snippets of the arduino code" class="image--center mx-auto" /></p>
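<p>On the board itself, the whole decision boils down to a single comparison: blink only if the incoming priority beats the one currently being shown. Here is a minimal sketch of that logic in plain C++ - the actual Arduino code wraps this in loop() and digitalWrite() calls, and the pin numbers and priority-to-pin mapping here are my assumptions, not the real wiring:</p>

```cpp
#include <cassert>

// Hypothetical wiring: priority 1 -> pin 11, 2 -> pin 12, 3 -> pin 13.
int ledPinFor(int priority) {
    return 10 + priority;
}

// Decide whether a newly received priority should preempt the current
// alert. Returns the pin to blink, or -1 to ignore the notification.
int nextLedToBlink(int currentPriority, int incomingPriority) {
    if (incomingPriority <= currentPriority) {
        return -1;  // not urgent enough to interrupt the current alert
    }
    return ledPinFor(incomingPriority);
}
```

<p>An O(1) check like this is about all the UNO has to do per message, which is what keeps the alert latency low.</p>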
<p>With my Android app ready and the Arduino code successfully compiled, it was time to fit the final piece of the puzzle - making the connections. Armed with jumper wires, breadboards, and a powerful microcontroller, I set out connecting them like little pieces of Lego (no Jerry Rig was harmed in the process). As I watched the prototype come together in front of my eyes, I felt like the doctor from Toy Story 2 fixing Woody, working with my small yet powerful tools. With all the pieces together, it was time for the most important part - testing the system. As power began flowing through the Arduino, lighting up the Bluetooth module’s blinking LED, I did my final checks for loose connections and fixed them. I brought up my phone, connected it to the system, and waited. Waited for an incoming message. I had never been this anxious waiting for someone’s message. Suddenly I received a message on WhatsApp, and instantly I could see the white LED on my prototype blinking. Finally! My prototype was working just as I expected it to.</p>
<p>Was it my eureka moment? Had I broken the ceiling? Okay, enough exaggeration. Let’s get back to the ground. I had made a minimalistic project that listens to incoming notifications on my phone and blinks the respective LEDs, alerting me without my having to look at the phone or rely on its sound-based notifications. Not only that, you can assign a priority to different types of notifications - an incoming phone call, for example, gets a higher priority than a reminder from your calendar app. This behavior can be customized using the custom-built app. The user can even filter which notifications to listen for. Working on something important and only want to be alerted for phone calls? Alertify can help you with that.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711237165629/f6089913-fa36-476f-8861-f0faee24c7ad.png" alt="The final look of the prototype - v1" class="image--center mx-auto" /></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/SE7_NB2r5Pg">https://youtu.be/SE7_NB2r5Pg</a></div>
<h3 id="heading-why-this-why-not-that">Why this? Why not that?</h3>
<p>For my critical-thinking readers, let me address the question you have all been waiting for - why Alertify? If you have read this far, at some point you may have thought of existing solutions that could serve the same purpose as Alertify. Well, let me tell you, you are not alone. While researching this problem, my first step was to identify the existing solutions and how they try to solve it for deaf and hard-of-hearing individuals. One of the most exciting alternatives is the Apple Vision Pro (AVP), which is capable of real-time speech-to-text. Imagine you are a deaf person talking to your friends: the AVP can listen to the conversation and visualize the speech in front of your eyes, somewhat like real-time captioning. This is not limited to speech - it can also detect sounds like running water, someone knocking at your door, or the “ting-tong” of your phone. Even better, it can show an on-screen alert whenever you receive a new notification on your phone. However, as of now the tech is very new, and you can’t go around wearing the device 10 hours a day without straining your eyes. Then come the smartwatches that people wear throughout the day. They come fitted with vibration motors and can use them to alert you about incoming notifications. Though they serve well for various other applications, like tracking your health metrics and navigation, they fall short on comfort when worn for long periods. At home, where users value comfort the most, they might not prefer wearing any device that alerts them at the cost of that comfort. This is where Alertify shines. You don’t have to wear anything on your body. A device like Alertify can light up a corner of your room and serve the same purpose, while also adding to the room’s aesthetics.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711237808023/315dd84a-01cb-45fc-b375-003d48c9123b.png" alt="An illustration showing alertify weighs heavier than its alternatives" class="image--center mx-auto" /></p>
<p>At last, let’s talk about the notification LED built into some smartphones. It sits at the top of the phone and blinks constantly in the event of an incoming notification; in newer phones it has been replaced by the Always-On Display, or AOD. The major drawback of such an alerting system is that it requires the user to keep the phone constantly in sight. What if your phone is inside a drawer, on a faraway table, or in another room? To tackle this problem, I plan to enhance Alertify by shifting from Bluetooth-based to WiFi-based communication. That way, a network of interconnected smart lights can transform your whole home into one big alerting system. So if you are working in the kitchen and your phone is in the bedroom, the kitchen lights can change color to alert you about important incoming alerts on your phone. This behavior, too, can be configured by users as per their preferences. This is what Alertify aims to achieve, and it is what makes it stand out from the other alternatives currently on the market.</p>
<h3 id="heading-the-road-ahead">The Road Ahead</h3>
<p>While Alertify, in its current form, is far from ready for day-to-day use, it is a first step towards making alerting systems inclusive for deaf and hard-of-hearing individuals. While building the prototype, I decided to use the resources available at my disposal. To make it compact and increase the communication range, any WiFi-enabled Arduino NANO-class microcontroller can be used. Secondly, an RGB LED in place of multiple single-color LEDs would give end users more customization. All of this could then be packed neatly inside a smart-bulb-like 3D-printed enclosure along with an AC-to-DC converter. However, I would like to bring your attention back to why I built this prototype in the first place. Big brands that have perfected their hardware have kept the technology for interfacing with their products private, preventing developers from building on top of their devices. For example, if you have a smart bulb from brand XYZ, you need to download that same brand’s app to control your device. Now zoom out a little to see the bigger picture: there is no common mechanism to control digital devices from different brands through a single app. You don’t have one app to control all your smart bulbs from brands A, B, and C. This problem is not limited to smart bulbs; it extends to other smart home devices like doorbells, door locks, ACs, smart plugs, and switches. A feature that listens to your phone notifications and changes light colors is fairly easy to implement, yet while brand A may support it through its app, brand B may not - even though the hardware of both products could support it. This is why we need open networks for controlling our devices. Imagine if we had an open network that all these individual devices used to communicate with each other. Instead of juggling multiple apps, one could use a single app to control all their devices. Not only that, it would allow community-driven solutions to flourish and drive forward innovative use cases for such devices. This is the change that open-sourcing such systems can bring. Manufacturers could then focus on enhancing their hardware, while consumer-focused apps work on bringing exciting features to their large consumer base. This would, in turn, help drive up the sales of smart home devices. This is what I envision the future to be, and I call this setup the ONDC, or Open Network for Digital Communication.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711237834788/207b0719-d830-4a00-8995-9291901d2ff1.png" alt="A high level overview of how ONDC may look at network level" class="image--center mx-auto" /></p>
<p>That marks the end of Alertify. However, I do hope this article opens the gates of your mind to thinking about how inclusivity can be brought to many more things around us. If you want to try this project, feel free to build it on your own. Here is a <a target="_blank" href="https://github.com/void-ness/Alertify">GitHub repository</a> containing the Arduino code and the APK file, along with its source code, which can be imported into MIT App Inventor. Let us build an inclusive world together. If you enjoyed reading this article, do show your support by giving it a like. Lastly, what do you think about such open networks from a security point of view? Let me know your thoughts in the comments below.</p>
]]></content:encoded></item><item><title><![CDATA[From STEM to St. Stephen’s: A First-Timer's Adventure in DU's Heartland]]></title><description><![CDATA[When I was in 10th standard, I had a hard decision to make - Science or Commerce? A decision that was to shape the next 10 years of my life. While I was inclined towards taking science because of how much I enjoyed STEM, there was a part of me that w...]]></description><link>https://blog.lakshyabuilds.com/from-stem-to-st-stephens-a-first-timers-adventure-in-dus-heartland</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/from-stem-to-st-stephens-a-first-timers-adventure-in-dus-heartland</guid><category><![CDATA[campus culture]]></category><category><![CDATA[NorthCampus]]></category><category><![CDATA[engineering]]></category><category><![CDATA[coding]]></category><category><![CDATA[student]]></category><category><![CDATA[College life]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Thu, 14 Mar 2024 16:04:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1710430545820/367576d5-850a-41f0-a891-84c142815d8a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I was in 10th standard, I had a hard decision to make - Science or Commerce? A decision that was to shape the next 10 years of my life. While I was inclined towards taking science because of how much I enjoyed STEM, there was a part of me that was inclined towards living my student life as a DU student and taking commerce. Though I went ahead with taking science and joined an engineering college, In my final year I got a taste of this DU life while participating in a coding event. Read ahead as I take you down the streets of the north campus.</p>
<p>Through my stereotypical lens, DU was a place where you could find students singing, dancing, acting, rallying, protesting - literally anything but studying. Compare that with the image of engineering colleges: students running from class to lab to meet that minimum 70% attendance criterion and studying day and night before exams. It’s a parallel universe in many ways. So when I joined my engineering college, I had this goal of experiencing DU student life at least once. I had heard from my commerce friends about North Campus, which is home to many different DU colleges. Of all of these, a few colleges stood out to me - SRCC and St. Stephen’s - because of their roof-touching admission cutoffs, selecting only the top 0.1% of students. If I were ever to go to a DU college, it had to be one of them. Now, I am a bit of a socially awkward person, so going to these colleges during a fest night was a no-go for me. Months passed by as I waited for the right opportunity. Then one fine day, the raven at Unstop delivered an opportunity I couldn’t miss - Code-a-thon 2023, a coding competition organized by the Computer Science <a target="_blank" href="https://www.instagram.com/compsoc.ssc/">Society</a> of St. Stephen’s College. Considering that my 3 am thoughts are still not worthy of a TEDx talk, I could not have asked for a better opportunity to explore the campus of St. Stephen’s. Knowing that the famous Bollywood film Rockstar was shot on this campus only added to my excitement about visiting the college. Code-a-thon was divided into two rounds: an online round comprising MCQs, followed by an offline coding round where the task was to solve challenging coding problems under time constraints. An event like this is very common on any engineering campus, but the fact that it was being organized inside a DU college made things more interesting for me. With the intention of networking with new people, I registered for the opportunity and made it to the offline round.</p>
<p>While I tried to bribe my friends with some momos to accompany me to the college for the offline round, the red chutney spiced things up against me. St. Stephen’s is also known for its strict entry policy for outsiders, which made sure I was alone on this journey. I was very excited, as this was my first time exploring North Campus. The moment you come out of Vishwavidyalaya metro station, there’s this constant chanting of e-rickshaw drivers announcing that you have made it to the hub of all the colleges. The college was roughly a 20-minute walk from the metro station. I went all in and decided to walk to the venue instead of taking an e-rickshaw. Walking past lavish bungalows, I was accompanied by an army of monkeys swinging down the street with me. As I walked, I just hoped that none of these cute yet terrifying animals would attack me. It also made me wonder how DUites tolerate them daily. What was supposed to be a 20-minute walk turned into a 30-minute stroll, with me stopping in between to observe a little monkey casually eating chips after tearing open a packet. Unlike some humans, this one went ahead and threw the wrapper in a dustbin (I wish I had recorded that 😭).</p>
<p><a target="_blank" href="https://www.duupdates.in/wp-content/uploads/2020/09/1506160094phpkRW80Q.jpeg"><img src="https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fwww.duupdates.in%2Fwp-content%2Fuploads%2F2020%2F09%2F1506160094phpkRW80Q.jpeg&amp;f=1&amp;nofb=1&amp;ipt=93a2b02ce738a85a92f80c31d8e029f268e1bbe58ff1f9a836d940b615471ed2&amp;ipo=images" alt="DU ADMISSION: ST STEPHENS COLLEGE DELHI UNIVERSITY | Details here" /></a></p>
<p>Finally, I made it to the iconic road from where St. Stephen’s looks the most magnificent. To the disappointment of my fellow Instagrammers, it is constantly guarded to prevent people from photographing the building - I am not sure why. I then made my way onto the campus through the main gate, where I met some fellow participants as we waited for the event coordinator to guide us to the venue. The lush green campus, with all kinds of birds chirping, welcomed us in, showing us the way through its blend of Gothic- and colonial-style buildings. We passed some classrooms, and to my surprise, students were actually studying inside them. Apart from that, the atmosphere was everything I had dreamt of in a college: a few students casually dancing to melodious tunes, others having a jamming session, and - the cherry on top - the theatre kids trying to perfect their storytelling. What makes this college stand out is the international students it attracts for some of its courses. While I have worked with people from across the globe in the past, I have always wondered about the impact such exposure can have on one’s personal growth. It surely resembles the experience of studying abroad in terms of having a diverse peer circle.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710427263407/93bcb2f3-f97e-4328-82c7-24367638db50.png" alt class="image--center mx-auto" /></p>
<p>Inside the event, I networked with fellow participants while the organizers were busy with last-minute preparations. Being a final-year student, I felt nostalgic interacting with my juniors, hearing them rant about their academic pressure and worry about future placements. After talking to a bunch of enthusiastic folks about the latest technology trends, I realized something: though we were not pursuing the same course, we shared the same spark for technology. The room was filled with discussions ranging from various programming languages to the use of AI tools in our day-to-day lives. This shows that technology knows no boundaries. Seeing the up-to-date computing resources in their labs also showed the college authorities’ commitment to upskilling their students in emerging technologies. My day was made when I met a fellow participant in his first year, pursuing a Bachelor of Science in Physics from DU. His interest in quantum computing and his ultimate goal of becoming part of the CERN lab stood out to me. It is my biased take, but there’s an engineer hidden in all of us.</p>
<p>After the event, we had some complimentary snacks and went for a stroll around the campus. On my way out, I met an event coordinator. As we walked back talking about the changing placement scenario for the current batch, I got some scoop about their most popular tea stall - “Sudama Ki Chai” - and the reason behind its over-the-roof success (☘🤫). This was where our paths diverged, and I took an e-rickshaw for the remaining journey to the metro station. The final thing on my checklist was fulfilled as I crossed a group of protesting students.</p>
<p>If someone were to color-code the life of a DU student, I am sure the result would be a broad spectrum of colors. There is so much it has to offer. At the end of the day, the choice is yours whether you soak yourself in the brights or indulge in the darker shades. I went back home having crossed another item off my goals list. While my phone’s gallery was full of the aesthetics of North Campus, my heart was filled with the wholesome experiences I had throughout the day. Winning or losing aside, what matters most to me is the learning that comes from the process. I was delighted to meet so many new people and learn about their unique experiences. The engineer inside me became more appreciative of the different mindsets others can bring to the table, and the 10th-grader inside me broke the stereotypical lens he had formed about the life of a DU student. Oh, did I also mention that I clinched the second prize in the event? If you feel like congratulating me, feel free to do so by giving this article a heart. Have something more to add about DU student life? Drop it in the comments - I would love to read it.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring India's Digital Rails - DPG and DPI]]></title><description><![CDATA[India is changing and this new change won't be limited to any specific part of the country. It is going to travel through the network of Digital rails - DPG and DPI and reach the remotest parts of our country. I got an opportunity to experience this ...]]></description><link>https://blog.lakshyabuilds.com/exploring-indias-digital-rails-dpg-and-dpi</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/exploring-indias-digital-rails-dpg-and-dpi</guid><category><![CDATA[DPG]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[technology]]></category><category><![CDATA[DPI]]></category><category><![CDATA[DPGDialogues]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Sun, 17 Sep 2023 08:30:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694879181027/1ac7b3fb-fa05-47c6-974a-e1a12b59cc6e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>India is changing and this new change won't be limited to any specific part of the country. It is going to travel through the network of Digital rails - DPG and DPI and reach the remotest parts of our country. I got an opportunity to experience this change in the first edition of DPG dialogues. If you are a student who wants a ticket to this Digital train of innovation, take a seat and keep reading as I take you through a trip you won’t regret taking.</p>
<p>But before moving ahead, let's understand what DPG and DPI are.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694876874674/f15a2de2-b2c0-4b4c-8b7e-869eaeb95962.png" alt class="image--center mx-auto" /></p>
<p>DPG stands for Digital Public Goods, whereas DPI stands for Digital Public Infrastructure. Being open source, interoperable, and scalable are some of the key qualities you can find in them. They are projects designed to solve population-scale problems - examples include Aadhaar, UPI, DigiLocker, and the recently popularised ONDC. The successful G20 New Delhi Leaders' <a target="_blank" href="https://www.g20.org/content/dam/gtwenty/gtwenty_new/document/G20-New-Delhi-Leaders-Declaration.pdf">Declaration</a> mentioned building a Global Digital Public Infrastructure Repository (GDPIR), which will enable access to such solutions at a global level.</p>
<h2 id="heading-dpg-dialogues">DPG Dialogues</h2>
<p>The event came against the backdrop of a successful G20 summit. It served as a platform to bring together all the stakeholders of the DPG ecosystem - creators, contributors, and consumers - and was attended by many pioneers of India’s digital ecosystem: <a target="_blank" href="https://www.linkedin.com/in/rssharma3/">Dr. R.S. Sharma</a> - ex-CEO of NHA &amp; UIDAI, <a target="_blank" href="https://www.linkedin.com/in/abhisheksinghias/">Shri Abhishek Singh</a> - MD &amp; CEO of Digital India Corporation, and <a target="_blank" href="https://www.linkedin.com/in/kumarvimal/">Vimal Kumar</a> - founder and CEO of Juspay, to name a few. It was organized by <a target="_blank" href="https://www.linkedin.com/company/samagra-transforming-governance/">Samagra</a>, who, together with other partners, put together this one-of-its-kind forum.</p>
<p>The event agenda comprised fireside chats with distinguished leaders of the ecosystem and three tracks covering Governments, Ecosystems, and Markets. Each track consisted of a well-balanced panel where every panelist had unique insights to share, on topics ranging from community building in the DPG ecosystem to a projected market potential of $100 billion by 2030.</p>
<p>While the whole 7+ hours of the talks can't be described in words, here are some of my key insights from the forum.</p>
<p><img src="https://pbs.twimg.com/media/F5wY_SDaYAA5ZMO?format=jpg&amp;name=large" alt="An image from the DPG Dialogues - Markets Track" /></p>
<p><em>(Panel for markets track at DPG Dialogues)</em></p>
<h2 id="heading-beckn-protocol">BECKN Protocol</h2>
<p>In the simplest words, think of the BECKN protocol as a language that can be used to transact with one another - a language that allows a buyer to connect with a seller and trade things. <a target="_blank" href="https://www.linkedin.com/in/warpcoderavi/">Ravi</a>, who currently heads FIDE (formerly BECKN), shared his insights on how we need to shift our mental model from platforms to networks. As of now, if we have something (e.g. money) and want to purchase something (e.g. an EV charging slot), the first thing that comes to mind is, "Is there an app for this?". How about we ask instead, "Is there a network that can fulfill our request?". In that case, you don't care which app you use. This is the kind of mental model needed among developers who build solutions at the protocol level, which can then be used by any number of customer-focussed applications. One thing that stood out to me was that in such population-scale projects, scalability cannot be an afterthought. It has to be a key component of the design from day 1. The BECKN protocol has been at the root of ONDC, which aims to democratise eCommerce.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694870264602/01f10076-c94a-49c7-8fe7-c46f412ead61.png" alt class="image--center mx-auto" /></p>
<p><em>(Ravi Prakash - Head of Architecture and Technology Ecosystem at FIDE)</em></p>
<h2 id="heading-bhashini">BHASHINI</h2>
<p>Ours is a land of cultural diversity. We take pride in the different cultural identities that live together in harmony - following different customs, eating different food, and speaking different languages. Bhashini aims to cater to such an audience by building speech-analysis tools that can identify and process this diverse set of languages. It will be a victory for technology when it reaches the millions of Indians who don't speak English. Imagine if a farmer in Odisha could check the prices of fertilizers by sending a voice note in his native language to a voice assistant: on the backend, the system receives the audio, converts it into text, finds a response, and sends back a voice message - again in the farmer's native language. Training such language models in native languages is an uphill task.</p>
<blockquote>
<p>Bhashini aims to build a National Public Digital Platform for languages to develop services and products for citizens by leveraging the power of artificial intelligence and other emerging technologies.</p>
</blockquote>
<p>The Bhashini team has been spending nearly INR 60-70 crore on collecting data in all these different languages. The future holds many applications for voice-based search technology, and <a target="_blank" href="https://www.linkedin.com/in/amitabh-nag-56039b5/">Amitabh Nag’s</a> talk was a testament to the developments currently unfolding at Bhashini.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694871414303/46bcb334-456d-405d-aadb-20e2cbacadd4.png" alt class="image--center mx-auto" /></p>
<p><em>(Left to right: Nitin Kashyap - Head of Product at Samagra, T. Koshy - MD &amp; CEO at ONDC, Amitabh Nag - CEO at Bhashini)</em></p>
<h2 id="heading-unlocking-the-power-of-communities">Unlocking the Power of Communities</h2>
<p>Here's the part you have all been waiting for. Read ahead to know how you can prove your skills and get a chance to apply your knowledge to solving real-world problems. Nowadays, with job openings getting bombarded with thousands of applications within minutes, it has become quite difficult to stand out on a resume alone. With changing times, the recruitment process is bound to change sooner or later.</p>
<p>One way to get the much-needed exposure &amp; experience is to contribute to Digital Public Goods. The organizations behind such projects are looking for enthusiastic contributors who can shape future developments. If they want to hire, instead of carrying out the usual process of rolling out job posters they can easily identify the top contributors and directly reach out to them to fulfill their needs. In addition to that, these contributions serve as a testimonial to your skills.</p>
<p>This was a key topic touched upon in the community track of the event. Later in the talk, <a target="_blank" href="https://www.linkedin.com/in/warpcoderavi/">Ravi</a> mentioned how people nowadays appreciate instant gratification in return for their work. CodeForGovTech addresses this with its DPG Community toolkit. Mr. <a target="_blank" href="https://www.linkedin.com/in/rahul10100/">Rahul Kulkarni</a> - Chief Technologist at Samagra - raised the curtain on the C4GT <a target="_blank" href="https://www.codeforgovtech.in/community-program">Community Program</a>. It aims to build an ecosystem of developers who contribute regularly to DPGs. Understanding the problems people face while contributing, they prepared a framework for organizations as well as contributors: project owners learn how to structure their repositories to welcome contributors, while developers get their efforts recognized and rewarded through a point-based system. As you climb the ladder, you get access to rewards such as goodies, community recognition, and even potential job opportunities inside the ecosystem.</p>
<p>Here's a fact, I got an opportunity to attend this exclusive event because of my community contributions. As a student myself, I feel such opportunities provide the right mix of mentorship and exposure one needs to implement their knowledge into solving real-world problems.</p>
<p><img src="https://media.licdn.com/dms/image/D4D22AQH4M84HPgvOWA/feedshare-shrink_1280/0/1694596051293?e=1697673600&amp;v=beta&amp;t=zAc7hElBu_I8ovbM5Z6P4H2pTzsY1SG6H0UisQRVrA4" alt="No alt text provided for this image" /></p>
<p>These were some of the lessons I learned from the insightful talks during the event. After the event, I was fortunate enough to get an opportunity to interact with <a target="_blank" href="https://www.linkedin.com/in/kumarvimal/">Vimal</a> - Founder at Juspay, where he shared his guiding principles in life. The conversation was so full of wisdom that it deserves a blog of its own, but here are some insights from the man behind Namma Yatri - a popular auto-booking app.</p>
<h2 id="heading-the-aesthetics-of-designing-a-solution">The Aesthetics of Designing a Solution</h2>
<p>While the conversation started with appreciating the beauty of Rust and Ruby as languages, one thing led to another, and Vimal shared his views on focusing on aesthetics while creating something - e.g. a song, a painting, or even a dashboard. He shared his interest in solving mathematical problems and how he discovered the art of aesthetics while learning to play the piano. Just as he focused on the aesthetics of the sound while playing, instead of thinking subjectively, he suggested thinking aesthetically while designing solutions too. Be it a piece of code or a system, think about the aesthetics and you are on the path to success. And this skill is not something that can be taught; it comes from within.</p>
<p>He later went on to share his views about going into the depths of a topic. Depth can be seen from two perspectives:</p>
<ol>
<li><p>Depth of Understanding - how deeply you understand a particular topic or, in simpler words, your clarity about its fundamentals.</p>
</li>
<li><p>Depth of Expression - how easily you can express your knowledge about that topic to others.</p>
</li>
</ol>
<p>For a dancer, it is a combination of both that makes them great. In our lives too, we can adopt a similar approach. One of the key learnings for me was his advice about trying new things.</p>
<blockquote>
<p>Whenever you are unsure about a particular idea, just build a prototype of it and release it to a small set of users. Even before releasing it to others, be your first user and ask yourself: would you be willing to use it?</p>
</blockquote>
<p>His clarity about how some ideas sound good but might not be practical reflects in the way he works. Getting to interact with such visionaries and exchange ideas is the opportunity of a lifetime. Vimal's humble nature and zeal to interact with the students present surely taught me a lesson in humility that I will never forget.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694874805179/14fceee7-7d40-4539-8757-109775be34f5.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-concluding-note">Concluding note</h2>
<p>I went to the event with a mind full of curiosity and an appetite to learn, and came back home with a stomach full of great food and a bag full of pearls of wisdom that I am going to share as widely as I can. The event allowed me to network with like-minded people, understand different perspectives on solving real-world problems, and gather enough fuel to drive my inner curiosity to learn more. The impact that people like you and me can make together will help us achieve the vision for our country. What are you waiting for? Head over to the <a target="_blank" href="https://www.codeforgovtech.in/community-program-projects">community page</a> and get involved today!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694875824924/445c2769-eb5a-456e-9c1f-bfee85aa13a0.png" alt class="image--center mx-auto" /></p>
<p><em>(Left to right: Bhavya Berlia - Senior Associate at Samagra, Gaurav Goel - Founder &amp; CEO at Samagra)</em></p>
<p>If you enjoyed reading the blog, don't forget to like it. Feel free to <a target="_blank" href="https://twitter.com/void_stack">connect</a> with me and share your opinion about the Digital Rails. If you have any suggestions, do drop them in the comments below. Lastly, if you are interested in knowing more about the talks, here's a live <a target="_blank" href="https://www.youtube.com/watch?v=PqBLrNNCq4s">recording</a> of the event.</p>
<p>Yours truly,<br />Void-ness</p>
]]></content:encoded></item><item><title><![CDATA[From Hacktoberfest to C4GT - Where it all started | part 1]]></title><description><![CDATA[I was a fresher in college, exploring web development or in simpler terms HTML, CSS and JS. I Got to know about Hacktoberfest. Made a few contributions to some small open-source projects and got my goodies for participating in Hacktoberfest. I update...]]></description><link>https://blog.lakshyabuilds.com/from-hacktoberfest-to-c4gt</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/from-hacktoberfest-to-c4gt</guid><category><![CDATA[C4GT]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[#hacktoberfest ]]></category><category><![CDATA[gsoc]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Thu, 22 Jun 2023 21:34:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687468720107/288a147f-51d2-4a23-b514-9c60e630cfa7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was a fresher in college, exploring web development or in simpler terms HTML, CSS and JS. I Got to know about Hacktoberfest. Made a few contributions to some small open-source projects and got my goodies for participating in Hacktoberfest. I updated my LinkedIn bio with "open-source enthusiast". Life was going well. And then I came across GSoC and understood what open source actually meant. My bubble regarding open source burst very early. It is much more than submitting 4 PRs to get a t-shirt.</p>
<p>After being rejected in the GSoC'21 proposal round, my gloomy eyes fell upon a poster of <a target="_blank" href="https://www.codeforgovtech.in/">C4GT</a> edition 1, just as the closing ceremony was about to be conducted. Let me tell you one thing about me: I am a very curious person. The first thing I did was join the C4GT Discord server. The program had ended, but I went ahead and messaged on the general channel, asking if there was any way I could contribute. Little did I know that this was the start of something magical that would keep me under its spell for the next year. So what is this C4GT?</p>
<h3 id="heading-what-is-it">What is it?</h3>
<p>C4GT, as some of you might know, stands for "Code for Gov Tech". It is an initiative by <a target="_blank" href="https://tech.samagragovernance.in/">Samagra-X</a> to build a community of developers who can contribute to Digital Public Goods (DPGs). If that wasn't clear, UPI and DigiLocker are some prominent examples of DPGs. These examples were enough to motivate me to learn more about C4GT. I did the most predictable thing: reached out to one of the contributors to understand more about the program. She was kind enough to tell me about her experience and what the application process was like. One may think this wasn't needed, as most of the info is already out there; I will come back to why reaching out to people is important. However, this wasn't it. My curiosity knew no limits, and I made it my plan to become a part of this community.</p>
<p>I spent weeks wandering the Discord server, hopping from channel to channel, solving the doubts of fellow beginners who, like me, came wandering in with questions about C4GT. Finally, after two months in the now-inactive server, I saw my ray of sunshine. One of the projects had some good first issues up for grabs, and my curious mind saw this as an opportunity to engage with the community. I took up the issues. And then came the most difficult part for me: actually working on solving them. In my defense, I was a student with intermediate knowledge of web development trying to solve an issue that needed sufficient knowledge of building admin dashboards. While I was able to solve one issue - related to unwanted dependencies in the project - and get the PR merged, I struggled to find a fix for the second one. Due to my time commitments, I did what sounded best to me at that point: unassigning myself from the issue. I felt bad that I could not complete the assigned task. However, instead of going inactive and deserting the issue, I replied promptly when I realized I wouldn't be able to solve it at that point. But this experience of getting my first PR merged into an actual open-source project was enough to ignite my interest. The little contribution that I made turned out to be super helpful in the future.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687469230455/c90ce98f-8a0d-4693-ac59-a9c85b83cd92.jpeg" alt class="image--center mx-auto" /></p>
<p>Finally, after months of wandering, I found my agenda: grab a spot in the next edition of C4GT. I decided not to let go of this opportunity, whatever it took. While 90% of my motivation came from the fact that I would get a chance to contribute to a DPG project, the remaining 10% came from the amazing stipend the program carried with it (around 75k INR in the first edition). I had just completed an internship where I got paid 10% of the C4GT stipend for 3 months of work. The fact that I could potentially earn 10 times my internship stipend in just 2 months was a catalyst for my aspirations. I prepared a multi-step strategy.</p>
<ol>
<li><p>Keep an eye on the Discord channel for any program-related announcements.</p>
</li>
<li><p>Get on a call with a previous year's contributor to understand more about the program.</p>
</li>
<li><p>Learn new skills, so that I don't have to do that digital walk-of-shame on any assigned issue ever again.</p>
</li>
</ol>
<p>These were my top three priorities while I waited patiently for the summer of 2023 and the next edition to begin. Did I apply for the second edition? Did the strategies I sowed last year reap the sweet fruit of selection, or did they prove futile? Stay tuned to my C4GT blog series as I share my unfiltered experience with C4GT.</p>
<p>Yours Truly<br />Void-ness</p>
]]></content:encoded></item><item><title><![CDATA[What is C4GT and how it is changing the open-source culture in Indian Colleges]]></title><description><![CDATA[Are you a fan of open-source and appreciate the role of programs such as Google Summer of Code (GSoC), Linux Foundation Mentorship (LFX), and Outreachy in promoting the culture of open-source amongst the students by incentivizing their contributions?...]]></description><link>https://blog.lakshyabuilds.com/what-is-c4gt-and-how-it-is-changing-the-open-source-culture-in-indian-colleges</link><guid isPermaLink="true">https://blog.lakshyabuilds.com/what-is-c4gt-and-how-it-is-changing-the-open-source-culture-in-indian-colleges</guid><category><![CDATA[C4GT]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Open Source Community]]></category><category><![CDATA[DPG]]></category><dc:creator><![CDATA[Lakshay Gupta]]></dc:creator><pubDate>Tue, 20 Jun 2023 21:29:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687296476153/9997ff1d-7133-4ef5-b195-479645802a10.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are you a fan of open-source and appreciate the role of programs such as Google Summer of Code (GSoC), Linux Foundation Mentorship (LFX), and Outreachy in promoting the culture of open-source amongst the students by incentivizing their contributions? Then you will love C4GT.</p>
<p>Code for Gov Tech, or C4GT, is an initiative by <a target="_blank" href="https://tech.samagragovernance.in/">SamagraX</a> to create a community that can build and contribute to global Digital Public Goods. Sounds fancy? Let me explain in simpler terms. It is a 2-month-long coding program where the goal is to contribute to different projects under the mentorship and guidance of industry experts. The projects, referred to as <a target="_blank" href="https://www.codeforgovtech.in/digital-public-goods">DPGs</a> above, are built to be used by a large number of people, and they are all open source.</p>
<h3 id="heading-what-is-the-driving-force-behind-c4gt"><strong>What is the driving force behind C4GT?</strong></h3>
<p>The plan is to build a community of students and working professionals who contribute actively to the development of software products that governments can use at a large scale. People around the world appreciate open-source projects. It is widely said that many big projects are built upon open-source projects maintained by small groups of contributors. With open source, bugs can be easily caught and rectified, and you can integrate such projects into your own, making modifications wherever needed, because the whole code base is available to you. Compare that to hiring an external agency to build a product to your specifications: not only will it take more resources, but even a small addition to the earlier requirements will lead to multiple rounds of negotiation. And this is where the need for using and building open-source projects comes in. With DPGs, organizations like SamagraX plan to fill this gap. With programs such as C4GT, they aim to spread awareness about the open-source culture and its benefits, and to encourage people to contribute to more such projects.</p>
<h3 id="heading-what-does-it-mean-for-college-students">What does it mean for college students?</h3>
<p>We are currently living in an era where breakthrough innovations in Artificial Intelligence are powering a range of tools good enough to automate complex tasks, from building websites to writing code for a particular task. As students, we must have the caliber to adapt to changing environments and not limit our learning. A few years down the line, the skills and experience of an individual will be given more weight than their degrees or certificates. For a beginner, it is also difficult to land an opportunity to work on real-world projects at a company. Open-source projects provide the perfect combination of working on large-scale projects and learning new skills along the way, just as you might while working at a company.</p>
<p>Programs such as GSoC and C4GT make it even easier for beginners to get started with contributing to open-source projects. In the long run, this helps build a credible record demonstrating the student's contribution skills in a way no certificate can convey at a glance. The mentorship that comes with such programs helps in the overall development of the student: if they face any difficulty while contributing, they can easily consult their mentor and get the issue resolved.</p>
<p>Not only that, C4GT carries a handsome stipend to encourage even more participation from contributors. Successful completion of the program offers contributors a chance to get a PPI/PPO from the organization they contribute to. All these rewards and benefits make C4GT a perfect opportunity for students to get started with open source and contribute to their favorite projects. While the program lasts for 2 months, the aim is to encourage contributors to keep contributing to their projects for years to come and to spread the word about them. Programs like this empower students to gain real-world experience by working on large-scale projects while they are still pursuing their studies.</p>
<h3 id="heading-c4gt-2023">C4GT 2023</h3>
<p>The program had a successful first edition in 2022, where 13 contributors worked on 9 different projects over two months under the guidance of 7 mentors. Currently, the program is in its second edition. With 104 projects across 38 products from 20 participating organizations, and the mentorship of 70+ mentors, the program is sure to make waves, set new records, and leave a long-lasting impact on students. As of writing this article, the list of selected contributors is yet to be released. Fun fact: I have submitted three proposals for this edition. While I wait for the results, you can consider joining the <a target="_blank" href="http://bit.ly/C4GTCommunityChannel">discord</a> community of C4GT to learn how DPGs work, or get any other doubts you may have resolved by the helpful community members.</p>
<p>If you learned something new today, don't forget to share this knowledge with others. Do you know of any more such open-source programs aimed at students? Let me know in the comments below.</p>
]]></content:encoded></item></channel></rss>