<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Sys.Log</title>
    <description>Personal notes on computer science, technology, side projects, and hobbies.</description>
    <link>https://blog.unstacked.cc</link>
    <atom:link href="https://blog.unstacked.cc/feed.xml" rel="self" type="application/rss+xml" />
    <language>en</language>
    <lastBuildDate>Fri, 24 Apr 2026 13:34:20 GMT</lastBuildDate>
    
    <item>
      <title>Notes and Learning about AI</title>
      <description><![CDATA[
My learning, notes, and thoughts about the use of AI in Computer Science
      ]]></description>
      <link>https://blog.unstacked.cc/posts/notes-and-learning-about-ai/</link>
      <guid>https://blog.unstacked.cc/posts/notes-and-learning-about-ai/</guid>
      <pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate>
      <content:encoded><![CDATA[
&lt;p&gt;Everyone is talking about how LLMs will replace software developers these days, but so far that has not happened. Here is what I have learned and what I think about AI in Computer Science.&lt;/p&gt;
&lt;h1&gt;Fake Productivity&lt;/h1&gt;
&lt;p&gt;I believe that LLMs are great at simple and tedious tasks. For more complex ones, however, AI shows quick and easy progress up front while hiding the actual time-intensive work (maintaining and debugging), so it ends up being less effective than it seems at first glance.&lt;/p&gt;
&lt;p&gt;Very often I hear from people that they created X in just a day or implemented Y in under 10 minutes using modern AI programming tools. But what this does not capture is the time it will take to fix the inevitable bugs or handle feature changes once this code lands in production. Writing by hand takes longer and might seem tedious, but it actually saves time in the long run, since the implementation is genuinely understood by a human and is easier for other humans to follow than typical LLM-generated code. What I mean is that there is, and always will be, a recognizable coding style from an AI, just as it is usually obvious whether a text was written by a real human or an AI; the same applies to code. To conclude, I believe that every codebase that needs to be maintained and extended will be better off in the long run with purely human (or at most very lightly AI-assisted) development.&lt;/p&gt;
&lt;p&gt;Where I have found AI a great choice is for all the small temporary scripts that, e.g., visualize data at hand or plot an analysis: stuff that does not need to live in production but helps me better understand the problem at hand. Seen this way, there is a new skill I myself am still learning: differentiating the tasks where LLMs are a good fit from those where the LLM seems to help a lot but, in the long run, just makes the project more complex and harder to work with. I see this as AI literacy. Just as society is slowly learning about the bad effects of social media and learning to turn away from those services, computer scientists have to learn to resist the urge to just prompt their problems away (and get a dopamine rush) instead of putting in the effort.&lt;/p&gt;
&lt;p&gt;Another sign that the current LLM productivity is fake is the fact that companies are still hiring new engineers (&lt;a href=&quot;https://trueup.io/engineering/reports&quot;&gt;trueup&lt;/a&gt;). Although many companies enforce the use of LLMs, they still need the manpower to actually steer those agents. This suggests that the actual logic still comes from humans, while the LLM does an okay job at implementing something that will need even more time to be fixed later down the road. Franksworld has also summarized this &lt;a href=&quot;https://www.franksworld.com/2026/04/15/why-companies-are-quietly-rehiring-software-engineers-in-the-age-of-ai/&quot;&gt;pretty well&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is also supported by benchmarks like &lt;a href=&quot;https://arcprize.org/&quot;&gt;ARC&lt;/a&gt;, which show how far AI is from matching human reasoning levels.&lt;/p&gt;
&lt;h1&gt;Wrong Architecture&lt;/h1&gt;
&lt;p&gt;I do believe that LLMs are genuinely helpful and good at some tasks: for example, everywhere a strict machine-readable interface cannot be used or is less efficient. This is where LLMs shine: natural language. Software development is not natural human language; it&#39;s a human abstraction to better understand logic and algorithms, the computer&#39;s way of thinking pressed into a human-understandable form. So I do believe that help desk hotlines, personalized web search, teaching, and the like can very much be replaced by AI in the near future. But AI should not be used or forced onto tasks that can only be described in language through some translation or gap: the loss during that translation is bigger than the gain from using the LLM. One example is cheminformatics, a field at the crossover of chemistry and computer science, where molecules and reactions are described by a context-free grammar language. This representation is often fed to LLMs to predict or model properties and reactions of these compounds. It&#39;s a new direction in chemistry, but it lacks physical knowledge and constraints, and is further limited by the added complexity of the context-free grammar representation of molecules. Meanwhile, there is a far better and more accurate geometric representation that chemists have used for years to get the same job done more intuitively. Note also that the one groundbreaking model in this field (AlphaFold) did not use this textual language but worked with accurate 3D geometry.&lt;/p&gt;
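&lt;p&gt;To make the representation gap concrete, here is a small illustrative sketch of my own (not from any cheminformatics library): the same molecule, ethanol, written both as a SMILES string (assuming that is the kind of context-free grammar language meant above, since it is the most common one) and as explicit 3D coordinates. The coordinate values are rough illustrative numbers, not an optimized geometry.&lt;/p&gt;

```python
# Illustrative sketch: one molecule, two representations.
# NOTE: the helper below is a toy written for this post, not a real
# cheminformatics API, and the coordinates are rough example values.

# Textual representation: a SMILES string ("CCO" is ethanol).
ethanol_smiles = "CCO"

# Geometric representation: (element, (x, y, z)) in angstroms.
ethanol_atoms = [
    ("C", (-0.89, 0.02, 0.00)),
    ("C", (0.57, -0.43, 0.00)),
    ("O", (1.38, 0.73, 0.00)),
]

def heavy_atom_count(smiles: str) -> int:
    """Count heavy atoms in a trivially simple SMILES string
    (only valid for unbranched single-letter elements like 'CCO')."""
    return sum(1 for ch in smiles if ch.isupper())

# Both forms agree on the atoms, but only the geometric one lets us
# compute distances, angles, or enforce physical constraints.
assert heavy_atom_count(ethanol_smiles) == len(ethanol_atoms)
```

&lt;p&gt;The string form is compact, but all spatial information a physical model could exploit lives only in the geometric form.&lt;/p&gt;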
&lt;p&gt;To conclude: I believe that the right representation of the data matters most for a model&#39;s capabilities. As long as our current AI models can only communicate and think in human ways (images, audio, text), they will not get any smarter or better at the task at hand than the human already is. Perhaps the natural language we use is the very thing holding us back from being more intelligent, just as it will be for LLMs.&lt;/p&gt;
&lt;p&gt;This idea that current AI is inherently limited is beautifully described in &amp;quot;&lt;a href=&quot;http://www.incompleteideas.net/IncIdeas/BitterLesson.html&quot;&gt;The Bitter Lesson&lt;/a&gt;&amp;quot; by Richard Sutton and in this fantastic &lt;a href=&quot;https://www.youtube.com/watch?v=2hcsmtkSzIw&amp;amp;t=2s&quot;&gt;video&lt;/a&gt; by &amp;quot;Welch Labs&amp;quot;.&lt;/p&gt;
&lt;p&gt;Another recent phenomenon I came across is &amp;quot;&lt;a href=&quot;https://github.com/JuliusBrussee/caveman&quot;&gt;caveman&lt;/a&gt;&amp;quot;: the idea of interacting with the LLM in caveman-like language to reduce tokens while preserving performance. In my opinion, this is a sign of reduced human thinking too. It&#39;s just too tempting to describe the task at hand in short, incomplete sentences and let the AI infer the details and the actual task to be done. But this is exactly the point where the AI lacks human intuition and intelligence; this is the stage where bugs are introduced, or code is generated that is hard to integrate into existing human-made architecture.&lt;/p&gt;
&lt;p&gt;Another example I want to give of natural language being the wrong representation for reasoning and logic is the current evolution in robotics.&lt;/p&gt;
&lt;p&gt;In this &lt;a href=&quot;https://www.youtube.com/watch?v=2mrGMMmrVNE&amp;amp;t=26s&quot;&gt;video&lt;/a&gt; by Welch Labs, at about minute 13, he compares two approaches to communication between models (in this case, between a vision model and an action model). The first uses an LLM and tokenized language to understand the task and its environment; this plan is then handed to the action model as text, which generates the optimal robot arm movement from it. The second architecture does something similar, but instead of human language it communicates through embeddings. It turned out that the second method worked much better.&lt;/p&gt;

      ]]></content:encoded>
    </item>
    
  </channel>
</rss>
