Working with AI assistants
NOTE: These blogs are passing thoughts. My views might change with time.
Lately, the trend of setting up a code assistant locally and letting it do the work feels like the way forward. Tasks that used to take time and effort get completed so easily, with no human intervention at all, unless you have run out of tokens, or you haven't granted your assistant all the access it needs because you are a bit scared of what it might peep into.
For someone who started writing code a decade ago in a notebook, sketching how HTML tags would appear on screen by drawing them bold and italic with an H1, seeing everything today done from a single prompt, right up to building a webpage that works fantastically, is jaw dropping.
When I first ran C++ code, memorising the headers and the curly braces in the GCC editor was quite tedious and amazing at the same time:
#include <stdio.h>
Cut to today, where I can take a language I don't even know, run its code, build a whole repo out of it, write a README.md with all the emojis, and ship it in a Docker-ready version. It is mind boggling.
Everyone has at some point written code from scratch for an assignment or a project, driven by an idea or just a grade. There was always the intuition that it would work one way in the first review and improve on features in the upcoming ones. But hold on, now I have an assistant who can get things done the way I imagined, and I need not drop my sweat and blood on it.
There is a saying in Kannada, a gaade (proverb): Kai kesaradare bai mosaru, only if the hands get dirty does the mouth get curd. But now it is turning into: Claude ballavanige apayavilla, Codex ballavanige tondareye illa, adre tokens iddaste kaalu chachu. He who knows Claude faces no danger, he who knows Codex has no trouble, but stretch your legs only as far as your tokens reach.
The term vibe coding had not been coined yet when we built a Python GUI to count Nissl-stained cells. It started with me trying an autoencoder on a set of images to get the right embeddings, so that I could run peak_local_max (a function in scikit-image). That was later replaced with an easier method, led by Keerthi sir, using PCA and GMM. Once we had results, as engineers we wanted the tool to be usable by the annotators and biologists who would get us the cell data.
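The PCA-plus-GMM step can be sketched roughly like this. To be clear, this is a minimal illustration, not the original code: the synthetic patch features, the two-component counts, and the random seeds are all hypothetical stand-ins for the real stained-section image data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for per-patch features extracted around candidate
# cells; the real input came from Nissl-stained section images.
rng = np.random.default_rng(0)
cells = rng.normal(loc=0.0, scale=0.5, size=(100, 64))       # "cell" patches
background = rng.normal(loc=3.0, scale=0.5, size=(100, 64))  # background patches
patches = np.vstack([cells, background])

# Reduce to a few principal components, then fit a 2-component GMM
# to separate cell-like patches from background.
embedding = PCA(n_components=2, random_state=0).fit_transform(patches)
gmm = GaussianMixture(n_components=2, random_state=0).fit(embedding)
labels = gmm.predict(embedding)
```

The appeal of this route over the autoencoder is that it has almost no moving parts: two library calls, no training loop, and the cluster assignments come out ready to count.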
The best way to put the tool out was a Python GUI (PyQt; I still wonder what was so cute about it). This is the typical researcher/developer dilemma: why create a new thing when it already exists? But ease of use wasn't on my radar (I am still learning the user's perspective of using things, because the learning reward matters). One fresh morning with GPT-3.5 and minimal prompt engineering (not sure the term existed back then), I started, and had it in 20 minutes: figuring out what goes where, rerunning things to find the error, debugging, asking it to fix. Long before skills.md, you had to do all that manually. Ta-da, we had a working GUI that ran smoothly as a .exe on any machine.
Even though we had built a working GUI with a machine-learning backend, it never felt like an achievement, because we never gave it a thought. Maybe the reward function was never defined that way. Now it is, and the very idea that it can create things you have no idea about, and still get them to show up and work, I never thought this would be the future way of building things.
But the satisfaction of seeing things you wrote and made work yourself is never the same now. It surely removes the mundane tasks and helps in so many ways, but the very reward emotion has been hacked!