love this: "Next, human ingenuity stems from our ability to explore genuine unknowns — spaces where concepts don't yet exist and language hasn't formed to describe them. While AI systems excel at recombining and extrapolating from existing human knowledge, they’re constrained by the linguistic and conceptual frameworks of their training data, offering a sophisticated but backwards-facing reflection of what humanity has already thought and expressed. True innovation often emerges from pre-linguistic intuition and embodied experience, from humans who can navigate uncharted territory where the vocabulary itself hasn't been invented yet. AI augmentation is already proving to have a negative impact on human cognition and critical thinking."
Ammon - excellent post. I, like many others, felt seen and heard when I saw the vibe-coding comic strip. Your point about energy being redirected to data centers instead of humans hit home, especially since I read Altman’s comment on how much water each query consumes.
I remain conflicted, though. I have not been this energized to experiment and build something in a very long time. How usable those apps are is another question altogether.
Here's what has been working so far for me. I'm not super technical, but know the basics and learn as I go.
1. I write out my idea for the app I want to build. I explicitly ask Gemini/ChatGPT/Claude not to build it, but to outline how it would build such an app for personal use. I state that I need the full tech stack for each step and ask for multiple options, with reasoning for why one option is recommended over another, and for the order in which I should build things.
Side note: I feel like with the V0s and Lovables of the world, we have been sold UI-first, when my experience (from coding years ago) has been the opposite: define your data model and your API, and then beautify with V0/Lovable. So I have gone back to that approach, and rightly so. Otherwise you’re stuck making the UI pretty while it doesn’t really do much (which might be fine for demos). I also find that going UI-first leaves you with junk data when I want to use real data.
2. Gemini 2.5 Pro is a good starting point for actual code. Gemini Flash, ChatGPT and Claude have been good for reasoning through problems and for basic bug fixes.
3. Here’s my tech stack: React/Next.js via V0, Flask/Python for the API, and Supabase for the scripts and edge functions.
4. Since the only paid plans I have are V0 and Perplexity Pro, I am bound by each company’s free tier. Once a limit is reached, I switch models and run one model’s suggestions against another’s. It does leave me confused the next day (who gave me which suggestion?), but it’s also a way to test their reasoning and make sure I’m being led down the right path.
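To make the data-model-first side note above concrete, this is the level of sketch I try to pin down before touching any UI. The field names here are just my guesses for a tariff app, not a real schema:

```python
# Data-model-first: sketch the core records before any UI work.
# Field names are illustrative placeholders, not from a real schema.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class TariffRate:
    country: str         # ISO country code, e.g. "US"
    hs_code: str         # harmonized system product code
    rate_percent: float  # ad valorem duty rate
    effective: date      # when this rate took effect


def latest(rates: list[TariffRate]) -> TariffRate:
    """Most recently effective rate; an API endpoint would serve this."""
    return max(rates, key=lambda r: r.effective)
```

Once something like this exists, the API routes and the UI both have a contract to build against, which is the whole point of going data-model-first.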
For context, I used data.gov to build a tariff-tracking app. It’s about 80% complete; I still need to figure out how to get the “web agent/cron job” working to fetch the latest JSON when a new version lands.
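For what it’s worth, the piece I’m missing might be as simple as a fetch-and-compare script run on a schedule (cron, or a Supabase scheduled function). A rough sketch; the feed URL and state file are placeholders, not the real data.gov endpoint:

```python
# Sketch of the "cron job" piece: fetch a JSON feed, act only when it changed.
# FEED_URL and STATE_FILE are placeholders; schedule this with cron or a
# Supabase scheduled edge function.
import hashlib
import json
import urllib.request
from pathlib import Path

FEED_URL = "https://example.gov/tariffs.json"  # placeholder, not a real endpoint
STATE_FILE = Path("last_seen.sha256")


def digest(payload: bytes) -> str:
    """Stable fingerprint of the feed contents."""
    return hashlib.sha256(payload).hexdigest()


def fetch_if_new(url: str = FEED_URL):
    """Return parsed JSON when the feed changed since the last run, else None."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        payload = resp.read()
    fingerprint = digest(payload)
    if STATE_FILE.exists() and STATE_FILE.read_text() == fingerprint:
        return None  # nothing new landed
    STATE_FILE.write_text(fingerprint)
    return json.loads(payload)
```

Hashing the whole payload sidesteps needing the publisher to expose a version number, at the cost of re-downloading the file each run.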
My evaluation is this:
I am not getting anything built and shipped over a weekend with my free plans and my skill set – yet.
I feel that I can create personal projects, but not something that others can use - yet.
How do I make it secure and avoid vulnerabilities? That is still a gnarly issue for me. I’m sure there are tools that do it for you, but I do miss a human code reviewer!
I remain hopeful about forward progress.
Thank you for sharing your approach Kalpana! I find it incredibly interesting to learn about what's working and what's not. I also feel many who are sharing their successes are not being transparent about their challenges. When I speak openly with skilled engineers with deep knowledge of AI tech stacks, I generally find a great deal of frustration along with their success stories. My friends who are technical project managers with a background in coding seem to be the most successful, as they understand how to plan and orchestrate the execution — along with a sharp eye for catching divergences and the technical language to clearly articulate needs.
Re. UI-first... I'm seeing an interesting shift towards UI-first, or at least a parallel path. I'm not sure I like the v0 or Lovable approach, as it doesn't give me enough control. But I'm finding my design side is much more engaged in the prototyping phase, where I can sketch ideas in Figma, then quickly bring those ideas to life. The visual clarity is another layer of clear instruction for the AI.
Re. security... yes, this is a huge potential problem for many playing in this space. I'm sure there are thousands of bots trawling Vercel, looking for vulnerabilities. I try to fall back on the security protocols I've been using for many years. I've also found that reviewing the project for potential security holes with AI can help catch issues. I recently deployed a "security agent" in one of my projects, whose sole responsibility is assessing security concerns, but this feels like a dangerous game. Nothing beats having a DevOps friend who can review and help.
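One concrete fallback I lean on: make sure the boring baseline response headers are in place before worrying about exotic attacks. A framework-agnostic sketch (the values are common defaults, not a complete policy; in Flask you would apply this in an after_request hook):

```python
# Baseline security headers worth reviewing for. Values are common defaults,
# not a complete or project-specific policy.
SECURE_DEFAULTS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "Content-Security-Policy": "default-src 'self'",
}


def harden(headers: dict) -> dict:
    """Return headers with secure defaults filled in (existing values win)."""
    return {**SECURE_DEFAULTS, **headers}
```

An AI review pass plus a checklist like this won't replace a human reviewer, but it catches the low-hanging fruit.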
Good luck and I look forward to hearing how your experiments progress!
Re UI-first: I've found it quite possible to preload instructions into a vibe-coding tool like Lovable about the data model and the structure of the app, including how you would like it to be created and componentized. It's not perfect, but if you have some understanding of good development modularity and the areas where you want flexibility, you can drive a lot of that in vibe-coding, especially if that's your starting prompt. I think the UI presented to me is more a reflection of my level of thinking than truly UI-first.
80/20 and vibe-coding tools: I think there's a really neat but not yet realized prototyping environment, where an initial prompt gets you 80% of the way there, and then what we really need are direct-manipulation tools: the ability to drag and drop a component that was created, to stretch it, or to add an element to the screen by pulling it from a palette. The most powerful vibe-coding tool will allow near-effortless fluidity between direct manipulation and natural-language requests.
Agree with you on the need for more direct manipulation/refinement tooling. Currently, it's often in those refinement requests that things go sideways, which is likely due to pushing the model outside its known boundaries and into more hallucination space. But I think if you can compartmentalize problems, there's a greater potential for success.
I feel the UI-first dynamic has a similar problem. If the intent is simply to use standard patterns, then constructing UI around an idea is great. If the UI is novel, then the dance between product function and UI representation gets tricky. Again, compartmentalizing can help steer intent.
Thanks for the thoughts Stefan! I'd love to hear more about your experience using these tools.
Thanks Stefan!