WebUI Shenanigans

That looks pretty dang good already.

How do you feel about that screen overall? The price is okay, but is it worth the effort? The new WebUI v3 is doing much better after the recent merge. Is it worth replicating something like that here, or better to keep tweaking the WebUI v3?

1 Like

Lol. I’m mostly doing this for fun and as a way to play around with AI. I don’t need another pendant; honestly, I don’t need one at all. The screen itself is okay. The resolution and colors look fine, though it does have a narrow viewing angle. Having tabs without scrolling feels good.

1 Like

How are you using AI?

I’m using it as a coding assistant. It’s actually written most of the code so far. I’m using Visual Studio Code with GitHub Copilot (on a Pro trial), using the Claude Sonnet 4.5 model (pretty new) in agent mode, which means it can update code in files directly and use/read the terminal with approval.

This is how it started, with a lot of iteration after that.

Start

#new I want to create a new project called FluidTouch that uses LVGL with a Elecrow CrowPanel 7 inch display

I’ll create a new LVGL project workspace for your FluidTouch project with the Elecrow CrowPanel 7-inch display.

Generating plan to create a new workspace

Now I’ll set up the FluidTouch project workspace systematically. Let me start by creating the instruction file and tracking progress.

Created 6 todos

Created copilot-instructions.md

Starting (2/6) Scaffold LVGL project structure

Created platformio.ini

Created main.cpp

Created lv_conf.h

Created User_Setup.h

Created README.md

Starting (5/6) Install required extensions

PlatformIO IDE extension is already installed

Completed (5/6) Install required extensions

Perfect! Your FluidTouch project is now set up and ready for development! :tada:

What’s Been Created

:white_check_mark: Project Structure

:white_check_mark: Features Included

  • LVGL v8.3 graphics library
  • TFT_eSPI display driver for 800x480 resolution
  • Touch input support
  • Sample UI with label and button
  • PSRAM support enabled

:white_check_mark: Extensions

  • PlatformIO IDE (already installed)

Next Steps

  1. Build the project: Click the PlatformIO icon in the sidebar, then click “Build”
  2. Connect your CrowPanel: Plug in your Elecrow CrowPanel 7" display via USB
  3. Upload: Click “Upload” in PlatformIO to flash the firmware
  4. Monitor: Use Serial Monitor to see debug output

Important Notes

:warning: Pin Configuration: The pin assignments in User_Setup.h are typical for CrowPanel, but verify them against your specific model’s documentation.

:warning: Touch Calibration: You may need to adjust the touch calibration values in main.cpp for accurate touch input.

:warning: Display Driver: If the display doesn’t work, try uncommenting the ST7796_DRIVER line in User_Setup.h instead of ILI9488_DRIVER.

The project is ready to build and deploy to your Elecrow CrowPanel 7" display!

It got a bunch of things wrong that we chatted through, but it wasn’t a terrible start. Sometimes it started going in circles, at which point I just fixed things myself. At one point, I pointed it at an Elecrow example project, and it was able to parse all the pin mappings from it. Sometimes it was simply easier to fix things myself, so I did.

Once I got to using LVGL to actually try and create a UI, it did quite well. It was interesting how it would remember context. For example, I gave it the $SS output from my pen plotter to show as the example text for the terminal. Later, I told it to create a grid of buttons for the macro tab with reasonable names. It picked up from the terminal output that it was a pen plotter, so it created Pen Up and Pen Down macros.
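
For anyone curious, here’s roughly what that kind of macro grid looks like in LVGL v8. This is a hand-written sketch, not the actual FluidTouch code; the function name, parent object, and macro list are placeholders, and it assumes LV_USE_GRID is enabled in lv_conf.h.

```cpp
#include <lvgl.h>

// Hypothetical macro list; the real project generated pen-plotter names like these.
static const char *macro_names[] = {"Pen Up", "Pen Down", "Home XY", "Park"};

// Build a simple 2x2 grid of macro buttons on the given parent (e.g. the Macros tab).
static void create_macro_grid(lv_obj_t *parent) {
  static lv_coord_t col_dsc[] = {LV_GRID_FR(1), LV_GRID_FR(1), LV_GRID_TEMPLATE_LAST};
  static lv_coord_t row_dsc[] = {LV_GRID_FR(1), LV_GRID_FR(1), LV_GRID_TEMPLATE_LAST};

  // Sets the grid descriptors and switches the parent to grid layout.
  lv_obj_set_grid_dsc_array(parent, col_dsc, row_dsc);

  for (uint32_t i = 0; i < 4; i++) {
    lv_obj_t *btn = lv_btn_create(parent);
    lv_obj_set_grid_cell(btn, LV_GRID_ALIGN_STRETCH, i % 2, 1,
                              LV_GRID_ALIGN_STRETCH, i / 2, 1);
    lv_obj_t *label = lv_label_create(btn);
    lv_label_set_text(label, macro_names[i]);
    lv_obj_center(label);
  }
}
```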

For the Files tab, I told it that I wanted a list of gcode files with a play button to run each file and a dropdown to select Flash or SD. At first the alignment of the filenames was wrong, so I told it, and it fixed it in one try.
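
Here’s a rough sketch of how that Files tab can be put together in LVGL v8. Again, this is hand-written for illustration rather than the real code; the callback, file list, and the run-command string in the comment are placeholders.

```cpp
#include <Arduino.h>
#include <lvgl.h>

// Called when a file's play button is pressed; the filename is passed as user data.
static void play_cb(lv_event_t *e) {
  const char *filename = (const char *)lv_event_get_user_data(e);
  // The real pendant would send something like $SD/Run=<filename> to FluidNC here.
  Serial.printf("Run %s\n", filename);
}

// Build the Files tab: a Flash/SD source dropdown plus a row per g-code file.
static void create_files_tab(lv_obj_t *parent, const char **files, size_t count) {
  lv_obj_set_flex_flow(parent, LV_FLEX_FLOW_COLUMN);

  lv_obj_t *source = lv_dropdown_create(parent);
  lv_dropdown_set_options(source, "Flash\nSD");

  for (size_t i = 0; i < count; i++) {
    lv_obj_t *row = lv_obj_create(parent);
    lv_obj_set_size(row, LV_PCT(100), LV_SIZE_CONTENT);
    lv_obj_set_flex_flow(row, LV_FLEX_FLOW_ROW);
    lv_obj_set_flex_align(row, LV_FLEX_ALIGN_SPACE_BETWEEN,
                               LV_FLEX_ALIGN_CENTER, LV_FLEX_ALIGN_CENTER);

    lv_obj_t *name = lv_label_create(row);
    lv_label_set_text(name, files[i]);

    lv_obj_t *play = lv_btn_create(row);
    lv_obj_t *icon = lv_label_create(play);
    lv_label_set_text(icon, LV_SYMBOL_PLAY);
    lv_obj_add_event_cb(play, play_cb, LV_EVENT_CLICKED, (void *)files[i]);
  }
}
```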

It’s pretty wild. While I don’t think you can just “vibe code” your way to a quality solution, with a bit of development background, it’s a useful tool.

I should probably talk with some humans for a bit now.

7 Likes

This is fun. It’s interesting to try and think through what you’d want in a UI. I think I’ll probably end up with a working pendant here, but that is probably months away. Nothing is talking to FluidNC yet and I don’t plan on doing that until I settle on the UI first. Some of this may translate to the WebUI as well.

AI is proving to be a super useful tool. This is far better than I think I would have made without it. Here’s a screenshot of the work-in-progress jog screen (AI helped me make a small web server to capture it). I still plan on having a joystick jog, but sometimes I don’t want that.

This is nowhere near useful at the moment, but if anyone is curious, I did put what I have so far up on Github in a dev branch.

7 Likes

That is a good problem to think about. UI design is not easy. Calibration mode, typical mode, production mode.

So this will be 100% independent and not use the WebUI?

It doesn’t need the WebUI.

The built-in protocol, like the one used for the pendant, is all that’s necessary to build any pendant you want.

1 Like

I have a 6" screen, can you make it work with that?

I like my tabs to be on the right instead of the left, can you make a custom version for me?

I like gSender, can you change the jog controls to be like that?!?!?

6 Likes

It’s nice being the developer on a hobby project because while I’m open to suggestions, ultimately my opinion wins and I can tell other people no. I know people are wildly opinionated on UIs.

3 Likes

Highest value approach is to document how to do an implementation (“Pave the road”).

Then let users write their own and let those then compete for mindshare/users.

Keep your own hobby project true to what you want out of it.

5 Likes

I’ve done several things to improve maintainability/readability that would help someone if they wanted to tweak it to their liking. AI would likely be a big help. There is a copilot instructions file that describes some details for onboarding an AI agent. When I first started, the AI agent put like 95% of the code into main.cpp. Now it’s broken out into separate files for each tab and subtab so it’s well modularized. Tonight, I had it get rid of all the hardcoded colors and put them in a theme file. Using something common like LVGL means there’s a lot of documentation and expertise.
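
As a taste of what that theme file looks like, here’s a minimal sketch. The names and color values are made up for illustration and aren’t the project’s actual palette.

```cpp
// theme.h — illustrative sketch of a central theme file: every screen pulls
// its colors from one place instead of hardcoding them.
#pragma once
#include <lvgl.h>

namespace theme {
  // Palette (names and values are placeholders)
  static const lv_color_t BACKGROUND = lv_color_hex(0x1e1e1e);
  static const lv_color_t PANEL      = lv_color_hex(0x2d2d2d);
  static const lv_color_t TEXT       = lv_color_hex(0xe0e0e0);
  static const lv_color_t ACCENT     = lv_color_hex(0x2196f3);
  static const lv_color_t WARNING    = lv_color_hex(0xff9800);
  static const lv_color_t ALARM      = lv_color_hex(0xf44336);
}

// Usage in a tab file:
//   lv_obj_set_style_bg_color(panel, theme::PANEL, 0);
//   lv_obj_set_style_text_color(label, theme::TEXT, 0);
```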

Switching to a different display of the same resolution (800x480) probably would not be difficult. If the resolution is different, it’s quite a bit more involved. Elecrow makes an “Advance” version of the 7 inch display with an IPS screen that probably wouldn’t be too hard. Now they have an even newer 7 inch version with a 1024x600 display using an ESP32-P4 + ESP32-C6. That would take some work.

The other thing I want to do with this is support using it as a wireless or wired pendant. I’ll mainly use it wireless, but I don’t think it’s that big of a leap to support both.
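
The way I’m thinking about wired vs. wireless is to hide the connection behind one small interface so the UI code doesn’t care which it is. A rough sketch of that idea (names are illustrative, not the actual code):

```cpp
#include <Arduino.h>

// Minimal transport interface so the UI doesn't care whether the pendant is
// wired (UART) or wireless (WebSocket).
class FluidLink {
public:
  virtual ~FluidLink() = default;
  virtual bool connect() = 0;
  virtual void sendLine(const String &line) = 0;   // e.g. "$J=G91 G21 X10 F1000"
  virtual void poll() = 0;                         // pump incoming data
  // Incoming lines (status reports, ok/error, messages) go to one callback.
  void onLine(void (*cb)(const String &)) { line_cb = cb; }
protected:
  void (*line_cb)(const String &) = nullptr;
};

// Wired variant: read the UART and hand complete lines to the callback.
class UartLink : public FluidLink {
public:
  explicit UartLink(HardwareSerial &port) : serial(port) {}
  bool connect() override { serial.begin(115200); return true; }
  void sendLine(const String &line) override { serial.println(line); }
  void poll() override {
    while (serial.available()) {
      char c = serial.read();
      if (c == '\n') { if (line_cb) line_cb(buffer); buffer = ""; }
      else if (c != '\r') buffer += c;
    }
  }
private:
  HardwareSerial &serial;
  String buffer;
};

// A WebSocketLink would implement the same interface on top of a WS client.
```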

I’m not saying I wouldn’t help someone make changes, but I also know if it became popular, it could become a burden so I need to know where to draw the line. The FluidNC project has this problem. I figure if I’m still having fun, I’m on the correct side of the line.

3 Likes

Really interesting. Thanks for going into so much detail on your process.

Gives me an idea….

2 Likes

Ok, I’m getting a bit excited. Today I realized that, wirelessly, it would be nice to use this with all of my FluidNC devices, of which I currently have three (LR3, pen plotter, eggbot). So I’m working on a machine selection page. Each machine could be wired or wireless, and each wireless machine might connect to a different wireless network.
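
The machine selection page basically boils down to a small per-machine profile. Something along these lines (hypothetical names, networks, and hostnames, just to show the shape of it):

```cpp
#include <Arduino.h>

// Hypothetical per-machine profile for the machine selection page.
enum class LinkType { Wireless, Wired };

struct MachineProfile {
  const char *name;       // shown on the selection page
  LinkType    link;
  const char *ssid;       // wireless only: each machine may be on a different network
  const char *password;
  const char *host;       // FluidNC hostname or IP
};

// Example list covering the three machines mentioned above (values are placeholders).
static const MachineProfile machines[] = {
  {"LR3",         LinkType::Wireless, "shop-wifi",   "********", "lr3.local"},
  {"Pen Plotter", LinkType::Wireless, "office-wifi", "********", "plotter.local"},
  {"EggBot",      LinkType::Wired,    nullptr,       nullptr,    nullptr},
};
```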

I’m pretty happy with the look of the status tab and status bar (which always shows). I think it’s time to link it up to a machine and see how well it performs updating those values in real time.
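
For the real-time updates, the pendant will mostly be digesting Grbl-style status reports like <Idle|MPos:10.000,20.000,5.000|FS:500,10000>. A minimal parsing sketch, not the actual code; the field set and order can vary, so each field is matched by prefix:

```cpp
#include <Arduino.h>
#include <stdlib.h>

// Subset of the fields shown on the status tab/bar.
struct MachineStatus {
  String state;                    // Idle, Run, Jog, Alarm, ...
  float  mpos[3] = {0, 0, 0};      // machine position
  float  feed = 0, rpm = 0;        // feed rate and spindle speed
};

static bool parseStatusReport(const String &line, MachineStatus &out) {
  if (!line.startsWith("<") || !line.endsWith(">")) return false;
  String body = line.substring(1, line.length() - 1);

  int start = 0;
  bool first = true;
  while (start < (int)body.length()) {
    int end = body.indexOf('|', start);
    if (end < 0) end = body.length();
    String field = body.substring(start, end);

    if (first) {
      out.state = field;           // first field is the machine state
      first = false;
    } else if (field.startsWith("MPos:")) {
      const char *p = field.c_str() + 5;
      char *endp;
      for (int axis = 0; axis < 3 && *p; axis++) {
        out.mpos[axis] = strtod(p, &endp);
        p = (*endp == ',') ? endp + 1 : endp;
      }
    } else if (field.startsWith("FS:")) {
      const char *p = field.c_str() + 3;
      char *endp;
      out.feed = strtod(p, &endp);
      if (*endp == ',') out.rpm = strtod(endp + 1, nullptr);
    }
    start = end + 1;
  }
  return true;
}
```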

12 Likes

Would be awesome to have a 10-13” touchscreen working :drooling_face:. An MS Surface Pro 3 is less than $70 on eBay. WebUI v2 friendly… Nvm

I’m curious if this is a better way to do the communication on the CYD pendants to make them wireless without BT or ESP-NOW.

I think so, yes. I have a FluidDial and looked into it a while ago, but if I recall correctly, the code that parses the status strings and such is tightly integrated with the UART code.

Making great progress…

4 Likes

Here’s a clip of the current functionality. I’m probably getting rid of the terminal tab since it was hurting performance.

While I used a lot of words, AI has still written 95% of the actual code. I told it to “wire up” the screens, and it made a whole FluidNC client and connected the buttons to the appropriate commands. I had to have it tweak a bunch of things, but I’m wildly impressed at how close it got. I would have made nowhere near this progress without it.

Still a bunch of things to do, but it’s already reached a useful state.

3 Likes

So are you saying that AI knows what FluidNC is and how to connect to it?

I told it to connect via a WebSocket, which it generally knew how to do. I also had it read some FluidNC wiki pages. But yes, it knows some things about FluidNC, Grbl, and CNC machines in general. I also told it how FluidNC does some things.
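
For anyone wanting to try the same thing, the connection itself is pretty small. Here’s a minimal sketch, assuming the links2004 arduinoWebSockets library; the SSID, hostname, and port 81 are placeholders, so check your controller’s actual settings:

```cpp
#include <WiFi.h>
#include <WebSocketsClient.h>   // links2004/arduinoWebSockets (assumed library choice)

WebSocketsClient ws;

// Incoming frames carry the same lines FluidNC prints on the serial console:
// status reports, "ok"/"error", [MSG:...] lines, etc.
static void wsEvent(WStype_t type, uint8_t *payload, size_t length) {
  switch (type) {
    case WStype_CONNECTED:
      ws.sendTXT("?");                              // ask for an initial status report
      break;
    case WStype_TEXT:
      Serial.printf("rx: %.*s\n", (int)length, payload);
      break;
    case WStype_DISCONNECTED:
      break;
    default:
      break;
  }
}

void setup() {
  Serial.begin(115200);
  WiFi.begin("your-ssid", "your-password");
  while (WiFi.status() != WL_CONNECTED) delay(250);

  // FluidNC exposes a WebSocket alongside its web server; the hostname and
  // port 81 here are assumptions, so check your controller's settings.
  ws.begin("fluidnc.local", 81, "/");
  ws.onEvent(wsEvent);
  ws.setReconnectInterval(2000);
}

void loop() {
  ws.loop();
  // Example command: ws.sendTXT("$J=G91 G21 X10 F1000"); jogs X +10 mm.
}
```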

It feels weird that I don’t know how much of the credit for creating this is mine. On one hand, I have a bunch of knowledge about programming and FluidNC, opinions on UIs, etc., that I told it and based instructions on. I told it a lot of specific things to do and how to fix certain things. On the other hand, I didn’t write much of the code. It definitely couldn’t have created this without me, and I’m not sure it would have come out as well, and certainly not as quickly, without it.

This is a joke (I think? maybe?) but it’s almost like it’s psychologically manipulating me into liking it. It’s constantly telling me how my ideas are great and how much they’ve improved this pendant.

2 Likes