Fucking hell, I'm going Plus now
https://www.youtube.com/v/N4sDzidCudQ
I'd be more impressed if it spun up a time-series database (say, InfluxDB) and created a Grafana instance with the default k6 dashboard you can get from the marketplace (it knows you're using AWS).
When you run k6 you can output to InfluxDB and then add that as a Data Source in Grafana.
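A rough sketch of that pipeline, assuming a local InfluxDB 1.x on its default port and a database name of `k6` (URLs and names are placeholders, not from the video):

```shell
# Run the k6 script and stream results to InfluxDB via the built-in
# v1 output; "k6" at the end of the URL is the target database name.
k6 run --out influxdb=http://localhost:8086/k6 script.js

# Then, in Grafana, add that InfluxDB database as a data source and
# import a community k6 dashboard from the Grafana marketplace.
```

That `--out influxdb=...` flag is the classic InfluxDB v1 output; for InfluxDB 2.x you'd need the xk6-output-influxdb extension instead.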
Not sure why it's doing the sleep either; then again, I suppose it's not real performance testing, but why wait?
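For context on the sleep: in a k6 script, `sleep()` between iterations paces each virtual user to simulate think time. A minimal sketch (endpoint and options are made up, run with `k6 run script.js`):

```javascript
// Minimal k6 script sketch. sleep(1) paces each virtual user to roughly
// one iteration per second, simulating user think time rather than
// hammering the endpoint flat out.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = { vus: 10, duration: '30s' }; // hypothetical load shape

export default function () {
  const res = http.get('https://example.com/'); // placeholder endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```

Drop the `sleep()` and you get a closed-loop hammer test instead of a paced simulation, which may be why it looks pointless in a demo.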
This is probably great for creating basic frameworks, but the real complexity and hard work come with changing requirements, interfaces, external requirements, internal requirements and business change.
Not sure how you can replace that at present, as there is a lot of stuff that might seem 'right' but won't solve the issues being created externally. Plenty of examples of stuff that shifts scope, changes or is affected by external factors: business models, legal models, country laws, regional laws, change of purpose and the like.
I guess you'd have to plug in all that kind of stuff which is outside the scope of IT.
Then you have the different areas involved; not just devs, but support, QA, Testing, Data Analysis, Platform, Business and the like.
Interesting though, but any good book could give you an example of that Terraform, k6 and app setup.
The other thing it would need for best practice would be the ability to scrape the latest specs of changing models in cloud providers or other providers (like GitHub, CircleCI, Docker etc.) and implement the automatic maintenance of those areas. When you have an outdated model (maybe due to new ideas in the arena, or security concerns) the real challenge would be to implement the whole new paradigm without breaking the intention of the original code.

That could be exponential and, indeed, is a concern for any IT organisation: would this be able to get involved with auto deployments, and what is the effect of it breaking code? Its AI ability is only as good as the information it receives, and there are a ton of different inputs. Over the last 15 years I've probably used around 200+ tools and use 40-50 on a weekly basis. It would have to keep up with all that and, more importantly, the interactions between the toolsets, not to mention errors in code and misconfiguration.
This again could be exponential if the toolsets you are using are also being updated by AIs.
Going to be bonkers supporting anything in the future. You'd probably need a separate use-case problem-solving AI just to handle the mismatches and wrongly handled protocols.