AutoBE Generated Backend Server

AutoBE Logo

A backend repository generated by @autobe.

This backend program was automatically generated using @autobe, an AI vibe coding agent for backend servers built on the stack below.

  • TypeScript
  • NestJS / Nestia
  • Prisma
  • Postgres
```mermaid
flowchart
subgraph "Backend Coding Agent"
  coder("Facade Controller")
end
subgraph "Functional Agents"
  coder --"Requirements Analysis"--> analyze("✅ Analyze")
  coder --"ERD"--> database("✅ Database")
  coder --"API Design"--> interface("✅ Interface")
  coder --"Test Codes"--> test("✅ Test")
  coder --"Main Program"--> realize("✅ Realize")
end
subgraph "Compiler Feedback"
  database --"validates"--> prismaCompiler("Prisma Compiler")
  interface --"validates"--> openapiValidator("OpenAPI Validator")
  interface --"generates"--> tsCompiler("TypeScript Compiler")
  test --"validates"--> tsCompiler
  realize --"validates"--> tsCompiler
end
```

This backend application was built following @autobe's waterfall development model, in which each specialized AI agent handles a specific phase of development. The process ensures 100% working code through continuous compiler feedback and validation at every stage.

Each agent receives input from previous phases and produces validated output that becomes the foundation for the next development stage. The Facade Controller orchestrates the entire process, while Functional Agents handle specialized tasks with built-in Compiler Feedback ensuring code quality and correctness.
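The orchestration described above can be sketched as a simple pipeline in which each phase validates its artifact before handing it to the next. This is a hypothetical illustration, not AutoBE's actual API; all names below are invented.

```typescript
// Hypothetical sketch of a waterfall pipeline with per-phase validation.
// All names are invented for illustration; this is not AutoBE's real API.
interface Phase {
  name: string;
  run: (input: string) => string;          // produce an artifact from the prior output
  validate: (artifact: string) => boolean; // stands in for a compiler/validator check
}

function runPipeline(phases: Phase[], requirements: string): string {
  let artifact = requirements;
  for (const phase of phases) {
    artifact = phase.run(artifact);
    if (!phase.validate(artifact)) {
      throw new Error(`Phase "${phase.name}" produced an invalid artifact`);
    }
  }
  return artifact;
}

// Toy phases standing in for Analyze → Database → Interface → Test → Realize.
const phases: Phase[] = ["analyze", "database", "interface", "test", "realize"].map(
  (name) => ({
    name,
    run: (input) => `${input} -> ${name}`,
    validate: (artifact) => artifact.endsWith(name),
  }),
);

console.log(runPipeline(phases, "requirements"));
// -> "requirements -> analyze -> database -> interface -> test -> realize"
```

The key property mirrored here is that a phase's output only becomes the next phase's input after it passes validation, so errors cannot silently propagate downstream.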

The table below shows the mapping between waterfall phases, the corresponding @autobe agents, and the actual deliverables you can find in this repository:

| Waterfall Model | AutoBE Agent | Result |
|-----------------|--------------|--------|
| Requirements | ✅ Facade | Conversation History |
| Analysis | ✅ Analyze | Requirement Analysis Report |
| Design | ✅ Prisma | Entity Relationship Diagram / Prisma Schema |
| Design | ✅ Interface | API Controllers / DTO Structures |
| Development | ✅ Realize | API Provider Functions |
| Testing | ✅ Test | E2E Test Functions |
| Maintenance | - | Use an AI coding tool such as Claude Code |

Project Structure

This template project organizes its directories as follows.

All backend source files are placed in the src directory. When you build the TypeScript sources, the compiled files are emitted into the lib directory, following the tsconfig.json configuration. When you build the client SDK library for npm publishing, its compiled files are placed into the packages directory instead.

NPM Run Commands

The run commands defined in package.json are listed below:

  • Test
    • test: Run test automation program
    • benchmark: Run performance benchmark program
  • Build
    • build: Build everything
    • build:main: Build main program (src directory)
    • build:test: Build test automation program (test directory)
    • build:sdk: Build SDK into main program only
    • build:swagger: Build Swagger Documents
    • dev: Incremental build for development (test program)
  • Deploy
    • package:api: Build and deploy the SDK library to NPM
    • start: Start the backend server
    • start:dev: Start the backend server with incremental build and reload
  • Webpack
    • webpack: Run webpack bundler
    • webpack:start: Start the backend server built by webpack
    • webpack:test: Run the test program against the webpack build

Specialization

Transform this template project into your own.

When you've created a new backend project from this template, you can specialize it by replacing a few placeholder words. Use your IDE's replace-in-files feature, such as Edit > Replace in Files (Ctrl + Shift + H) in VSCode, to replace the words below.

| Before | After |
|--------|-------|
| ORGANIZATION | Your account or corporation name |
| PROJECT | Your own project name |
| AUTHOR | Author name |
| https://github.com/samchon/nestia-start | Your repository URL |

Benchmark

Aggregate

| Phase | Generated | FCSR | Token Consumption | Elapsed Time |
|-------|-----------|------|-------------------|--------------|
| ✅ analyze | actors: 4, documents: 6 | 98.64 % | 4,187,991 | 4302 sec |
| ✅ database | namespaces: 10, models: 53 | 89.87 % | 4,325,708 | 1124 sec |
| ✅ interface | operations: 176, schemas: 200 | 77.61 % | 150,396,697 | 10089 sec |
| ✅ test | functions: 504 | 92.11 % | 55,486,945 | 7769 sec |
| ✅ realize | functions: 267 | 75.89 % | 51,164,188 | 10045 sec |

This table shows the comprehensive metrics for each phase of the AutoBE generation pipeline. For each phase (Analyze, Database, Interface, Test, Realize), it tracks:

  • Phase: The pipeline phase with success (✅) or failure (❌) indicator
  • Generated: Count of artifacts produced (e.g., actors, documents, namespaces, models, operations, schemas, functions)
  • FCSR: Function calling success rate (the share of function calling attempts that succeeded)
  • Token Consumption: Total number of LLM tokens consumed during the phase
  • Elapsed Time: Wall-clock time taken to complete the phase, including all AI agent operations and compiler feedback loops

These aggregate metrics provide visibility into the computational cost and time requirements of the entire generation process, helping identify resource-intensive phases and overall pipeline efficiency.
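As a sanity check, the aggregate cost of this run can be totaled directly from the benchmark table above:

```typescript
// Totals computed from the benchmark table above (values copied verbatim).
const benchmark = [
  { name: "analyze", tokens: 4_187_991, seconds: 4302 },
  { name: "database", tokens: 4_325_708, seconds: 1124 },
  { name: "interface", tokens: 150_396_697, seconds: 10089 },
  { name: "test", tokens: 55_486_945, seconds: 7769 },
  { name: "realize", tokens: 51_164_188, seconds: 10045 },
];

const totalTokens = benchmark.reduce((sum, p) => sum + p.tokens, 0);
const totalSeconds = benchmark.reduce((sum, p) => sum + p.seconds, 0);

console.log(totalTokens);                       // 265561529 tokens overall
console.log((totalSeconds / 3600).toFixed(1));  // "9.3" hours of wall-clock time
```

The interface phase dominates both token and time budgets, which matches its artifact count (176 operations and 200 schemas) in the table.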

Function Calling

| Type | Trial | Validation Failure | JSON Parse Error | Success | Success Rate |
|------|-------|--------------------|------------------|---------|--------------|
| total | 5,341 | 976 | 0 | 4,358 | 81.60 % |
| analyzeScenario | 5 | 0 | 0 | 5 | 100.00 % |
| analyzeWriteUnit | 11 | 0 | 0 | 11 | 100.00 % |
| analyzeWriteSection | 262 | 3 | 0 | 259 | 98.85 % |
| analyzeSectionReview | 17 | 1 | 0 | 16 | 94.12 % |
| databaseGroup | 2 | 0 | 0 | 2 | 100.00 % |
| databaseAuthorization | 3 | 0 | 0 | 3 | 100.00 % |
| databaseComponent | 20 | 2 | 0 | 18 | 90.00 % |
| databaseSchema | 132 | 14 | 0 | 118 | 89.39 % |
| databaseCorrect | 1 | 0 | 0 | 1 | 100.00 % |
| interfaceGroup | 3 | 0 | 0 | 3 | 100.00 % |
| interfaceAuthorization | 14 | 4 | 0 | 10 | 71.43 % |
| interfaceEndpoint | 40 | 0 | 0 | 40 | 100.00 % |
| interfaceOperation | 432 | 11 | 0 | 419 | 96.99 % |
| interfaceSchemaRename | 20 | 0 | 0 | 20 | 100.00 % |
| interfaceSchema | 361 | 15 | 0 | 346 | 95.84 % |
| interfaceSchemaRefine | 614 | 303 | 0 | 311 | 50.65 % |
| interfaceSchemaReview | 512 | 187 | 0 | 325 | 63.48 % |
| interfaceSchemaComplement | 23 | 1 | 0 | 22 | 95.65 % |
| interfacePrerequisite | 357 | 8 | 0 | 348 | 97.48 % |
| testScenario | 451 | 19 | 0 | 430 | 95.34 % |
| testWrite | 525 | 28 | 0 | 497 | 94.67 % |
| testCorrect | 101 | 36 | 0 | 65 | 64.36 % |
| realizeAuthorizationWrite | 12 | 0 | 0 | 12 | 100.00 % |
| realizeAuthorizationCorrect | 21 | 1 | 0 | 20 | 95.24 % |
| realizePlan | 228 | 1 | 0 | 227 | 99.56 % |
| realizeWrite | 992 | 267 | 0 | 723 | 72.88 % |
| realizeCorrect | 182 | 75 | 0 | 107 | 58.79 % |

This table shows the reliability and quality metrics for AI agent function calling operations across all phases. Each row represents a specific operation type (e.g., analyzeScenario, databaseSchema, realizeWrite), tracking:

  • Type: The AI agent operation name
  • Trial: Total number of function calling attempts made by the agent
  • Validation Failure: Calls that produced valid JSON but failed type validation
  • JSON Parse Error: Calls that produced malformed JSON that couldn't be parsed
  • Success: Calls that completed successfully with valid, validated responses
  • Success Rate: Percentage of successful calls out of total attempts
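The Success Rate column is simply successful calls divided by total trials. For instance, the total row and the interfaceSchemaRefine row above can be reproduced as:

```typescript
// Reproduce the "Success Rate" column from trial and success counts.
function successRate(success: number, trials: number): string {
  return ((success / trials) * 100).toFixed(2) + " %";
}

console.log(successRate(4358, 5341)); // "81.60 %" (total row)
console.log(successRate(311, 614));   // "50.65 %" (interfaceSchemaRefine row)
```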

These metrics reveal the effectiveness of AutoBE's validation feedback strategy powered by typia.llm.application<Class, Model>(). When function calls fail type validation, detailed error messages are fed back to the AI agent, enabling iterative correction through self-healing spiral loops.

Success rates vary with model size and capability; smaller models may have lower initial success rates. However, validation feedback enables even weaker models to achieve high success rates through automatic correction cycles, demonstrating the power of compiler-driven development.

License

AutoBE is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). If you modify AutoBE itself or offer it as a network service, you must make your source code available under the same license.

However, backend applications generated by AutoBE can be relicensed under any license you choose, such as MIT. This means you can freely use AutoBE-generated code in commercial projects without open source obligations, similar to how other code generation tools work.