# AutoBE Generated Backend Server


A backend repository generated by @autobe.

This backend program was automatically generated by @autobe, the AI vibe coding agent for backend servers, on the following stack:

- TypeScript
- NestJS / Nestia
- Prisma
- Postgres
```mermaid
flowchart
subgraph "Backend Coding Agent"
  coder("Facade Controller")
end
subgraph "Functional Agents"
  coder --"Requirements Analysis"--> analyze("✅ Analyze")
  coder --"ERD"--> database("✅ Database")
  coder --"API Design"--> interface("✅ Interface")
  coder --"Test Codes"--> test("✅ Test")
  coder --"Main Program"--> realize("✅ Realize")
end
subgraph "Compiler Feedback"
  database --"validates"--> prismaCompiler("Prisma Compiler")
  interface --"validates"--> openapiValidator("OpenAPI Validator")
  interface --"generates"--> tsCompiler("TypeScript Compiler")
  test --"validates"--> tsCompiler("TypeScript Compiler")
  realize --"validates"--> tsCompiler("TypeScript Compiler")
end
```

This backend application was built following @autobe's waterfall development model, in which each specialized AI agent handles a specific phase of development. The process ensures 100% working code through continuous compiler feedback and validation at every stage.

Each agent receives input from previous phases and produces validated output that becomes the foundation for the next development stage. The Facade Controller orchestrates the entire process, while Functional Agents handle specialized tasks with built-in Compiler Feedback ensuring code quality and correctness.

The table below shows the mapping between waterfall phases, the corresponding @autobe agents, and the actual deliverables you can find in this repository:

| Waterfall Model | AutoBE Agent | Result |
|-----------------|--------------|--------|
| Requirements | ✅ Facade | Conversation History |
| Analysis | ✅ Analyze | Requirement Analysis Report |
| Design | ✅ Prisma | Entity Relationship Diagram / Prisma Schema |
| Design | ✅ Interface | API Controllers / DTO Structures |
| Development | ✅ Realize | API Provider Functions |
| Testing | ✅ Test | E2E Test Functions |
| Maintenance | - | Use an AI coding tool such as Claude Code |

## Project Structure

This template project organizes its directories as follows.

All backend source files are placed in the src directory. When you build the TypeScript sources, the compiled files are emitted into the lib directory, following the tsconfig.json configuration. When you instead build the client SDK library for npm publishing, its compiled files are placed into the packages directory.
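The src-to-lib layout described above corresponds roughly to a tsconfig.json like the following (a minimal sketch of the relevant options only; the actual file in this repository may set many more):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "./lib",
    "strict": true
  },
  "include": ["src"]
}
```

The `outDir` option is what routes compiled output into lib rather than alongside the sources.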

## NPM Run Commands

The run commands defined in package.json are as follows:

- Test
  - `test`: Run the test automation program
  - `benchmark`: Run the performance benchmark program
- Build
  - `build`: Build everything
  - `build:main`: Build the main program (`src` directory)
  - `build:test`: Build the test automation program (`test` directory)
  - `build:sdk`: Build the SDK into the main program only
  - `build:swagger`: Build the Swagger documents
  - `dev`: Incremental build for development (test program)
- Deploy
  - `package:api`: Build and deploy the SDK library to npm
  - `start`: Start the backend server
  - `start:dev`: Start the backend server with incremental build and reload
- Webpack
  - `webpack`: Run the webpack bundler
  - `webpack:start`: Start the backend server built by webpack
  - `webpack:test`: Run the test program against the webpack build

## Specialization

Transform this template project into your own.

After creating a new backend project from this template, you can specialize it by replacing a few placeholder words. Replace the words below using an IDE feature such as VSCode's Edit > Replace in Files (Ctrl + Shift + H).

| Before | After |
|--------|-------|
| ORGANIZATION | Your account or corporation name |
| PROJECT | Your own project name |
| AUTHOR | Author name |
| https://github.com/samchon/nestia-start | Your repository URL |
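If you prefer the command line over the IDE, the same replacement can be scripted with grep and sed. This is a sketch, demonstrated on a scratch copy so nothing in a real repository is touched; the replacement values `my-org` and `my-project` are examples, not required names:

```shell
# Demonstration in a scratch directory; in the real project you would
# point grep/sed at the repository root instead of "$tmp".
tmp=$(mktemp -d)
printf '{"name": "@ORGANIZATION/PROJECT"}\n' > "$tmp/package.json"

for pair in "ORGANIZATION=my-org" "PROJECT=my-project"; do
  before="${pair%%=*}"; after="${pair##*=}"
  # xargs -r skips sed entirely when grep finds no matching files
  grep -rl "$before" "$tmp" | xargs -r sed -i "s|$before|$after|g"
done

cat "$tmp/package.json"   # -> {"name": "@my-org/my-project"}
```

Note that `sed -i` without a suffix argument is GNU sed syntax; on BSD/macOS use `sed -i ''` instead.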

## Benchmark

### Aggregate

| Phase | Generated | FCSR | Token Consumption | Elapsed Time |
|-------|-----------|------|-------------------|--------------|
| ✅ analyze | actors: 2, documents: 6 | 99.50 % | 2,597,089 | 3399 sec |
| ✅ database | namespaces: 5, models: 27 | 88.03 % | 2,611,177 | 758 sec |
| ✅ interface | operations: 101, schemas: 139 | 70.61 % | 89,182,654 | 7480 sec |
| ✅ test | functions: 284 | 83.76 % | 26,713,192 | 4145 sec |
| ✅ realize | functions: 163 | 82.96 % | 27,283,037 | 5788 sec |

This table shows the comprehensive metrics for each phase of the AutoBE generation pipeline. For each phase (Analyze, Database, Interface, Test, Realize), it tracks:

- Phase: The pipeline phase with success (✅) or failure (❌) indicator
- Generated: Count of artifacts produced (e.g., actors, documents, namespaces, models, operations, schemas, functions)
- FCSR: Function calling success rate
- Token Consumption: Total number of LLM tokens consumed during the phase
- Elapsed Time: Wall-clock time taken to complete the phase, including all AI agent operations and compiler feedback loops

These aggregate metrics provide visibility into the computational cost and time requirements of the entire generation process, helping identify resource-intensive phases and overall pipeline efficiency.

### Function Calling

| Type | Trial | Validation Failure | JSON Parse Error | Success | Success Rate |
|------|-------|--------------------|------------------|---------|--------------|
| total | 3,543 | 755 | 0 | 2,777 | 78.38 % |
| analyzeScenario | 5 | 0 | 0 | 5 | 100.00 % |
| analyzeWriteUnit | 10 | 0 | 0 | 10 | 100.00 % |
| analyzeWriteSection | 176 | 1 | 0 | 175 | 99.43 % |
| analyzeSectionReview | 8 | 0 | 0 | 8 | 100.00 % |
| databaseGroup | 6 | 2 | 0 | 4 | 66.67 % |
| databaseAuthorization | 3 | 0 | 0 | 3 | 100.00 % |
| databaseComponent | 8 | 0 | 0 | 8 | 100.00 % |
| databaseSchema | 98 | 11 | 0 | 86 | 87.76 % |
| databaseCorrect | 2 | 0 | 0 | 2 | 100.00 % |
| interfaceGroup | 2 | 0 | 0 | 2 | 100.00 % |
| interfaceAuthorization | 8 | 2 | 0 | 6 | 75.00 % |
| interfaceEndpoint | 22 | 0 | 0 | 22 | 100.00 % |
| interfaceOperation | 333 | 17 | 0 | 308 | 92.49 % |
| interfaceSchemaRename | 14 | 0 | 0 | 14 | 100.00 % |
| interfaceSchema | 237 | 8 | 0 | 229 | 96.62 % |
| interfaceSchemaRefine | 470 | 293 | 0 | 177 | 37.66 % |
| interfaceSchemaReview | 368 | 159 | 0 | 209 | 56.79 % |
| interfaceSchemaComplement | 25 | 4 | 0 | 21 | 84.00 % |
| interfacePrerequisite | 195 | 1 | 0 | 194 | 99.49 % |
| testScenario | 232 | 21 | 0 | 209 | 90.09 % |
| testWrite | 300 | 19 | 0 | 281 | 93.67 % |
| testCorrect | 176 | 73 | 0 | 103 | 58.52 % |
| realizeAuthorizationWrite | 5 | 0 | 0 | 5 | 100.00 % |
| realizeAuthorizationCorrect | 25 | 2 | 0 | 23 | 92.00 % |
| realizePlan | 154 | 4 | 0 | 150 | 97.40 % |
| realizeWrite | 523 | 100 | 0 | 423 | 80.88 % |
| realizeCorrect | 138 | 38 | 0 | 100 | 72.46 % |

This table shows the reliability and quality metrics for AI agent function calling operations across all phases. Each row represents a specific operation type (e.g., analyzeScenario, databaseSchema, realizeWrite), tracking:

- Type: The AI agent operation name
- Trial: Total number of function calling attempts made by the agent
- Validation Failure: Calls that produced valid JSON but failed type validation
- JSON Parse Error: Calls that produced malformed JSON that couldn't be parsed
- Success: Calls that completed successfully with valid, validated responses
- Success Rate: Percentage of successful calls out of total attempts
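For instance, the Success Rate column is simply successes divided by trials. Recomputing it for the `total` row above:

```typescript
// Recomputing the "total" row's Success Rate from its raw counts.
const trial = 3_543;   // total function calling attempts
const success = 2_777; // attempts that returned a valid, validated response
const successRate = (success / trial) * 100;

console.log(`${successRate.toFixed(2)} %`); // 78.38 %
```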

These metrics reveal the effectiveness of AutoBE's validation feedback strategy, powered by `typia.llm.application<Class, Model>()`. When function calls fail type validation, detailed error messages are fed back to the AI agent, enabling iterative correction through self-healing spiral loops.

Success rates vary with model size and capability; smaller models may have lower initial success rates. However, validation feedback enables even weaker models to achieve high success rates through automatic correction cycles, demonstrating the power of compiler-driven development.

## License

AutoBE is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). If you modify AutoBE itself or offer it as a network service, you must make your source code available under the same license.

However, backend applications generated by AutoBE can be relicensed under any license you choose, such as MIT. This means you can freely use AutoBE-generated code in commercial projects without open source obligations, similar to how other code generation tools work.