Projects with this topic
AIlin is a tool that connects AI services, such as Perplexity.ai, with your local computer.
A local agentic testing pipeline consisting of three agents:
- the first creates test cases based on the test object specification
- the second executes the test cases one by one
- the third creates a test report based on the test results
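The three-agent flow above can be sketched as plain functions; this is a hedged illustration of the pipeline shape only, with stubbed agents standing in for the actual LLM calls (all names are hypothetical):

```python
# Hypothetical sketch of the three-agent pipeline: the agent functions
# below are stand-ins for LLM calls, not the project's real code.

def create_test_cases(spec):
    # Agent 1: derive test cases from the test object specification.
    return [f"check that {feature} works" for feature in spec["features"]]

def execute_test_case(case):
    # Agent 2: run a single test case; the result is stubbed here.
    return {"case": case, "passed": True}

def create_report(results):
    # Agent 3: summarize the results into a report.
    passed = sum(1 for r in results if r["passed"])
    return f"{passed}/{len(results)} test cases passed"

spec = {"features": ["login", "logout"]}
results = [execute_test_case(c) for c in create_test_cases(spec)]
print(create_report(results))  # 2/2 test cases passed
```

Each stage consumes only the previous stage's output, which is what makes the three agents independently swappable.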
🤖 nGPT: A Swiss Army knife for LLMs: powerful CLI and interactive chatbot in one package. Seamlessly work with OpenAI, Ollama, Groq, Claude, Gemini, or any OpenAI-compatible API to generate code, craft git commits, rewrite text, and execute shell commands. Fast, lightweight, and designed for both casual users and developers.
KiM Explorer is a two-stage RAG application for transport policy research publications from the KiM Netherlands Institute for Transport Policy Analysis. Users perform semantic search to identify relevant documents, manually select publications, then interact with an LLM using full document context rather than chunks. Built with Python/NiceGUI/OpenAI API, featuring citation generation, conversation history, filtering, and web/CLI interfaces. https://kim-explorer.quan.cat/
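The two-stage pattern this entry describes (rank documents by embedding similarity, then hand the full selected documents, not chunks, to the LLM) can be sketched as follows; the toy embeddings and the answer function are illustrative stand-ins, not KiM Explorer's actual code:

```python
# Minimal sketch of a two-stage retrieval pattern: stage 1 ranks
# documents by cosine similarity, stage 2 passes the *full* text of the
# user-selected documents to the LLM. All data here is made up.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = {
    "cycling_report": {"embedding": [0.9, 0.1], "text": "Full text on cycling policy..."},
    "freight_report": {"embedding": [0.1, 0.9], "text": "Full text on freight transport..."},
}

def semantic_search(query_embedding, top_k=1):
    ranked = sorted(docs, key=lambda d: cosine(docs[d]["embedding"], query_embedding), reverse=True)
    return ranked[:top_k]

def answer_with_full_context(doc_ids, question):
    context = "\n\n".join(docs[d]["text"] for d in doc_ids)
    # In the real app this step would be an OpenAI API call with
    # `context` placed in the prompt; here it is stubbed.
    return f"[answer to {question!r} grounded in {len(doc_ids)} full document(s)]"

selected = semantic_search([0.8, 0.2])  # stage 1; the user confirms the selection
print(selected)  # ['cycling_report']
print(answer_with_full_context(selected, "What drives cycling uptake?"))
```

The manual-selection step between the two stages is the distinguishing design choice: the user, not the retriever, decides which full documents enter the context window.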
CLI LLM chat client written in Nim, with support for Ollama and OpenAI
This is the repository associated with the publication, Automated Reproducible Malware Analysis: A Standardized Testbed for Prompt-Driven LLMs.
Terminal Chat Completion client for Google's Gemini AI models written in Go
Simple C++ interface for the Mistral Language Models. Uses the OpenAPI interface provided by Mistral.
C++ LLM Client Using OpenRouter API
This project demonstrates how to integrate Large Language Models (LLMs) into native C++ applications using the OpenRouter API.
Key Features:
- OpenRouter API Integration: connect to a wide range of AI models via OpenRouter's unified API endpoint.
- C++ Implementation: written in modern C++ for portability and efficiency.
- Command-Line Interface: simple text-based interface for interacting with AI models.
- Easy Configuration: set your API key and preferred model in a config.json file (api_key, url, model). Example:
{ "api_key": "", "url": "https://openrouter.ai/api/v1/chat/completions", "model": "deepseek/deepseek-r1-0528-qwen3-8b:free" }
Dependencies:
- C++11-compliant and forward-compatible
- libcurl (for HTTP requests)
- nlohmann/json (for JSON parsing)
Educational Value:
- Learn how to integrate third-party APIs in C++
- Use C++ to build a minimal conversational AI interface
- Serve as a starting point for more advanced native AI applications
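The request such a client sends can be sketched language-independently; here is a hedged Python illustration of the OpenRouter chat-completions payload and headers, using the endpoint and model from the config example above (the API key is a placeholder, and no network call is made):

```python
# Sketch of the chat-completions request a client like this one sends
# to OpenRouter. The payload is only built and printed; an actual
# client would POST it to config["url"] (via libcurl in the C++ case).
import json

config = {
    "api_key": "YOUR_KEY_HERE",  # placeholder, not a real key
    "url": "https://openrouter.ai/api/v1/chat/completions",
    "model": "deepseek/deepseek-r1-0528-qwen3-8b:free",
}

headers = {
    "Authorization": f"Bearer {config['api_key']}",
    "Content-Type": "application/json",
}
payload = {
    "model": config["model"],
    "messages": [{"role": "user", "content": "Hello!"}],
}

print(json.dumps(payload, indent=2))
```

Because OpenRouter exposes an OpenAI-compatible endpoint, the same payload shape works across the models it routes to; only the `model` string changes.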
MultiNativQA is a multilingual native question-answering (QA) dataset consisting of 64k QA pairs in seven extremely low- to high-resource languages, covering 18 different topics from nine different regions. Paper: https://arxiv.org/pdf/2407.09823. Project: https://nativqa.gitlab.io
BDS LLM Chat - Hugging Face API Wrapper
A Flask-based chat application that serves as a wrapper for the LLaMA 3.2 model from Hugging Face. This application provides a user-friendly web interface and API for interacting with the model.
Features:
- Modern responsive UI with dark/light mode support
- Persistent chat history in browser localStorage
- Markdown rendering for model responses
- REST API compatible with OpenAI-style endpoints
- Centralized model worker for efficient inference
- Redis-backed queue for robust request handling
- Queue-based architecture for handling multiple requests
- Admin dashboard for user and API key management
- External API access with API key authentication
- MongoDB-based persistence for chats, users, and API keys
A natural language API that delivers podcast info for the Oremi Personal Assistant.
Experiment is an experiment is an experiment is an experiment is an experiment is an e̴x̷p̶e̶r̶i̶m̸e̸n̸t̴ ̷i̵s̴ ̷a̵n̷ è̷̜x̴̝͝p̵̨̐e̴̯̐r̴͔̍ì̸̻m̴̛͎e̵̥̔n̶̠̎t̷̠͝ ̶̼̳̕ǐ̷̞͍͂s̷͍̈́ ̶̫̀a̵̠͌n̵̲͊ ̶̣̼̆ḛ̸̀x̵̰͋p̵͉̺̎e̶̛͈̮ř̸̜̜̅ì̵̜̠͗ṃ̴̼͆ė̴̮n̶̪̈́t̸̢͖͋͂
An offline chatbot powered by WebLLM. Some LLMs available for loading are Llama-3.2, Mistral v0.3, and StableLM 2. No MLOps required for deployment. Runs in-browser using WebGPU. Demo at: https://vite-webllm.onrender.com
forge-llm is an Emacs package that integrates Large Language Models (LLMs) with Forge, enhancing the pull request (PR) workflow for GitHub and GitLab users. It automates PR description generation by analyzing git diffs and leveraging existing PR templates, helping developers create clear, structured, and high-quality descriptions effortlessly.
This repository is a mirror; the original is at https://git.rogs.me/rogs/forge-llm