Instructions to use HappyAIUser/AtmaSiddhiGPTv10-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="HappyAIUser/AtmaSiddhiGPTv10-gguf",
    filename="model-f16.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HappyAIUser/AtmaSiddhiGPTv10-gguf"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "HappyAIUser/AtmaSiddhiGPTv10-gguf",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
```
Use Docker
```sh
docker model run hf.co/HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
- Ollama
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with Ollama:
```sh
ollama run hf.co/HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
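Once the model has been pulled, it can also be queried programmatically through Ollama's local REST API (by default on port 11434). Below is a minimal sketch using only the Python standard library; the model tag is assumed to match what `ollama list` reports after the pull.

```python
import json
import urllib.request

# Build a chat request against Ollama's local REST API (default port 11434).
# "stream": False asks for a single JSON response instead of chunked output.
payload = {
    "model": "hf.co/HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M",
    "messages": [
        {"role": "user", "content": "What is the soul, according to Atmasiddhi?"}
    ],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Sending the request requires the Ollama daemon to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])
```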
- Unsloth Studio
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for HappyAIUser/AtmaSiddhiGPTv10-gguf to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for HappyAIUser/AtmaSiddhiGPTv10-gguf to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for HappyAIUser/AtmaSiddhiGPTv10-gguf to start chatting
```
- Pi
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with Docker Model Runner:
docker model run hf.co/HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
- Lemonade
How to use HappyAIUser/AtmaSiddhiGPTv10-gguf with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull HappyAIUser/AtmaSiddhiGPTv10-gguf:Q4_K_M
```
Run and chat with the model
lemonade run user.AtmaSiddhiGPTv10-gguf-Q4_K_M
List all available models
lemonade list
AtmasiddhiGPTv9-gguf
AtmasiddhiGPTv9-gguf is a fine-tuned version of the LLaMA 3.2 3B Instruct model, designed to provide conversational insights and spiritual guidance based on the teachings of the Atmasiddhi Shastra, a revered Jain text by Shrimad Rajchandra. This model is specially aligned with contemporary interpretations by Shri Gurudevshri Rakeshbhai, making it a valuable tool for exploring the themes of self-realization, soul, and liberation in Jain philosophy.
Model Details
- Model Name: AtmasiddhiGPTv9-gguf
- Base Model: LLaMA 3.2 3B Instruct (Meta)
- Model Type: Language Model (GGUF format)
- Language: English
- Intended Use: Spiritual guidance, philosophical inquiry, Jain studies, self-reflection
- Alignment: Based on the recent commentaries and teachings of Shri Gurudevshri Rakeshbhai on the Atmasiddhi Shastra
- Recommended Platforms: LM Studio, Jan (support GGUF models)
- License: Apache 2.0
- Framework: GGUF-compatible
Model Scope and Purpose
AtmasiddhiGPTv9-gguf is designed to serve as an interactive tool for individuals seeking a deeper understanding of Jain spiritual concepts, guided by the most recent teachings of Shri Gurudevshri Rakeshbhai. This model uses the philosophical foundation of the Atmasiddhi Shastra while adopting the conversational style of the LLaMA 3.2 3B Instruct model, ensuring responses are both spiritually aligned and easily understandable.
Key Philosophical Themes
The model focuses on interpreting key themes of the Atmasiddhi Shastra, particularly as presented in Shri Gurudevshri Rakeshbhai's teachings. These include:
- The Nature of the Soul (Atma): Exploring the soul's inherent qualities, permanence, and its distinction from physical existence.
- Path to Liberation (Moksha): Insights into the steps and virtues needed to achieve liberation from the cycle of birth and death.
- Karma and Its Impact: Explanations of karmic law, the effects of accumulated karma, and how it shapes the soul's journey.
- Self-Realization: Encouraging self-inquiry to unveil true self-identity and transcend ego-driven life.
- Discernment and Detachment (Vairagya): Offering practical advice on embracing detachment, renouncing material attachments, and cultivating spiritual insight.
The model seeks to convey these themes with the depth and clarity characteristic of Shri Gurudevshri's teachings, while maintaining the conversational ease provided by the LLaMA 3.2 3B Instruct model architecture.
Recommended Platforms: LM Studio and Jan
AtmasiddhiGPTv9-gguf is optimized for use with GGUF-compatible applications like LM Studio and Jan, which allow local, offline interactions with the model.
LM Studio
LM Studio is a free application supporting GGUF-formatted models, ideal for downloading and running large language models offline.
How to Use AtmasiddhiGPTv9-gguf with LM Studio:
- Download LM Studio: Visit the LM Studio download page and choose your operating system.
- Install and Launch: Follow the installation instructions provided.
- Load the Model:
- Search for "AtmasiddhiGPTv9-gguf" in the model catalog, or import it manually if previously downloaded.
- Interact with the model via LM Studio's chat interface, or set up a local API server for integration into applications.
For additional guidance, refer to the LM Studio Documentation.
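When LM Studio's local server is enabled, it exposes an OpenAI-compatible HTTP API, by default at http://localhost:1234/v1. The sketch below builds a chat request with Python's standard library; the model identifier shown is an assumption and should match whatever name LM Studio displays for the loaded model.

```python
import json
import urllib.request

# Build an OpenAI-compatible chat request for LM Studio's local server
# (default base URL http://localhost:1234/v1 when the server is enabled).
def build_chat_request(base_url, model, prompt):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "http://localhost:1234/v1",
    "AtmasiddhiGPTv9-gguf",  # assumed name; use the one LM Studio shows
    "What insights does Atmasiddhi offer about liberation?",
)

# Sending the request requires LM Studio's server to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```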
Jan
Jan is an open-source application that supports GGUF models, allowing users to interact with models entirely offline.
How to Use AtmasiddhiGPTv9-gguf with Jan:
- Download Jan: Access the Jan download page.
- Install and Launch Jan: Follow the setup instructions.
- Import the Model:
- Use Jan's model management section to add the AtmasiddhiGPTv9-gguf model.
- Engage with the model via Jan's conversational interface.
Refer to Jan Documentation for more details.
Example Code for Local Use
To load AtmasiddhiGPTv9-gguf with the transformers library, you must point it at the specific GGUF file inside the repository via the `gguf_file` argument; plain `from_pretrained(model_name)` will not find weights in a GGUF-only repo. For example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model repository (GGUF format). The filename below is an assumption --
# check the repository's file list for the actual GGUF filename.
model_name = "HappyAIUser/AtmasiddhiGPTv9-gguf"
gguf_file = "model-f16.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_name, gguf_file=gguf_file)

# Sample input
input_text = "What insights does Atmasiddhi offer about liberation?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```