

Showing posts from September, 2023

Serving Llama-2 7B using llama.cpp with NVIDIA CUDA on Ubuntu 22.04

This blog post is a step-by-step guide for running the Llama-2 7B model using llama.cpp, with NVIDIA CUDA on Ubuntu 22.04. llama.cpp is a C/C++ library for inference of Llama/Llama-2 models. It has grown enormously popular alongside the boom in large language model applications. Throughout this guide, we assume the user home directory (usually /home/username) is the working directory.

Install NVIDIA CUDA

To start, let's install NVIDIA CUDA on Ubuntu 22.04. The steps presented here are the same as on the CUDA Toolkit download page provided by NVIDIA, except that I deviate slightly by installing CUDA 11.8 instead of the latest version. At the time of writing, PyTorch 2.0 stable is released for CUDA 11.8, and I find it convenient to keep my deployed CUDA version in sync with that.

$ wget
$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
$ sudo apt update
$ sudo apt install cuda-11-8

After
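The preview cuts off here. As a rough sketch of where the full post heads next (the model filename and the number of offloaded layers below are illustrative assumptions, not values from the post), building llama.cpp with its CUDA (cuBLAS) backend and running a quantized Llama-2 7B model looks roughly like this:

# Sketch only: build llama.cpp with the CUDA backend and run Llama-2 7B.
# The model path and -ngl value are assumptions for illustration.
$ git clone https://github.com/ggerganov/llama.cpp
$ cd llama.cpp
$ make LLAMA_CUBLAS=1                       # compile with cuBLAS/CUDA support
$ ./main -m ./models/llama-2-7b.Q4_0.gguf \
    -p "Hello" -n 128 -ngl 32               # offload 32 layers to the GPU

The -ngl flag controls how many transformer layers are offloaded to the GPU; with enough VRAM, offloading all layers gives the best throughput.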

A Perplexity Benchmark of llama.cpp

Without further ado, here are the results (explanations and discussions later):

Table 1: Perplexity on the wikitext-2 test set.

Model \ Quantization   q4_0     q4_1     q5_0     q5_1     q8_0     fp16
llama-7b               6.157    6.0915   5.9846   5.948    5.9063   5.68
llama-13b              5.385    5.3608   5.285    5.2702   5.2547   5.09
llama-30b              4.2707   -        -        -        -        4.1
alpaca-30b             4.4521   -        -        -        -        -
llama-2-7b             5.9675   6.0398   5.8328   5.8435   5.7897   -
llama-2-7b-chat        7.7641   7.7853   7.5055   7.5392   7.5014   -
llama-2-13b            5.2172   5.2115   5.1343   5.1289   5.1005   -
llama-2-13b-chat       6.62
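Lower perplexity is better, and the fp16 column serves as the unquantized reference. For context on how numbers like these are typically produced, here is a minimal sketch using llama.cpp's bundled perplexity tool (the model and dataset paths are illustrative assumptions, not the exact ones used in the post):

# Sketch only: evaluate a quantized model on the wikitext-2 raw test set.
# Paths below are placeholders; point them at your own model and dataset.
$ ./perplexity -m ./models/llama-7b.Q4_0.gguf -f ./wikitext-2-raw/wiki.test.raw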

The SmileyFace Dream: Everyone can share the dividends of the AI era.

I have decided to take a break from money-making careers for the next 6 months and focus on one thing: building a platform for decentralized AI serving. It has become obvious to me that we are technologically ready to change our economy so that ordinary people, instead of being consistently exploited by the big AI players for both their data and their money, can be compensated in some way and share the dividends of the AI era. The practical way to do this now is to lower the bar for participating in AI serving as much as possible, which has become increasingly feasible thanks to the awesome open-source development in the AI field (e.g., llama.cpp) and permissive licensing from companies like Meta (e.g., LLaMa-2). Together, these have enabled consumer computing devices to serve large generative models. The key is a platform that connects people who need AI inference to people who have spare computing power. If you are familiar with cryptocurrency, this is like a mining pool, but instead of making pe