Posts

Generative Control Inference

Chinese version: Inference Algorithms for Generative Control - Zhihu (zhihu.com). In the previous post, I gave a peek into generative control, a new idea that controls a system by learning its intrinsic characteristics as a generative world model. It avoids the overshoot and over-exploration problems of PID control and reinforcement learning. Rewriting the formulation a bit, we have \[[v,w,x] = g(z), \quad z \sim \mathbb{D},\] in which \(v\) is the collection of control objectives to be minimized, \(w\) contains the output control signals, and \(x\) represents the input signals thought to offer information to the control problem. During training, we are presented with a dataset of \([v,w,x]\) triplets, each representing the control objective \(v\) achieved under the output and input condition \([w,x]\). We have an algorithm that can produce \(g\), a generative model that can disambiguate the uncertainty of multiple acceptable \(v\), \(w\) or \(x\)'s, each under the condition of the other
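To make the formulation concrete, here is a minimal sketch (in PyTorch) of one way inference could work with a trained \(g\): freeze the generator, search the latent space for a \(z\) whose generated objective and inputs match the desired \(v\) and the observed \(x\), then read off \(w\). The latent_dim attribute, the dims split, and the optimization-based conditioning are illustrative assumptions of mine, not the algorithm from the post.

import torch

def infer_control(g, v_target, x_obs, dims, steps=500, lr=1e-2):
    # Sizes of the v, w, x segments within g's concatenated output.
    dv, dw, dx = dims
    # Start from a random latent draw and optimize it directly.
    z = torch.randn(1, g.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        v, w, x = g(z).split([dv, dw, dx], dim=-1)
        # Penalize mismatch with the desired objective and the observed inputs.
        loss = ((v - v_target) ** 2).mean() + ((x - x_obs) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, w, _ = g(z).split([dv, dw, dx], dim=-1)
    return w  # control signals consistent with v_target under x_obs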

A Peek into Generative Control

Chinese version: A Peek into Generative Control - Zhihu (zhihu.com). As a peek into some of my most recent work applying generative modeling to various industries, I'm proudly presenting an idea that illustrates how powerful generative models can be for industrial control. My team and I are working to rapidly expand this idea into many different areas, and we have not yet seen its limit. A General Formulation of Generative Models \[y = g(z), \quad z \sim \mathbb{D}\] where \(y\) is a sample from data, \(g\) is a generator neural network written as a function, and \(z\) follows a pre-defined distribution \(\mathbb{D}\). It is easy to see that generative adversarial networks (GANs) naturally fit the above formulation. For autoregressive language models (transformers or otherwise), \(z\) is the concatenation of all the random sampling variables used during the decoding process. For stable diffusion, \(z\) is the concatenation of all the noise added in the diffusion process. A
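As a concrete (and purely illustrative) instance of this formulation, the sketch below shows two of the cases above in PyTorch: a GAN, where \(z\) is a single Gaussian draw, and an autoregressive language model, where \(z\) is the concatenation of the categorical draws made at each decoding step. The network shapes, the BOS token id, and the fixed decoding length are assumptions of mine, not details from the post.

import torch

def gan_sample(generator):
    # GAN case: y = g(z) with a single latent draw z ~ N(0, I).
    z = torch.randn(1, 128)
    return generator(z)

def autoregressive_sample(lm, max_len=32):
    # LM case: z is the concatenation of one categorical draw per step.
    tokens = [0]  # assume token id 0 is BOS
    for _ in range(max_len):
        logits = lm(torch.tensor([tokens]))       # [1, seq_len, vocab]
        probs = logits.softmax(dim=-1)[0, -1]     # next-token distribution
        next_token = torch.multinomial(probs, 1).item()  # one coordinate of z
        tokens.append(next_token)
    return tokens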

Thoughts on AIGC for Non-AI Industries

1. AIGC is a paradigm shift from goal-oriented problem solving to free-form interactive engineering. It is time to expand our imagination to products that can talk and draw with customers, on top of being able to complete their own tasks through these interactions. Your fridge can help order groceries when asked, but can also answer generic questions like ChatGPT does. There is no reason to limit the AI to what its shell product is designed to do. For manufacturers of these products, it means better customer stickiness.

2. The entire AIGC economy is in its infancy, because right now the paying customers are tech-savvy people who can afford a few tens of dollars in subscription fees every month. To make it truly ubiquitous in every product and every place, the AI model serving cost must be reduced by a factor of thousands. When that is achieved, products like GitHub Copilot might just be free, like Bing search. At that time, every product that is capable of accessing the Inter

Serving Llama-2 7B using llama.cpp with NVIDIA CUDA on Ubuntu 22.04

This blog post is a step-by-step guide to running the Llama-2 7B model using llama.cpp, with NVIDIA CUDA on Ubuntu 22.04. llama.cpp is a C/C++ library for the inference of Llama/Llama-2 models. It has grown insanely popular along with the boom of large language model applications. Throughout this guide, we assume the user home directory (usually /home/username) is the working directory.

Install NVIDIA CUDA

To start, let's install NVIDIA CUDA on Ubuntu 22.04. The steps here are the same as on the CUDA Toolkit download page provided by NVIDIA, except that I deviate a little by installing CUDA 11.8 instead of the latest version. At the time of writing, PyTorch 2.0 stable is released for CUDA 11.8, and I find it convenient to keep my deployed CUDA version in sync with that.

$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
$ sudo apt update
$ sudo apt install cuda-11-8

After
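For reference, the typical way to build llama.cpp with CUDA support at the time was via its cuBLAS make flag; the commands below are a sketch based on llama.cpp's documented build of that era, not text from this post:

$ git clone https://github.com/ggerganov/llama.cpp
$ cd llama.cpp
$ make LLAMA_CUBLAS=1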

A Perplexity Benchmark of llama.cpp

Without further ado, here are the results (explanations and discussions later).

Table 1: Perplexity on the wikitext-2 test set.

Model \ Quantization   q4_0     q4_1     q5_0     q5_1     q8_0     fp16
llama-7b               6.157    6.0915   5.9846   5.948    5.9063   5.68
llama-13b              5.385    5.3608   5.285    5.2702   5.2547   5.09
llama-30b              4.2707   -        -        -        -        4.1
alpaca-30b             4.4521   -        -        -        -        -
llama-2-7b             5.9675   6.0398   5.8328   5.8435   5.7897   -
llama-2-7b-chat        7.7641   7.7853   7.5055   7.5392   7.5014   -
llama-2-13b            5.2172   5.2115   5.1343   5.1289   5.1005   -
llama-2-13b-chat       6.62
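For context, numbers like these presumably come from llama.cpp's bundled perplexity tool; a run along the following lines produces one cell of the table. The model path and test-file name below are placeholders, and exact flags may differ across llama.cpp versions:

$ ./perplexity -m ./models/llama-7b/ggml-model-q4_0.bin -f wiki.test.raw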