The RWKV-4 "Raven"-series models are PyTorch checkpoints of the RWKV language model, fine-tuned on datasets including Alpaca, CodeAlpaca, Guanaco, GPT4All, ShareGPT, and others. They range in size from 1.5 billion to 14 billion parameters.

These models are designed for text generation and produce coherent, instruction-following responses. Because they were fine-tuned on a diverse range of data, they are suitable for a variety of tasks and domains, and they support inference on both CUDA and CPU.
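As a minimal sketch of running a Raven checkpoint on either platform, the snippet below assumes the `rwkv` inference package from the ChatRWKV project (`pip install rwkv`); the checkpoint and tokenizer paths are hypothetical placeholders, and the strategy strings (`"cuda fp16"`, `"cpu fp32"`) follow that package's conventions.

```python
def pick_strategy(use_cuda: bool) -> str:
    """Choose an inference strategy string for the rwkv package:
    half precision on GPU, full precision on CPU. In practice you would
    pass torch.cuda.is_available() as the flag."""
    return "cuda fp16" if use_cuda else "cpu fp32"

def load_raven(checkpoint_path: str, tokenizer_path: str, use_cuda: bool):
    """Load a Raven checkpoint and return a text-generation pipeline.
    Paths are placeholders; requires downloaded model weights."""
    # Imported lazily so the rest of the module works without the
    # optional `rwkv` dependency installed.
    from rwkv.model import RWKV
    from rwkv.utils import PIPELINE

    model = RWKV(model=checkpoint_path, strategy=pick_strategy(use_cuda))
    return PIPELINE(model, tokenizer_path)

# Usage (hypothetical file names, requires downloaded weights):
# pipe = load_raven("RWKV-4-Raven-7B-v12-Eng", "20B_tokenizer.json", use_cuda=True)
# print(pipe.generate("Q: What is RWKV?\n\nA:", token_count=100))
```

The same checkpoint file serves both platforms; only the strategy string changes, so switching between GPU and CPU inference requires no re-conversion of the weights.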

There are different versions of the RWKV-4 "Raven"-series models, each trained with a different mix of languages. For example, the RWKV-4-Raven-Eng variants are trained primarily on English data, while the RWKV-4-Raven-EngAndMore variants include a mix of English, Chinese, Japanese, and other multilingual data.

The models are released under the Apache 2.0 license, which permits free use and modification.

---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---

# RWKV-4 Raven-series Models

UPDATE: Try RWKV-4-World (https://huggingface.co/BlinkDL/rwkv-4-world) for genera…

