💾 Archived View for station.martinrue.com › bengunn › 9afda7095b114e718b23bfc7ea600db3 captured on 2024-07-09 at 03:09:14. Gemini links have been rewritten to link to archived content


👽 bengunn

Llamafile for testing LLMs locally

1. Download llava-v1.5-7b-q4-server.llamafile (3.97 GB).

2. Open your computer's terminal.

3. If you're using macOS, Linux, or BSD, you'll need to grant permission for your computer to execute this new file: chmod +x llava-v1.5-7b-q4-server.llamafile

4. If you're on Windows, rename the file by adding ".exe" on the end.

5. Run the llamafile, e.g.: ./llava-v1.5-7b-q4-server.llamafile

6. Your browser should open automatically and display a chat interface. (If it doesn't, just open your browser and point it at http://localhost:8080.)
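The Unix steps above (3 and 5) can be sketched as a shell session. Since the real llamafile is a 3.97 GB download, a tiny placeholder script stands in for it here purely for illustration; the chmod-then-run sequence is identical for the real file.

```shell
# Stand-in walkthrough of steps 3 and 5 (macOS/Linux/BSD).
# The placeholder script below is an assumption for illustration only;
# with the real download, skip the printf and use the same file name.
f=llava-v1.5-7b-q4-server.llamafile
printf '#!/bin/sh\necho "serving chat UI on http://localhost:8080"\n' > "$f"
chmod +x "$f"          # step 3: make the downloaded file executable
out=$(./"$f")          # step 5: run it (the real file starts a local server)
echo "$out"
rm "$f"                # clean up the placeholder
```

With the real llamafile, the final command keeps running and serves the chat interface until you stop it with Ctrl-C.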

Sources: https://github.com/Mozilla-Ocho/llamafile

pin: victorhck (gemini://station.martinrue.com/victorhck)

7 months ago · 👍 calimero

Links

http://localhost:8080

https://github.com/Mozilla-Ocho/llamafile

gemini://station.martinrue.com/victorhck
