---
title: Aidapal Space
emoji: 😻
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: false
---
# Aidapal Space
This is a Space to try out the
[Aidapal](https://huggingface.co/AverageBusinessUser/aidapal) model, which
attempts to infer a function name, a comment/description, and suitable variable
names for a function when given its Hex-Rays decompiler output. More information is available in this [blog post](https://www.atredis.com/blog/2024/6/3/how-to-train-your-large-language-model).
## TODO / Issues
* We currently use `transformers`, which de-quantizes the GGUF at load time. This is easy but inefficient (the current approach is sketched below). Can we use llama.cpp or Ollama with ZeroGPU?
* The model often prefixes its output with a markdown `` ```json `` code fence. Is this something I am doing wrong? At present we strip the fence before JSON parsing (see the second sketch below).
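
For the first item, here is a minimal sketch of the de-quantizing approach via `transformers` GGUF support. The `gguf_file` name and the dtype are placeholders, not necessarily what `app.py` actually uses; check the model repo for the real filename.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AverageBusinessUser/aidapal"
gguf_file = "aidapal.gguf"  # placeholder; use the actual GGUF filename from the repo

# transformers de-quantizes the GGUF into a regular PyTorch model when loading
# (requires the `gguf` package). Simple, but it gives up the memory savings of
# keeping the weights quantized, which is why llama.cpp/Ollama would be nicer.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    gguf_file=gguf_file,
    torch_dtype=torch.float16,  # assumed dtype for GPU inference
)
```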
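
For the second item, a sketch of the fence-stripping workaround; the helper name and the example output below are illustrative, not the model's actual schema.

````python
import json
import re


def parse_model_json(text: str) -> dict:
    """Strip a leading/trailing markdown code fence, if present, then parse the JSON body."""
    cleaned = re.sub(r"^\s*```(?:json)?\s*|\s*```\s*$", "", text.strip())
    return json.loads(cleaned)


if __name__ == "__main__":
    raw = '```json\n{"function_name": "example"}\n```'  # illustrative model output
    print(parse_model_json(raw))  # -> {'function_name': 'example'}
````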