---
title: Aidapal Space
emoji: 😻
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: false
---
# Aidapal Space
This is a Space to try out the Aidapal model, which attempts to infer a function name, a comment/description, and suitable variable names when given the Hex-Rays decompiler output for a function. More information is available in this blog post.
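For reference, below is a minimal sketch of how a Space like this might query the model through the `transformers` GGUF loading path. The repo id, GGUF filename, and prompt format are assumptions made for illustration, not details taken from this README.

```python
# Minimal sketch, assuming a GGUF checkpoint hosted on the Hub.
# Repo id, filename, and prompt format below are illustrative guesses.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "AverageBusinessUser/aidapal"   # assumed repo id
GGUF_FILE = "aidapal-8k.Q4_K_M.gguf"       # assumed GGUF filename

# transformers de-quantizes the GGUF weights at load time (see the TODO below).
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, gguf_file=GGUF_FILE)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, gguf_file=GGUF_FILE)

decompiled = """
int __fastcall sub_401000(int a1, int a2)
{
  int v3; // eax
  v3 = a1 + a2;
  return v3 * 2;
}
"""

# The model is expected to suggest a function name, a comment/description,
# and variable names for the decompiled function.
prompt = f"Analyze the following Hex-Rays output:\n{decompiled}\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```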
## TODO
- We currently use `transformers`, which de-quantizes the GGUF. This is easy but inefficient. Can we use llama.cpp or Ollama with ZeroGPU? See the sketch after this list for what the llama.cpp route might look like.
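As a rough illustration of the llama.cpp route mentioned above, this sketch uses `llama-cpp-python` so the GGUF stays quantized instead of being de-quantized at load time. The repo id and filename are assumptions, and whether this setup actually works under ZeroGPU is exactly the open question in the TODO.

```python
# Hedged sketch of serving the quantized GGUF with llama-cpp-python.
# Repo id and filename are assumed, not taken from this README.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="AverageBusinessUser/aidapal",   # assumed repo id
    filename="aidapal-8k.Q4_K_M.gguf",       # assumed GGUF filename
)

# Keep the weights quantized; offload all layers to the GPU if one is available.
llm = Llama(model_path=gguf_path, n_ctx=8192, n_gpu_layers=-1)

decompiled = "int __fastcall sub_401000(int a1, int a2) { return (a1 + a2) * 2; }"
out = llm(f"Analyze the following Hex-Rays output:\n{decompiled}\n", max_tokens=256)
print(out["choices"][0]["text"])
```

One caveat: ZeroGPU allocates GPUs around PyTorch workloads via the `spaces` decorator, so it is not obvious that llama.cpp's own CUDA backend would benefit from it; that trade-off is part of the open question above.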