import { useEffect, useState, useRef } from "react";

import Chat from "./components/Chat";
import ArrowRightIcon from "./components/icons/ArrowRightIcon";
import StopIcon from "./components/icons/StopIcon";
import Progress from "./components/Progress";
import ImageIcon from "./components/icons/ImageIcon";
import ImagePreview from "./components/ImagePreview";

const IS_WEBGPU_AVAILABLE = !!navigator.gpu;
const STICKY_SCROLL_THRESHOLD = 120;
const EXAMPLES = [
  {
    display: "Generate an image of a cute baby fox.",
    prompt:
      "/imagine A cute and adorable baby fox with big brown eyes, autumn leaves in the background enchanting, immortal, fluffy, shiny mane, Petals, fairyism, unreal engine 5 and Octane Render, highly detailed, photorealistic, cinematic, natural colors.",
  },
  {
    prompt: "Convert the formula into latex code.",
    image:
      "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/quadratic_formula.png",
  },
  { prompt: "What is the difference between AI and ML?" },
  { prompt: "Write python code to compute the nth fibonacci number." },
];

function App() {
  // Create a reference to the worker object.
  const worker = useRef(null);

  const textareaRef = useRef(null);
  const chatContainerRef = useRef(null);
  const imageUploadRef = useRef(null);

  // Model loading and progress
  const [status, setStatus] = useState(null);
  const [error, setError] = useState(null);
  const [loadingMessage, setLoadingMessage] = useState("");
  const [progressItems, setProgressItems] = useState([]);
  const [isRunning, setIsRunning] = useState(false);

  // Inputs and outputs
  const [input, setInput] = useState("");
  const [image, setImage] = useState(null);
  const [messages, setMessages] = useState([]);
  const [tps, setTps] = useState(null);
  const [numTokens, setNumTokens] = useState(null);
  const [imageProgress, setImageProgress] = useState(null);
  const [imageGenerationTime, setImageGenerationTime] = useState(null);

  function onEnter(message, img) {
    setMessages((prev) => [
      ...prev,
      { role: "user", content: message, image: img ?? image },
    ]);
    setTps(null);
    setIsRunning(true);
    setInput("");
    setImage(null);
    setNumTokens(null);
    setImageProgress(null);
    setImageGenerationTime(null);
  }

  function onInterrupt() {
    // NOTE: We do not set isRunning to false here because the worker
    // will send a 'complete' message when it is done.
    worker.current.postMessage({ type: "interrupt" });
  }

  function resizeInput() {
    if (!textareaRef.current) return;

    const target = textareaRef.current;
    target.style.height = "auto";
    const newHeight = Math.min(Math.max(target.scrollHeight, 24), 200);
    target.style.height = `${newHeight}px`;
  }

  useEffect(() => {
    resizeInput();
  }, [input]);

  // We use the `useEffect` hook to set up the worker as soon as the `App` component is mounted.
  useEffect(() => {
    // Create the worker if it does not yet exist.
    if (!worker.current) {
      worker.current = new Worker(new URL("./worker.js", import.meta.url), {
        type: "module",
      });
      worker.current.postMessage({ type: "check" }); // Do a feature check
    }

    // Create a callback function for messages from the worker thread.
    const onMessageReceived = (e) => {
      switch (e.data.status) {
        // WebGPU feature checking
        case "success":
          setStatus("idle");
          break;

        case "error":
          setError(e.data.data);
          break;

        case "loading":
          // Model file start load: add a new progress item to the list.
          setStatus("loading");
          setLoadingMessage(e.data.data);
          break;

        case "initiate":
          setProgressItems((prev) => [...prev, e.data]);
          break;

        case "progress":
          // Model file progress: update one of the progress items.
          setProgressItems((prev) =>
            prev.map((item) => {
              if (item.file === e.data.file) {
                return { ...item, ...e.data };
              }
              return item;
            }),
          );
          break;

        case "done":
          // Model file loaded: remove the progress item from the list.
          setProgressItems((prev) =>
            prev.filter((item) => item.file !== e.data.file),
          );
          break;

        case "ready":
          // Pipeline ready: the worker is ready to accept messages.
          setStatus("ready");
          break;

        case "start":
          // Start generation: add a new, empty assistant message.
          setMessages((prev) => [...prev, { role: "assistant", content: "" }]);
          break;

        case "text-update": {
          // Generation update: append the new output to the last message.
          const { output, tps, numTokens } = e.data;
          setTps(tps);
          setNumTokens(numTokens);
          setMessages((prev) => {
            const cloned = [...prev];
            const last = cloned.at(-1);
            cloned[cloned.length - 1] = {
              ...last,
              content: last.content + output,
            };
            return cloned;
          });
          break;
        }

        case "image-update": {
          const { blob, progress, time } = e.data;
          if (blob) {
            // Attach the generated image to the last message.
            const url = URL.createObjectURL(blob);
            setMessages((prev) => {
              const cloned = [...prev];
              const last = cloned.at(-1);
              cloned[cloned.length - 1] = { ...last, image: url };
              return cloned;
            });
          } else {
            setImageProgress(progress);
            setImageGenerationTime(time);
          }
          break;
        }

        case "complete":
          // Generation complete: re-enable the "Generate" button.
          setIsRunning(false);
          break;
      }
    };

    const onErrorReceived = (e) => {
      console.error("Worker error:", e);
    };

    // Attach the callback function as an event listener.
    worker.current.addEventListener("message", onMessageReceived);
    worker.current.addEventListener("error", onErrorReceived);

    // Define a cleanup function for when the component is unmounted.
    return () => {
      worker.current.removeEventListener("message", onMessageReceived);
      worker.current.removeEventListener("error", onErrorReceived);
    };
  }, []);

  // Send the messages to the worker thread whenever the `messages` state changes.
  useEffect(() => {
    if (messages.filter((x) => x.role === "user").length === 0) {
      // No user messages yet: do nothing.
      return;
    }
    if (messages.at(-1).role === "assistant") {
      // Do not update if the last message is from the assistant.
      return;
    }
    setTps(null);
    worker.current.postMessage({ type: "generate", data: messages });
  }, [messages, isRunning]);

  // Keep the chat scrolled to the bottom while generating, unless the user
  // has scrolled up by more than the sticky threshold.
  useEffect(() => {
    if (!chatContainerRef.current || !isRunning) return;

    const element = chatContainerRef.current;
    if (
      element.scrollHeight - element.scrollTop - element.clientHeight <
      STICKY_SCROLL_THRESHOLD
    ) {
      element.scrollTop = element.scrollHeight;
    }
  }, [messages, isRunning]);

  return IS_WEBGPU_AVAILABLE ? (
    <div>
      {/* NOTE: The original markup (layout containers, class names, and link
          targets) was lost in extraction; the structure below is a simplified
          reconstruction that preserves the surviving text and logic. The
          `href` values are assumptions. */}
      {status === null && (
        <div>
          <p>
            You are about to load{" "}
            <a
              href="https://huggingface.co/deepseek-ai/Janus-1.3B"
              target="_blank"
              rel="noreferrer"
            >
              Janus-1.3B
            </a>
            , a multimodal vision-language model that is optimized for
            inference on the web. Everything runs 100% locally in your browser
            with{" "}
            <a
              href="https://github.com/huggingface/transformers.js"
              target="_blank"
              rel="noreferrer"
            >
              🤗 Transformers.js
            </a>{" "}
            and ONNX Runtime Web, meaning no data is sent to a server. Once
            the model has loaded, it can even be used offline. The source code
            for the demo can be found on{" "}
            <a
              href="https://github.com/huggingface/transformers.js-examples"
              target="_blank"
              rel="noreferrer"
            >
              GitHub
            </a>
            .
          </p>
          {error && (
            <div>
              <p>Unable to load model due to the following error:</p>
              <p>{error}</p>
            </div>
          )}
        </div>
      )}
      {status === "loading" && (
        <div>
          <p>{loadingMessage}</p>
          {progressItems.map(({ file, progress, total }, i) => (
            <Progress key={i} text={file} percentage={progress} total={total} />
          ))}
        </div>
      )}
      {messages.length > 0 && (
        <>
          {tps ? (
            <>
              {!isRunning && (
                <span>
                  Generated {numTokens} tokens in{" "}
                  {(numTokens / tps).toFixed(2)} seconds (
                </span>
              )}
              <span>{tps.toFixed(2)} tokens/second</span>
              {!isRunning && <span>).</span>}
            </>
          ) : (
            imageProgress && (
              <>
                {isRunning ? (
                  <span>
                    Generating image... ({(imageProgress * 100).toFixed(2)}%)
                  </span>
                ) : (
                  <span>
                    Generated image in{" "}
                    {(imageGenerationTime / 1000).toFixed(2)} seconds.
                  </span>
                )}
              </>
            )
          )}
          {!isRunning && (
            <button onClick={() => setMessages([])}>Reset</button>
          )}
        </>
      )}
      <p>Disclaimer: Generated content may be inaccurate or false.</p>
    </div>
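// The "initiate" / "progress" / "done" bookkeeping in `onMessageReceived`
// above can be exercised in isolation. The helper below is a hypothetical
// pure-function sketch of that logic (it is not part of the app and is not
// imported anywhere); it exists only to make the progress-tracking behavior
// easy to unit-test outside React.
function applyProgressMessage(items, msg) {
  switch (msg.status) {
    case "initiate":
      // A new model file started downloading: start tracking it.
      return [...items, msg];
    case "progress":
      // Merge the latest progress data into the matching file's entry.
      return items.map((item) =>
        item.file === msg.file ? { ...item, ...msg } : item,
      );
    case "done":
      // The file finished downloading: stop tracking it.
      return items.filter((item) => item.file !== msg.file);
    default:
      // Other statuses do not affect the progress list.
      return items;
  }
}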