---
license: apache-2.0
base_model:
- DavidAU/TNG-MOE1-128R-2ep
- DavidAU/Qwen3-Deckard-Large-Almost-Human-6B-II
datasets:
- DavidAU/TNG-ALL
- DavidAU/PKD-all
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all use cases
- Jan-V1
- horror
- science fiction
- fantasy
- Star Trek
- The Next Generation
- TNG
- Philip K. Dick
- Deckard
- finetune
- thinking
- reasoning
- unsloth
- moe
- mixture of experts
- merge
library_name: transformers
---

<small><font color="red">Caution:</font> The training for this model is intense enough to alter "real world facts", bringing them partly into the ST/TNG universe.</small>

<h2>Qwen3-2x6B-TNG-Deckard-Alpha-III-12B</h2>

<img src="battle-tng.gif" style="float:right; width:300px; height:300px; padding:10px;">

This repo contains the full-precision source code, in "safetensors" format, to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.

This model is specifically for TNG / Star Trek, science fiction and story generation (all genres), but it also handles coding and general tasks
(model #1)

AND

all things Philip K. Dick (Almost Human version) (model #2).

These two models have been "moe'd" together in a MOE (mixture of experts) config - in this case 2x6B, for 12B parameters.
With compression this creates a 10.4B model - all the power of 12B in a 10.4B package.

This MOE drastically upscales the components of both models.

This model can also be used for role play.

Example generations are at the bottom of this page.

This is a far stronger fine-tune, taking you deeper into the ST-TNG universe, than version 1 and the 32-bit-precision version of II.

The Star Trek: The Next Generation and Deckard models were created using Unsloth, with in-house generated datasets,
and implanted/tuned on a 6B base model (a 4B model + Brainstorm 20x adapter):

https://huggingface.co/DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x

Information on the original Jan V1 4B follows below, then the Brainstorm 20x adapter (by DavidAU), and then a complete help
section for running LLM / AI models.

This model has 55 layers and 667 tensors [moe config].

The Brainstorm adapter improves creativity, code generation, and unique code-solving abilities.

The fine-tuning alters the prose generation and general creative abilities toward the "TNG / Deckard-PKD" universes.

The fine-tuning (using Unsloth for Win 11) also affects the Brainstorm adapter.

The model's thinking / reasoning is not affected - it remains fully intact.

For creative uses: increases depth, detail and the general sense of "there" in the prose.

A creative example is at the bottom of the page.

This model requires:
- Jinja (embedded) or ChatML template
- Max context of 256k.

Settings used for testing (suggested):
- Temp .3 to .7 (but .8 to 1.5 for creative)
- Rep pen 1.05 to 1.1
- Top-p .8, min-p .05
- Top-k 20
- Min context of 8k for thinking / output.
- No system prompt.

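The requirements and suggested settings above can be sketched in code. A minimal illustration (the prompt-builder and settings dict are illustrative, not an official API; the values come from the list above):

```python
# Minimal sketch: a ChatML prompt (no system prompt, per the testing settings)
# plus the suggested sampler values from this card. Names here are illustrative.

def chatml_prompt(user_message: str) -> str:
    """Wrap a single user turn in ChatML markers and open the assistant turn."""
    return (
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Suggested settings (mid-range values; use temp 0.8-1.5 for creative work).
SUGGESTED_SETTINGS = {
    "temperature": 0.5,      # .3 to .7 general; .8 to 1.5 creative
    "repeat_penalty": 1.05,  # 1.05 to 1.1
    "top_p": 0.8,
    "min_p": 0.05,
    "top_k": 20,
    "n_ctx": 8192,           # minimum 8k context for thinking / output
}

prompt = chatml_prompt("Summarize the plot of 'The Inner Light'.")
```

Most local runners (llama.cpp, KoboldCpp, LM Studio) accept these sampler names or close variants of them.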
This model will respond well to both detailed instructions and step-by-step refinement and additions to code.

Likewise for creative use cases.

Here is a review of this model's operations:

https://www.linkedin.com/posts/gchesler_nightmediaqwen3-jan-v1-256k-ctx-6b-brainstorm20x-q6-activity-7364301711529709570-CiAn

As this is an instruct model, it will also benefit from a detailed system prompt.

For simpler coding problems, lower quants will work well; but for complex / multi-step problem solving, Q6 or Q8 is suggested.

---

<B>QUANTS:</B>

---

GGUF? GGUF Imatrix? Other?

Special thanks to Team Mradermacher, Team Nightmedia and other quanters!

See under "model tree" (upper right) and click on "quantizations".

New quants will automatically appear.

---

<h2>About Jan V1</h2>

---

# Jan-v1: Advanced Agentic Language Model

[](https://github.com/menloresearch/deep-research)
[](https://opensource.org/licenses/Apache-2.0)
[](https://jan.ai/)

<!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. -->
<!--  -->

## Overview

**Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Based on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling.

Jan-v1 uses the [Qwen3-4B-thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks.

## Performance

### Question Answering (SimpleQA)

For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy.

*The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*

### Chat Benchmarks

These benchmarks evaluate the model's conversational and instruction-following capabilities.

## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment

**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-4B \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```

**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
```

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```

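Either server command above exposes an OpenAI-compatible endpoint on port 1234, so the recommended parameters can be sent in a standard chat-completions request. A minimal sketch using only the Python standard library (the endpoint path and model name assume the server commands above; adjust them to your setup):

```python
import json
import urllib.request

# Recommended parameters from the table above, as a chat-completions payload.
payload = {
    "model": "janhq/Jan-v1-4B",
    "messages": [{"role": "user", "content": "List three uses of a tricorder."}],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,   # accepted as an extension by the vLLM / llama.cpp servers
    "min_p": 0.0,
    "max_tokens": 2048,
}

def send(payload: dict, url: str = "http://localhost:1234/v1/chat/completions") -> dict:
    """POST the payload to the local server and return the parsed JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# reply = send(payload)  # requires a running server on port 1234
# print(reply["choices"][0]["message"]["content"])
```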
## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## (*) Note

By default there is a system prompt in the chat template; this ensures the model has the same performance as in the benchmark results. You can also use the vanilla chat template, without a system prompt, from the file [chat_template_raw.jinja](https://huggingface.co/janhq/Jan-v1-4B/blob/main/chat_template_raw.jinja).

See more here:

https://huggingface.co/janhq/Jan-v1-4B-GGUF

---

<H2>What is Brainstorm?</H2>

---

<B>Brainstorm 20x</B>

The BRAINSTORM process was developed by David_AU.

Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper: Progressive LLaMA with Block Expansion</a>.

However, I went in a completely different direction from what was outlined in this paper.

What is "Brainstorm"?

The reasoning center of an LLM is taken apart, reassembled, and expanded.

In this case, for this model: 20 times.

Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes".
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.

This will also enhance any creative use case of any kind, including "brainstorming", creative art forms and similar uses.

Here are some of the enhancements this process brings to the model's performance:

- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches".
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Similes and metaphors are stronger and better connected to the prose, story, and characters.
- The sense of "there" / being in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs. a normal model: it will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide, the more strongly the model will respond.
- Depending on the model, its "voice" may be more "human" than the original model's "voice".

Other "lab" observations:

- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source-files level. The model can then be quanted as a GGUF, EXL2, AWQ, etc.

---

For more information / other Qwen/Mistral coders / additional settings see:

[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]

---

<H2>Help, Adjustments, Samplers, Parameters and More</H2>

---

<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts

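For llama.cpp specifically, one common approach is a GGUF metadata override at load time. A sketch only: the `--override-kv` flag exists in llama.cpp, but the metadata key name and the quant filename below are assumptions; verify the key against what llama-server prints when loading the model, and see the linked document for the authoritative method.

```shell
# Hypothetical: force the number of active experts via a metadata override.
# The key "qwen3moe.expert_used_count" and the filename are assumptions;
# check the metadata llama-server prints at model load.
llama-server --model Qwen3-2x6B-TNG-Deckard-Alpha-III-12B-Q6_K.gguf \
    --jinja \
    --override-kv qwen3moe.expert_used_count=int:2
```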
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp", "oobabooga/text-generation-webui" or "Silly Tavern":

Set the "Smoothing_factor" to 1.5

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: in Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").

- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.

<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 1" model.

For all settings used for this model (including specifics for its "class"), all parameters and samplers used for generation, example generations, and an advanced settings guide - which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay - please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

---

<H2>Examples, Q4_K_S, Temp .8</H2>

These will be low to mid-range quality; expect better at higher quants / imatrix quants.

Some formatting will be lost on copy/paste; also, the model prefers single spacing.

---