digiqmb/agent
DRL project: DigiQ Model Based
License: MIT
Branch: main · Path: agent/model · 6.72 GB
1 contributor · History: 1 commit
LorenzLorentz · Upload model/agent.pth with huggingface_hub · 7118bc2 (verified) · 7 months ago
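
The checkpoint was pushed with huggingface_hub, so it can be fetched the same way. A minimal sketch, assuming a recent huggingface_hub install; the repo id digiqmb/agent and the file path model/agent.pth are taken from this page:

    # Download model/agent.pth (~6.72 GB) from the digiqmb/agent repo.
    from huggingface_hub import hf_hub_download

    ckpt_path = hf_hub_download(repo_id="digiqmb/agent", filename="model/agent.pth")
    print(ckpt_path)  # local cache path of the downloaded checkpoint
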
agent.pth · 6.72 GB · pickle
Detected Pickle imports (58):
"torch.LongStorage"
,
"torch.nn.modules.container.Sequential"
,
"torch.nn.modules.container.ModuleList"
,
"transformers.models.roberta.modeling_roberta.RobertaEncoder"
,
"torch._utils._rebuild_tensor_v2"
,
"digiq.models.agent.CrossLayer"
,
"transformers.models.qwen3.configuration_qwen3.Qwen3Config"
,
"transformers.models.roberta.modeling_roberta.RobertaModel"
,
"torch.nn.modules.activation.ReLU"
,
"transformers.models.roberta.modeling_roberta.RobertaSdpaSelfAttention"
,
"transformers.generation.configuration_utils.GenerationConfig"
,
"digiq.models.encoder.ActionEncoder"
,
"tokenizers.AddedToken"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3MLP"
,
"torch.BFloat16Storage"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3RotaryEmbedding"
,
"transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3ForCausalLM"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3RMSNorm"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3Attention"
,
"__builtin__.set"
,
"transformers.models.roberta.modeling_roberta.RobertaAttention"
,
"digiq.models.encoder.GoalEncoder"
,
"transformers.modeling_rope_utils._compute_default_rope_parameters"
,
"torch._utils._rebuild_parameter"
,
"torch.nn.modules.sparse.Embedding"
,
"transformers.models.roberta.modeling_roberta.RobertaIntermediate"
,
"transformers.models.roberta.modeling_roberta.RobertaSelfOutput"
,
"torch.nn.modules.container.ParameterList"
,
"torch.bfloat16"
,
"collections.OrderedDict"
,
"torch._C._nn.gelu"
,
"transformers.models.roberta.configuration_roberta.RobertaConfig"
,
"transformers.activations.GELUActivation"
,
"digiq.models.agent.Agent"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3DecoderLayer"
,
"transformers.models.roberta.modeling_roberta.RobertaLayer"
,
"tokenizers.Tokenizer"
,
"digiq.models.agent.AttentionBlock"
,
"torch.nn.modules.activation.SiLU"
,
"torch.nn.modules.normalization.LayerNorm"
,
"torch.nn.modules.linear.Linear"
,
"torch.FloatStorage"
,
"digiq.models.agent.CrossAttentionMLPModel"
,
"torch.nn.modules.dropout.Dropout"
,
"digiq.models.agent.MLP"
,
"transformers.models.roberta.modeling_roberta.RobertaOutput"
,
"transformers.models.qwen3.modeling_qwen3.Qwen3Model"
,
"transformers.models.roberta.modeling_roberta.RobertaPooler"
,
"torch.nn.modules.activation.MultiheadAttention"
,
"torch.nn.modules.linear.NonDynamicallyQuantizableLinear"
,
"torch.nn.modules.activation.Tanh"
,
"_codecs.encode"
,
"transformers.models.roberta.modeling_roberta.RobertaEmbeddings"
,
"transformers.models.qwen2.tokenization_qwen2_fast.Qwen2TokenizerFast"
,
"tokenizers.models.Model"
,
"digiq.models.agent.FeatureSelfAttention"
,
"torch.float32"
How to fix it? A .pth file is a pickle archive, and unpickling untrusted data can execute arbitrary code; this is why Hugging Face scans and lists the imports above, and recommends weights-only loading or converting the checkpoint to safetensors.
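
Because this checkpoint pickles whole Python objects (the digiq.models.agent.* and transformers classes listed above), torch.load with weights_only=True will reject it unless every one of those classes is allowlisted. A minimal loading sketch, assuming you trust the file and have the digiq package plus compatible torch, transformers, and tokenizers versions importable:

    import torch

    # The file pickles full objects (e.g. digiq.models.agent.Agent), so a
    # weights-only load will not work out of the box. weights_only=False
    # lets pickle run arbitrary code: only use it on checkpoints you trust.
    agent = torch.load("model/agent.pth", map_location="cpu", weights_only=False)

A stricter route is to keep weights_only=True and allowlist the classes above via torch.serialization.add_safe_globals, or to re-export the tensors to safetensors so that loading involves no pickle at all.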