id
string (4–10 chars)
| text
string (4–2.14M chars)
| source
string (2 classes)
| created
timestamp[s] (2001-05-16 21:05:09 – 2025-01-01 03:38:30)
| added
timestamp (2025-04-01 04:05:38 – 2025-04-01 07:14:06)
| metadata
dict |
---|---|---|---|---|---
784834515
|
Swiftmailer 6.2.5 serialize for asyncSending doesn't work anymore
Q
A
Bug report?
yes
Feature request?
no
RFC?
no
How used?
Standalone
Swiftmailer version
6.2.5
PHP version
7.4.9
Observed behaviour
After the Swiftmailer 6.2.5 update, the PHP function serialize() can no longer be used to write the mail to a text file for sending later by unserializing the file's contents.
Expected behaviour
__sleep() and __wakeup() shouldn't throw new \BadMethodCallException('Cannot serialize '.__CLASS__);
Solution
Revert the change, or suggest/implement another solution for serializing an email with Swiftmailer and sending it later asynchronously (mail queue).
/cc @nicolas-grekas @jderusse
@markusramsak could you please tell us the name of the class that throws the exception?
vendor/swiftmailer/swiftmailer/lib/classes/Swift/Transport/AbstractSmtpTransport.php
and
vendor/swiftmailer/swiftmailer/lib/classes/Swift/ByteStream/TemporaryFileByteStream.php
Hmm, I don't see why PHP is trying to serialize a class that belongs to the transport, when it should only serialize the message.
Same for TemporaryFileByteStream: this class creates a temporary file and deletes it on __destruct, so serializing it does not make sense.
Could you please provide me a reproducer?
I serialize the whole mailer class, with transport and message, so I don't need to know anything else when unserializing the Swiftmailer object.
class ImdSwiftMail
{
protected $_sender = 'default';
protected $_config = [];
protected $_transportConfig = [];
protected $_layout = 'default';
protected $_viewVars = [];
protected $_client = 'localhost';
protected array $_dkimConfig = [];
protected ?Swift_Mailer $_mailer = null;
protected ?Swift_Message $_message = null;
protected array $_ignoreHeaders = [];
/**
* @param string $template Email template
* @param string $subject Subject
* @param string|array $to Recipient(s)
* @param array $params sender|viewVars|layout|client|cc|bcc
*
* @return \App\Mailer\ImdSwiftMail
*/
public static function init(string $template, string $subject, $to, $params = [])
{
return new ImdSwiftMail($template, $subject, $to, $params);
}
public function __construct(string $template, string $subject, $to, $params = [])
{
TableRegistry::getTableLocator()->get('MailAccounts')->setMailConfig();
$this->_sender = 'default';
if (!empty($params['sender'])) {
$this->_sender = $params['sender'];
}
$this->_config = Mailer::getConfig($this->_sender);
$this->_transportConfig = TransportFactory::getConfig($this->_config['transport']);
$this->_viewVars = [];
if (!empty($params['viewVars'])) {
$this->_viewVars = $params['viewVars'];
}
$this->_layout = 'default';
if (!empty($params['layout'])) {
$this->_layout = $params['layout'];
}
$this->_client = 'localhost';
if (!IS_LOCAL) {
if (isset($this->_transportConfig['client'])) {
$this->_client = $this->_transportConfig['client'];
} else {
$httpHost = env('HTTP_HOST');
if ($httpHost) {
[$this->_client] = explode(':', $httpHost);
}
}
}
$viewClass = 'App\View\AppView';
$templateFolderPath = 'email' . DS;
$helpers = ['Html', 'HtmlEmail'];
$encryption = $this->_transportConfig['tls'] ? 'tls' : null;
if (strpos($this->_transportConfig['host'], 'ssl://') === 0) {
$encryption = 'ssl';
$this->_transportConfig['host'] = substr($this->_transportConfig['host'], strlen('ssl://'));
}
$transport = (new Swift_SmtpTransport())
->setHost($this->_transportConfig['host'])
->setPort($this->_transportConfig['port'])
->setEncryption($encryption)
->setTimeout($this->_transportConfig['timeout'])
->setLocalDomain($this->_client)
->setUsername($this->_transportConfig['username'])
->setPassword($this->_transportConfig['password']);
$viewBuilder = new ViewBuilder();
$viewBuilder->setClassName($viewClass)
->setHelpers($helpers)
->setTemplate($template)
->setLayout($this->_layout)
->setLayoutPath($templateFolderPath . 'html')
->setTemplatePath($templateFolderPath . 'html');
$view = $viewBuilder->build($this->_viewVars);
$html = $view->render();
$html = trimWhitespace(str_replace(["\r\n", "\r"], "\n", $html));
$viewBuilder = new ViewBuilder();
$viewBuilder->setClassName($viewClass)
->setHelpers($helpers)
->setTemplate($template)
->setLayout($this->_layout)
->setLayoutPath($templateFolderPath . 'text')
->setTemplatePath($templateFolderPath . 'text');
$view = $viewBuilder->build($this->_viewVars);
$plain = $view->render();
$plain = trimWhitespace(str_replace(["\r\n", "\r"], "\n", $plain));
$this->_mailer = new Swift_Mailer($transport);
$this->_message = new Swift_Message();
$this->_message
->setSubject($subject)
->setFrom($this->_config['from'])
->setTo($to)
->setBoundary('wowDAS-Mail-' . date('Y-m-d_H_i_s'))
->setBody($html, 'text/html')
->addPart($plain, 'text/plain');
if (!empty($params['messageId'])) {
$this->setMessageId($params['messageId']);
}
if (!empty($params['cc'])) {
$this->setCc($params['cc']);
}
if (!empty($params['bcc'])) {
$this->setBcc($params['bcc']);
}
$this->_ignoreHeaders = ['Subject', 'To', 'Content-Type'];
if (!empty($params['headers'])) {
foreach ($params['headers'] as $name => $value) {
$this->addTextHeader($name, $value);
}
}
if (isset($this->_config['headers'])) {
foreach ($this->_config['headers'] as $name => $value) {
if (empty($params['headers']) or !isset($params['headers'][$name])) {
// Header not yet added
$this->addTextHeader($name, $value);
}
}
}
if (!empty($params['requestReadReceipt'])) {
$this->_message->setReadReceiptTo($this->_config['from']);
$this->addTextHeader('X-Confirm-reading-to', key($this->_config['from']));
$this->addTextHeader('Return-Receipt-To', key($this->_config['from']));
}
if (!empty($params['signMessage'])) {
$this->sign();
}
}
/**
* @return $this
* @throws \Swift_SwiftException
*/
public function sign()
{
$domain = trimWhitespace(file_get_contents(PRIVATE_PATH . 'dkim' . DS . 'current' . DS . 'domain.txt'));
$privateKey = trimWhitespace(file_get_contents(PRIVATE_PATH . 'dkim' . DS . 'current' . DS . 'private.key'));
$selector = trimWhitespace(file_get_contents(PRIVATE_PATH . 'dkim' . DS . 'current' . DS . 'selector.txt'));
$this->_dkimConfig = [
'domain' => $domain,
'privateKey' => $privateKey,
'selector' => $selector,
];
$signer = (new Swift_Signers_DKIMSigner($this->_dkimConfig['privateKey'], $this->_dkimConfig['domain'], $this->_dkimConfig['selector']))
->setHashAlgorithm('rsa-sha256')
->setBodyCanon('relaxed')
->setHeaderCanon('relaxed');
foreach ($this->_ignoreHeaders as $ignoreHeader) {
$signer->ignoreHeader($ignoreHeader);
}
$this->_message->attachSigner($signer);
return $this;
}
public function sendAsync()
{
(new ImdSwiftMailQueue())->addMail($this);
ImdConfig::callAsyncCronjob('process_mailqueue');
}
/**
* @return int
* @throws \Exception
*/
public function send()
{
$failedRecipients = [];
$sent = $this->_mailer->send($this->_message, $failedRecipients);
if ($sent === 0) {
imdLogDebug($failedRecipients);
throw new Exception('E-Mail konnte nicht versendet werden.');
}
if (!empty($failedRecipients)) {
imdLogDebug($failedRecipients);
throw new Exception('E-Mail konnte nicht an alle erfolgreich versendet werden.');
}
return $sent;
}
}
Thank you for the reproducer.
You have the issue because the class contains the Mailer (which contains the transport).
You should not serialize this instance; it is not needed (the class does not contain contextual information), but most of all, by serializing it you'll get unexpected behavior (see TemporaryFileByteStream, which deletes generated files on __destruct).
Instead, I recommend injecting the Mailer into the service in charge of unserializing the contextual data and sending the email.
IMHO the exception \BadMethodCallException('Cannot serialize '.__CLASS__) is legitimate here. You cannot/shouldn't serialize this class.
I'm voting for closing the issue.
But if needed (if we want to avoid the BC break), I can provide a patch that triggers a deprecation (without compromising security).
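For illustration, here is a minimal sketch of that recommendation, not code from either party: serialize only the Swift_Message, and let the queue worker that unserializes it own the transport and Mailer. The spool path, host, and credentials are placeholders.

```php
<?php
// Enqueue side: persist only the message, never the mailer/transport.
$payload = serialize($message); // $message is a Swift_Message
file_put_contents('/var/spool/mailqueue/mail-' . uniqid() . '.ser', $payload);

// Worker side (e.g. the cronjob): build the transport fresh on each run.
$transport = (new Swift_SmtpTransport('smtp.example.com', 587, 'tls'))
    ->setUsername('user')
    ->setPassword('secret');
$mailer = new Swift_Mailer($transport);

foreach (glob('/var/spool/mailqueue/*.ser') as $file) {
    $message = unserialize(file_get_contents($file));
    $mailer->send($message);
    unlink($file);
}
```

This sidesteps the BadMethodCallException entirely, because only the message (which Swiftmailer still allows to be serialized) ever crosses the serialization boundary.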
@jderusse I know that serializing only the message would work, but this is a regression from the previous version, where it was possible. I serialize the object and unserialize it some seconds or minutes later to send the mail via cronjob instead of directly, purely for performance reasons.
My updated mailer now works with the newest version:
namespace App\Mailer;
use Cake\Http\Exception\BadRequestException;
use Cake\Mailer\Mailer;
use Cake\Mailer\TransportFactory;
use Cake\ORM\TableRegistry;
use Cake\View\ViewBuilder;
use Exception;
use Swift_Attachment;
use Swift_Mailer;
use Swift_Message;
use Swift_Signers_DKIMSigner;
use Swift_SmtpTransport;
class ImdSwiftMail
{
protected string $_sender = 'default';
protected array $_config = [];
protected array $_transportConfig = [];
protected string $_encryption = '';
protected string $_layout = 'default';
protected array $_viewVars = [];
protected string $_client = 'localhost';
protected array $_ignoreHeaders = [];
protected ?Swift_Message $_message = null;
/**
* @param string $template Email template
* @param string $subject Subject
* @param string|array $to Recipient(s)
* @param array $params sender|viewVars|layout|client|cc|bcc
*
* @return \App\Mailer\ImdSwiftMail
* @throws \Swift_RfcComplianceException
* @throws \Swift_SwiftException
*/
public static function init(string $template, string $subject, $to, array $params = []) : ImdSwiftMail
{
return new ImdSwiftMail($template, $subject, $to, $params);
}
/**
* ImdSwiftMail constructor.
*
* @param string $template Email template
* @param string $subject Subject
* @param string|array $to Recipient(s)
* @param array $params sender|viewVars|layout|client|cc|bcc
*
* @throws \Swift_RfcComplianceException
* @throws \Swift_SwiftException
*/
public function __construct(string $template, string $subject, $to, array $params = [])
{
$this->_sender = 'default';
if (!empty($params['sender'])) {
$this->_sender = $params['sender'];
}
$config = Mailer::getConfig($this->_sender);
if (!is_array($config)) {
throw new BadRequestException('E-Mail-Konfiguration ' . $this->_sender . ' nicht gefunden.');
}
$this->_config = $config;
$transportConfig = TransportFactory::getConfig($this->_config['transport']);
$this->_transportConfig = $transportConfig;
$this->_viewVars = [];
if (!empty($params['viewVars'])) {
$this->_viewVars = $params['viewVars'];
}
$this->_layout = 'default';
if (!empty($params['layout'])) {
$this->_layout = $params['layout'];
}
$this->_client = 'localhost';
if (!IS_LOCAL) {
if (isset($this->_transportConfig['client'])) {
$this->_client = $this->_transportConfig['client'];
} else {
$httpHost = env('HTTP_HOST');
if ($httpHost) {
[$this->_client] = explode(':', $httpHost);
}
}
}
$viewClass = 'App\View\AppView';
$templateFolderPath = 'email' . DS;
$helpers = ['Html', 'HtmlEmail'];
$this->_encryption = $this->_transportConfig['tls'] ? 'tls' : '';
if (strpos($this->_transportConfig['host'], 'ssl://') === 0) {
$this->_encryption = 'ssl';
$this->_transportConfig['host'] = substr($this->_transportConfig['host'], strlen('ssl://'));
}
$viewBuilder = new ViewBuilder();
$viewBuilder->setClassName($viewClass)
->setHelpers($helpers)
->setTemplate($template)
->setLayout($this->_layout)
->setLayoutPath($templateFolderPath . 'html')
->setTemplatePath($templateFolderPath . 'html');
$view = $viewBuilder->build($this->_viewVars);
$html = $view->render();
$html = trimWhitespace(str_replace(["\r\n", "\r"], "\n", $html));
$viewBuilder = new ViewBuilder();
$viewBuilder->setClassName($viewClass)
->setHelpers($helpers)
->setTemplate($template)
->setLayout($this->_layout)
->setLayoutPath($templateFolderPath . 'text')
->setTemplatePath($templateFolderPath . 'text');
$view = $viewBuilder->build($this->_viewVars);
$plain = $view->render();
$plain = trimWhitespace(str_replace(["\r\n", "\r"], "\n", $plain));
$this->_message = new Swift_Message();
$this->_message
->setSubject($subject)
->setFrom($this->_config['from'])
->setTo($to)
->setBoundary('Mail-' . date('Y-m-d_H_i_s'))
->setBody($html, 'text/html')
->addPart($plain, 'text/plain');
if (!empty($params['messageId'])) {
$this->setMessageId($params['messageId']);
}
if (!empty($params['cc'])) {
$this->setCc($params['cc']);
}
if (!empty($params['bcc'])) {
$this->setBcc($params['bcc']);
}
$this->_ignoreHeaders = ['Subject', 'To', 'Content-Type'];
if (!empty($params['headers'])) {
foreach ($params['headers'] as $name => $value) {
$this->addTextHeader($name, $value);
}
}
if (isset($this->_config['headers'])) {
foreach ($this->_config['headers'] as $name => $value) {
if (empty($params['headers']) or !isset($params['headers'][$name])) {
// Header not yet added
$this->addTextHeader($name, $value);
}
}
}
}
public function sendAsync() : void
{
(new ImdSwiftMailQueue())->addMail($this);
callAsyncCronjob('process_mailqueue');
}
/**
* @return int
* @throws \Exception
*/
public function send() : int
{
$failedRecipients = [];
$transport = (new Swift_SmtpTransport())
->setHost($this->_transportConfig['host'])
->setPort($this->_transportConfig['port'])
->setEncryption($this->_encryption)
->setTimeout($this->_transportConfig['timeout'])
->setLocalDomain($this->_client)
->setUsername($this->_transportConfig['username'])
->setPassword($this->_transportConfig['password']);
$mailer = new Swift_Mailer($transport);
$sent = $mailer->send($this->_message, $failedRecipients);
return $sent;
}
}
|
gharchive/issue
| 2021-01-13T06:39:24 |
2025-04-01T06:40:32.255884
|
{
"authors": [
"fabpot",
"jderusse",
"markusramsak"
],
"repo": "swiftmailer/swiftmailer",
"url": "https://github.com/swiftmailer/swiftmailer/issues/1321",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1727559068
|
hope to support lyrics
My music folders look like this:
/music/oldsong/song1-singer.mp3
/music/lightsong/songb-singer.mp3
/lyrics/oldsong/song1-singer.lrc
/lyrics/lightsong/songb-singer.lrc
I hope to be able to display the lyrics on the web interface
Hello @chainofhonor
The development of this project is currently on hold (see the pinned issue). However, lyrics support is a planned feature.
Hello @chainofhonor
Lyrics support has been added in the v1.4.0 release.
Go To Release Page
I would really appreciate it if support for lyrics embedded in the music files could be provided.
@tokisak1kurum1
That does not work? I thought we had that. Maybe it's broken or something. I'll investigate that.
|
gharchive/issue
| 2023-05-26T12:44:33 |
2025-04-01T06:40:32.276875
|
{
"authors": [
"chainofhonor",
"cwilvx",
"mungai-njoroge",
"tokisak1kurum1"
],
"repo": "swing-opensource/swingmusic",
"url": "https://github.com/swing-opensource/swingmusic/issues/126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
98225005
|
add user option to list passed along to exec call
Parameter is "user" with the value equal to the in-container username.
Example:
docker_obj = Docker::Container.get("mycontainer")
cmd = { "/bin/sh", "ls", "/" }
opts = { :user => "nobody" }
docker_obj.exec(cmd,opts)
The Docker engine will use the default value for user if the parameter provided is an empty string, so this should be safe. Tested against Docker 1.7.1 (boot2docker).
@y3ddet this has been released in v1.22.2
|
gharchive/pull-request
| 2015-07-30T17:48:53 |
2025-04-01T06:40:32.279481
|
{
"authors": [
"tlunter",
"y3ddet"
],
"repo": "swipely/docker-api",
"url": "https://github.com/swipely/docker-api/pull/299",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2590239450
|
Initial version
Includes editor and runner
Todo:
[ ] Documentation
[ ] Checks
Should we have ./gradlew check run on Github Actions or another runner?
|
gharchive/pull-request
| 2024-10-16T00:24:26 |
2025-04-01T06:40:32.282222
|
{
"authors": [
"SgiobairOg",
"sabberworm"
],
"repo": "swisscom/JCR-Hopper",
"url": "https://github.com/swisscom/JCR-Hopper/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2256703291
|
Spinner never stops in overlay of breadcrumb-buttons
When clicking on a breadcrumb-button ("Helps", "Contact"), a spinner is shown, but it has no condition to automatically be removed.
@imagoiq please re-check
|
gharchive/issue
| 2024-04-22T14:33:36 |
2025-04-01T06:40:32.285147
|
{
"authors": [
"gfellerph",
"imagoiq"
],
"repo": "swisspost/design-system",
"url": "https://github.com/swisspost/design-system/issues/2990",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
89816101
|
Bring component integration test in line with ember-cli
This PR to ember-cli will likely be done soon and has a few differences from our implementation:
It generates a component integration test by default when generating a new component (and allows generating a unit test by specifying --unit)
It has a different argument (they use --test-type)
availableOptions: [
{
name: 'test-type',
type: ['integration', 'unit'],
default: 'integration',
aliases:[
{ 'i': 'integration'},
{ 'u': 'unit'},
{ 'integration': 'integration' },
{ 'unit': 'unit' }
]
}
]
Once the ember-cli PR is merged I propose we bring this project back in sync. I will gladly submit a PR for this change.
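If the interface shown above ships as-is, invoking the generator would presumably look like this (command and flag spellings are inferred from the snippet, not verified against a released ember-cli version):

```shell
# Integration test is the default
ember generate component-test my-component

# Explicitly request a unit test, or use the short alias
ember generate component-test my-component --test-type=unit
ember generate component-test my-component -u
```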
Thanks for keeping track and working on this @blimmer!
:+1:
@blimmer Thanks, and sorry for any crossed wires!
No problem. Happy to see this taking priority!
The dependency branch was merged this morning. This can be worked whenever - I might be able to get to it this week(end)
@blimmer - Thank you!
|
gharchive/issue
| 2015-06-20T20:45:25 |
2025-04-01T06:40:32.294680
|
{
"authors": [
"blimmer",
"rwjblue",
"trabus"
],
"repo": "switchfly/ember-cli-mocha",
"url": "https://github.com/switchfly/ember-cli-mocha/issues/58",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
274468253
|
Added "keep image ratio" to Image plugin
Hi,
I believe this PR would help make the Image Plugin more usable.
Added "Keep original image ratio" to the Widget. When checked, the height is automatically adjust, both in "percent" and "pixel" mode. The user should only modify the Width whilst the Height is updated accordingly.
When changing the mode from "percent" to "pixel" and vice versa, the image size is preserved.
To be sure that no resize is introduced in step 2, I had to change the width and height types from int/QSpinBox to double/QDoubleSpinBox, and percent values now have 1 decimal place.
Cheers
https://vimeo.com/243268373
Does anyone at SWRI ever say "thanks for contributing"?
Some of us do at least: https://github.com/swri-robotics/mapviz/pull/539#issuecomment-345371749 https://github.com/swri-robotics/mapviz/pull/525#issuecomment-342520534
But regardless, thanks for contributing, we appreciate it and are glad you find the tool useful.
|
gharchive/pull-request
| 2017-11-16T10:36:40 |
2025-04-01T06:40:32.327643
|
{
"authors": [
"facontidavide",
"malban"
],
"repo": "swri-robotics/mapviz",
"url": "https://github.com/swri-robotics/mapviz/pull/543",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
594864229
|
boost::thread sleep not working
I update my ros driver, after that this driver not publishing any ros topics!
Then, I figure out that while loop in Spin() function execute only first time.
So I change the code:
(+) ros::Rate loop_rate(1000);
while(gps_.IsConnected() && ros::ok()) {
(-) boost::this_thread::sleep(boost::posix_time::microseconds(1));
(+) loop_rate.sleep();
}
Then it finally publish the topics again!
Please fix this errors.
When you say it only executes the first time, what happens after that? Is something throwing an exception, or is it blocking somewhere? Has anybody else seen this happen?
It seems especially odd that using ros::Rate there would make a difference; one of the differences in using a ROS timer is that it follows ROS's clock when running in a simulation, and, if anything, I would expect that to cause problems since the timer will stop running if the simulation is paused. Normally that's not an issue for hardware drivers since you don't usually run hardware drivers in simulation, though.
In my case, after entering a command like this,
$ sudo apt-get update && sudo apt-get upgrade
I can see the same problem on all my laptops and my mini PC (NUC).
When I debugged the code, there was no exception, but the code blocked at the position below.
-> boost::this_thread::sleep(boost::posix_time::microseconds(1));
And I launched the driver in a real situation, not in simulation.
My computer environment is: Ubuntu 18.04 && ROS Melodic
We also had this problem. I made that change and it fixed it.
I also got the same problem.
My computer environment is: Ubuntu 18.04 && ROS Melodic
How strange, I had never seen this problem before, but I tested it on a different computer and now I see it there. Well, I merged in a change that should fix it everywhere. Thanks!
|
gharchive/issue
| 2020-04-06T07:18:23 |
2025-04-01T06:40:32.332903
|
{
"authors": [
"Kyungpyo-Kim",
"cbradyas",
"pjreed",
"zang09"
],
"repo": "swri-robotics/novatel_gps_driver",
"url": "https://github.com/swri-robotics/novatel_gps_driver/issues/84",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2083685196
|
feat: add option to enter directory on open command
This is another nice feature to have in my opinion, and it's something that almost any TUI file manager has.
That's intentional. See https://yazi-rs.github.io/docs/faq#why-cant-open-and-enter-be-a-single-command
You can use this Smart enter tip to impl it.
That's intentional. See https://yazi-rs.github.io/docs/faq#why-cant-open-and-enter-be-a-single-command
You can use this Smart enter tip to impl it.
Oh, I see, I did not know that. But why can't we just add this simple flag to achieve it?
I want to keep it concise - one command should only do one thing.
Opening and entering are not essentially the same, and if we add it, we must also consider whether "entering should support opening" (many people want l to support opening, but l is by default bound to the enter command) or "opening should support entering". In the end, we might need to add them for both commands.
Well, I actually tried to implement enter --or-open before, but since enter is a member of Tab it was really impractical to access open, which is a Manager member, so I did it the other way around.
As for the l binding, we could add a note in enter's documentation to use open --or-enter-dir for that use case...
Anyways, I understand your point here. Feel free to close PR if you don't see any value in this addition.
Okay let me close it, sorry.
|
gharchive/pull-request
| 2024-01-16T11:16:10 |
2025-04-01T06:40:32.353170
|
{
"authors": [
"Akmadan23",
"sxyazi"
],
"repo": "sxyazi/yazi",
"url": "https://github.com/sxyazi/yazi/pull/523",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2622120878
|
Add WOL support
Add Wake-On-LAN support for VPin Studio hosts that go into sleep / stand by mode on a LAN.
Should work both from Windows and macOS Vpin Studio clients.
Should work for hosts configured in the connection launcher or with previously discovered clients via broadcast.
nice idea!
I'd love to see Wake-on-LAN functionality added to VPin Studio in a future update.
My cabinet is located in my garage, and it would be incredibly convenient to be able to wake it up remotely using VPin Studio itself. Currently, I have to rely on a separate program to do this, which adds an extra step to my setup.
Integrating Wake-on-LAN would streamline the process and make VPin Studio an even more comprehensive solution for managing my virtual pinball setup.
Thanks for considering this request!
@karlsnyder0 I close this issue as fixed.
Let's create new tickets if there are follow-ups.
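For reference, the Wake-on-LAN mechanism added here relies on the standard "magic packet": 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast (commonly to port 9). A minimal sketch of the idea in Python — an illustration of the protocol, not VPin Studio's actual implementation:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """6 bytes of 0xFF followed by the MAC repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet so the sleeping host's NIC can wake the machine."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))
```

Waking a host this way also requires WOL to be enabled in the target machine's BIOS/NIC settings, which is why it only works for explicitly configured or previously discovered hosts.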
|
gharchive/issue
| 2024-10-29T19:16:37 |
2025-04-01T06:40:32.358919
|
{
"authors": [
"Kongedam",
"Ltek",
"karlsnyder0",
"syd711"
],
"repo": "syd711/vpin-studio",
"url": "https://github.com/syd711/vpin-studio/issues/574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1908829990
|
🛑 The Great Gardens is down
In 59a0d6a, The Great Gardens (https://thegreatgardens.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: The Great Gardens is back up in 9cf8772 after 20 minutes.
|
gharchive/issue
| 2023-09-22T12:37:21 |
2025-04-01T06:40:32.420463
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/12846",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1920326323
|
🛑 Superior Surface is down
In c692561, Superior Surface (https://superior-surface.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Superior Surface is back up in 7ef5fe4 after 20 minutes.
|
gharchive/issue
| 2023-09-30T16:40:04 |
2025-04-01T06:40:32.422894
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/15687",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1921182950
|
🛑 AZ Brick Frames is down
In df1095c, AZ Brick Frames (https://azbrickframes.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AZ Brick Frames is back up in 91644bf after 31 minutes.
|
gharchive/issue
| 2023-10-02T03:24:14 |
2025-04-01T06:40:32.425290
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/16225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1922082474
|
🛑 Absolute Landscape Solutions is down
In a8b4506, Absolute Landscape Solutions (https://absolutelandscapesolutions.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Absolute Landscape Solutions is back up in 21d45f6 after 52 minutes.
|
gharchive/issue
| 2023-10-02T14:53:04 |
2025-04-01T06:40:32.427951
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/16433",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924202404
|
🛑 AZ Brick Frames is down
In 308f802, AZ Brick Frames (https://azbrickframes.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AZ Brick Frames is back up in a72fa9c after 1 hour, 3 minutes.
|
gharchive/issue
| 2023-10-03T13:51:43 |
2025-04-01T06:40:32.430292
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/16802",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924837396
|
🛑 Hard Stack Pro is down
In 28f81e1, Hard Stack Pro (https://hardstackpro.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Hard Stack Pro is back up in a4cb7c7 after 29 minutes.
|
gharchive/issue
| 2023-10-03T19:50:47 |
2025-04-01T06:40:32.432663
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/16898",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1932071941
|
🛑 Pest Scout Pros is down
In 5b1847e, Pest Scout Pros (https://pestscoutpros.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Pest Scout Pros is back up in e1c53d2 after 58 minutes.
|
gharchive/issue
| 2023-10-08T21:22:12 |
2025-04-01T06:40:32.435088
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/19179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1932777119
|
🛑 Absolute Landscape Solutions is down
In e25c5b8, Absolute Landscape Solutions (https://absolutelandscapesolutions.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Absolute Landscape Solutions is back up in e107a08 after 47 minutes.
|
gharchive/issue
| 2023-10-09T10:39:11 |
2025-04-01T06:40:32.437463
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/19764",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1933545547
|
🛑 The Great Gardens is down
In 4bc63f3, The Great Gardens (https://thegreatgardens.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: The Great Gardens is back up in 00fed07 after 25 minutes.
|
gharchive/issue
| 2023-10-09T17:59:49 |
2025-04-01T06:40:32.440005
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/20153",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1940959921
|
🛑 Pest Scout Pros is down
In f10b0fd, Pest Scout Pros (https://pestscoutpros.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Pest Scout Pros is back up in d251cc7 after 10 minutes.
|
gharchive/issue
| 2023-10-12T23:35:18 |
2025-04-01T06:40:32.442414
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/23599",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1955934263
|
🛑 Arborcare Ohio is down
In eabc7c5, Arborcare Ohio (https://arborcareohio.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Arborcare Ohio is back up in cba6d57 after 38 minutes.
|
gharchive/issue
| 2023-10-22T14:59:54 |
2025-04-01T06:40:32.444749
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/31129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1958871694
|
🛑 Garden Gate Fence is down
In 957e796, Garden Gate Fence (https://gardengate-fence.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Garden Gate Fence is back up in 39ea6b5 after 10 minutes.
|
gharchive/issue
| 2023-10-24T09:35:14 |
2025-04-01T06:40:32.447159
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/32390",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1959150587
|
🛑 Pest Scout Pros is down
In d603a5b, Pest Scout Pros (https://pestscoutpros.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Pest Scout Pros is back up in 2a49da5 after 42 minutes.
|
gharchive/issue
| 2023-10-24T12:29:53 |
2025-04-01T06:40:32.449684
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/32507",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966396843
|
🛑 Tidy Cleaners is down
In 940c4fe, Tidy Cleaners (https://tidycleanerstx.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tidy Cleaners is back up in e0abedf after 1 hour, 44 minutes.
|
gharchive/issue
| 2023-10-28T03:40:28 |
2025-04-01T06:40:32.452207
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/35256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966444685
|
🛑 Well Repairman is down
In 02921b5, Well Repairman (https://wellrepairman.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Well Repairman is back up in 97f4ff9 after 1 hour, 5 minutes.
|
gharchive/issue
| 2023-10-28T05:35:11 |
2025-04-01T06:40:32.454592
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/35315",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1967625020
|
🛑 Cleanwagon Ohio is down
In 5c12497, Cleanwagon Ohio (https://cleanwagonohio.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cleanwagon Ohio is back up in 407af42 after 10 minutes.
|
gharchive/issue
| 2023-10-30T06:46:42 |
2025-04-01T06:40:32.456907
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/36932",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1105774307
|
🛑 Precise Word AZ is down
In ac8ceec, Precise Word AZ (https://precisewordaz.com/) was down:
HTTP code: 500
Response time: 399 ms
Resolved: Precise Word AZ is back up in 4c16df7.
|
gharchive/issue
| 2022-01-17T12:07:36 |
2025-04-01T06:40:32.459223
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/413",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1879241444
|
🛑 Alpha Landscaping is down
In c45a5ed, Alpha Landscaping (https://alpha-landscaping.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Alpha Landscaping is back up in 9dd25b2 after 20 minutes.
|
gharchive/issue
| 2023-09-03T21:33:29 |
2025-04-01T06:40:32.461599
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/7083",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1889991828
|
🛑 AZ Brick Frames is down
In c3dabe5, AZ Brick Frames (https://azbrickframes.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AZ Brick Frames is back up in 2f915c1 after 1 hour, 10 minutes.
|
gharchive/issue
| 2023-09-11T08:44:45 |
2025-04-01T06:40:32.464193
|
{
"authors": [
"symapex"
],
"repo": "symapex/upsite",
"url": "https://github.com/symapex/upsite/issues/9198",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
55005506
|
Finish missing translations
Files remaining for translation process:
[x] sl_SI/Mage_Oauth.csv
[x] sl_SI/Mage_Eav.csv
[x] sl_SI/Mage_Install.csv
[x] sl_SI/Mage_GoogleCheckout.csv
[x] sl_SI/Mage_SalesRule.csv
[x] sl_SI/Mage_Newsletter.csv
[x] sl_SI/Mage_Payment.csv
[x] sl_SI/Mage_Tax.csv
[x] sl_SI/Mage_Api2.csv
[x] sl_SI/Mage_Checkout.csv
[x] sl_SI/Mage_Usa.csv
[x] sl_SI/Mage_Core.csv
[x] sl_SI/Mage_Customer.csv
[x] sl_SI/Mage_Paypal.csv
[x] sl_SI/Mage_Catalog.csv
[x] sl_SI/Mage_Sales.csv
[x] sl_SI/Mage_XmlConnect.csv
[x] sl_SI/Mage_Adminhtml.csv
Closing. Added via f1fe29d13764da1508565b503aa067c73e7a237b
|
gharchive/issue
| 2015-01-21T11:10:40 |
2025-04-01T06:40:32.471236
|
{
"authors": [
"peterkokot"
],
"repo": "symfony-si/magento-sl_SI",
"url": "https://github.com/symfony-si/magento-sl_SI/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
944839634
|
Missing dependencies [easy-coding-standard]
Hi,
I'm having issues using version 9.4. I get that it was changed to a prefixed version; here's the conflict:
And this is what I get running the command:
PHP Fatal error: Uncaught ECSPrefix20210712\Symfony\Component\OptionsResolver\Exception\UndefinedOptionsException: The option "ensure_fully_multiline" does not exist. Defined options are: "after_heredoc", "keep_multiple_spaces_after_comma", "on_multiline". in /path/vendor/symplify/easy-coding-standard/vendor/symfony/options-resolver/OptionsResolver.php:799
Stack trace:
#0 /path/vendor/symplify/easy-coding-standard/vendor/friendsofphp/php-cs-fixer/src/FixerConfiguration/FixerConfigurationResolver.php(92): ECSPrefix20210712\Symfony\Component\OptionsResolver\OptionsResolver->resolve()
#1 /path/vendor/symplify/easy-coding-standard/vendor/friendsofphp/php-cs-fixer/src/AbstractFixer.php(120): PhpCsFixer\FixerConfiguration\FixerConfigurationResolver->resolve()
#2 /path/vendor/symplify/easy-coding-standard/vendor/friendsofphp/php-cs-fixer/src/Fixer/FunctionNotation/MethodArgumentSpaceFixer.php(69): PhpCsFixer\AbstractFixer->configure()
#3 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(1239): PhpCsFixer\Fixer\FunctionNotation\MethodArgumentSpaceFixer->configure()
#4 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(963): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getFixerFileProcessorService()
#5 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(973): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getFileProcessorCollectorService()
#6 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(953): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getSingleFileProcessorService()
#7 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(1055): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getEasyCodingStandardApplicationService()
#8 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(1111): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getCheckCommandService()
#9 /path/vendor/symplify/easy-coding-standard/vendor/symfony/dependency-injection/Container.php(215): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getEasyCodingStandardConsoleApplicationService()
#10 /path/vendor/symplify/easy-coding-standard/vendor/symfony/dependency-injection/Container.php(198): ECSPrefix20210712\Symfony\Component\DependencyInjection\Container->make()
#11 /path/vendor/symplify/easy-coding-standard/bin/ecs.php(36): ECSPrefix20210712\Symfony\Component\DependencyInjection\Container->get()
#12 /path/vendor/symplify/easy-coding-standard/bin/ecs(5): require('...')
#13 {main}
Next PhpCsFixer\ConfigurationException\InvalidFixerConfigurationException: [method_argument_space] Invalid configuration: The option "ensure_fully_multiline" does not exist. Defined options are: "after_heredoc", "keep_multiple_spaces_after_comma", "on_multiline". in /path/vendor/symplify/easy-coding-standard/vendor/friendsofphp/php-cs-fixer/src/AbstractFixer.php:126
Stack trace:
#0 /path/vendor/symplify/easy-coding-standard/vendor/friendsofphp/php-cs-fixer/src/Fixer/FunctionNotation/MethodArgumentSpaceFixer.php(69): PhpCsFixer\AbstractFixer->configure()
#1 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(1239): PhpCsFixer\Fixer\FunctionNotation\MethodArgumentSpaceFixer->configure()
#2 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(963): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getFixerFileProcessorService()
#3 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(973): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getFileProcessorCollectorService()
#4 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(953): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getSingleFileProcessorService()
#5 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(1055): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getEasyCodingStandardApplicationService()
#6 /tmp/ecs_rodolfo/ContainerM5s19N4/Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container.php(1111): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getCheckCommandService()
#7 /path/vendor/symplify/easy-coding-standard/vendor/symfony/dependency-injection/Container.php(215): ContainerM5s19N4\Symplify_EasyCodingStandard_HttpKernel_EasyCodingStandardKernelProd_1d4f78d199b6cf5816074fafdff73e26ceba3020Container->getEasyCodingStandardConsoleApplicationService()
#8 /path/vendor/symplify/easy-coding-standard/vendor/symfony/dependency-injection/Container.php(198): ECSPrefix20210712\Symfony\Component\DependencyInjection\Container->make()
#9 /path/vendor/symplify/easy-coding-standard/bin/ecs.php(36): ECSPrefix20210712\Symfony\Component\DependencyInjection\Container->get()
#10 /path/vendor/symplify/easy-coding-standard/bin/ecs(5): require('...')
#11 {main}
thrown in /path/vendor/symplify/easy-coding-standard/vendor/friendsofphp/php-cs-fixer/src/AbstractFixer.php on line 126
Hi,
thanks for reporting. The first file is internal; it's correct, as these classes exist in ECS's own scoped and downgraded vendor. I wrote about it here to explain it better.
The 2nd and 3rd are related to an option name in php-cs-fixer. I would look for the string ensure_fully_multiline in /vendor/easy-coding-standard/config.
Will changing the value in the config to on_multiline help?
$services->set(MethodArgumentSpaceFixer::class)
->call('configure', [[
- 'on_multiline' => 'ensure_fully_multiline',
+ 'on_multiline' => 'on_multiline',
]]);
Thank you, I noticed the issue was that some definitions had not been updated.
|
gharchive/issue
| 2021-07-14T22:20:44 |
2025-04-01T06:40:32.700121
|
{
"authors": [
"TomasVotruba",
"rodber"
],
"repo": "symplify/symplify",
"url": "https://github.com/symplify/symplify/issues/3416",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
724214768
|
Add --daemon flag / pid file
Would be nice if udp-proxy-2020 supported running as a daemon and writing a pid file to /var/run/udp-proxy-2020.pid or another user-defined path.
I successfully build your site-to-site version for the Unifiy UDM and it works very well 👍 . A daemonized version as you suggested would be a really nice feature
FWIW, I have no idea how the UDM starts services on boot, but pretty much everything nowadays (upstart, systemd) doesn't require a "daemon" mode. If you do figure it out, I'd be curious to hear how you did it so I can document it for others.
Quick note: Apparently daemonizing go apps is non-traditional because of how go works. But this library seems the most popular way of doing so: https://github.com/sevlyar/go-daemon
As far as I know there's no easy way to run a persistent daemon on the UDM/UDMP besides a more or less hacky way abusing the way package installations are handled on these machines. But for my purpose it would be sufficient enough to start the daemon by hand every time the UDM/UDMP reboots.
So my idea: ssh into the UDM/UDMP and start your tool as a daemon :)
Unfortunately I have no experience in programming with go
Sadly the UDM series is really limited right now. That said, it is just linux and so you should be able to write a startup script to do it for you.
Yeah, got it running. Thanks again for the great tool :)
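To illustrate the point above that systemd makes a daemon mode unnecessary: on a regular Linux box, a unit file along these lines is enough. The binary path and flags here are assumptions for illustration, not the project's documented invocation:

```ini
[Unit]
Description=udp-proxy-2020
After=network-online.target
Wants=network-online.target

[Service]
# systemd supervises the foreground process, so no fork/PID-file logic is needed.
ExecStart=/usr/local/bin/udp-proxy-2020 --port 54915 --interface eth0 --interface br0
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabled with "systemctl enable --now udp-proxy-2020", systemd starts the proxy at boot and restarts it on failure.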
|
gharchive/issue
| 2020-10-19T02:42:49 |
2025-04-01T06:40:32.795851
|
{
"authors": [
"synfinatic",
"tiehfood"
],
"repo": "synfinatic/udp-proxy-2020",
"url": "https://github.com/synfinatic/udp-proxy-2020/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1055502834
|
"Local" / "Remote" is switched from the perspective of the buyer.
Hey there, just bought a channel on testnet using this API.
It seems like local_balance and remote_balance should be switched: right now they are from the perspective of the server, and it makes more sense to have them from the POV of the client IMO.
Hey, yeah this makes sense. We'll probably update this server side and post back here once done. Thanks!
Did this change get implemented yet?
Seems like local_balance refers to Synonym's node. Is this expected to change, or should we assume it will stay as-is?
we will change it eventually, most likely we will use spending and receiving as the words
|
gharchive/issue
| 2021-11-16T22:44:06 |
2025-04-01T06:40:32.803693
|
{
"authors": [
"Jasonvdb",
"john-zaprite",
"kiwiidb",
"rbndg"
],
"repo": "synonymdev/blocktank-client",
"url": "https://github.com/synonymdev/blocktank-client/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1060689653
|
roundtripping content causes definition to disappear
Initial checklist
[X] I read the support docs
[X] I read the contributing guide
[X] I agree to follow the code of conduct
[X] I searched issues and couldn’t find anything (or linked relevant results below)
Affected packages and versions
1.2.4
Link to runnable example
https://stackblitz.com/edit/node-st1b3g?file=index.js
Steps to reproduce
Run
import { remark as remarFactory } from 'remark';
import { visit } from 'unist-util-visit';
const remark = remarFactory();
// clean extra attributes that make it hard to see issue
function scrubber(tree) {
visit(tree, function (node) {
node.value = undefined;
node.position = undefined;
node.spread = undefined;
node.lang = undefined;
node.identifier = undefined;
node.label = undefined;
node.title = undefined;
node.url = undefined;
});
}
const content = `[a]: `;
const originalAst = remark.parse(content);
const newContent = remark.stringify(originalAst);
const newAst = remark.parse(newContent);
scrubber(originalAst);
scrubber(newAst);
console.log(JSON.stringify(originalAst, null, 4));
console.log(JSON.stringify(newAst, null, 4));
result is:
{
"type": "root",
"children": [
{
"type": "definition"
}
]
}
then
{
"type": "root",
"children": [
{
"type": "paragraph",
"children": [
{
"type": "text"
}
]
}
]
}
and the output text is
[a]:
Expected behavior
definition content is preserved
Actual behavior
definition content disappears
Runtime
Node v16
Package manager
npm v7
OS
Linux
Build and bundle tools
No response
also happens with
[a]
[a]: 
https://spec.commonmark.org/dingus/?text=[a]
[a]%3A %26%23xc%3B
released!
|
gharchive/issue
| 2021-11-22T23:29:44 |
2025-04-01T06:40:32.813530
|
{
"authors": [
"ChristianMurphy",
"wooorm"
],
"repo": "syntax-tree/mdast-util-to-markdown",
"url": "https://github.com/syntax-tree/mdast-util-to-markdown/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1582152321
|
Create 'create-plugin-template' command
Closes #166
We should create a github template which includes the ci scripts.
The create-template command then is used to create a specific plugin template.
Superseded by #245
|
gharchive/pull-request
| 2023-02-13T11:13:37 |
2025-04-01T06:40:32.817099
|
{
"authors": [
"dstallenberg"
],
"repo": "syntest-framework/syntest-core",
"url": "https://github.com/syntest-framework/syntest-core/pull/210",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1183036046
|
Displaying inactive users
Hi!
When an employee leaves, I can set them to "inactive". But then they no longer show up in any overview; I have now found them via their ID.
Am I missing something? Or would a "list of inactive users" be a useful addition?
Many thanks,
Merlin
Hi @2steuer,
you can display the inactive users in the user overview.
see
|
gharchive/issue
| 2022-03-28T07:51:01 |
2025-04-01T06:40:32.826408
|
{
"authors": [
"2steuer",
"derTobsch"
],
"repo": "synyx/urlaubsverwaltung",
"url": "https://github.com/synyx/urlaubsverwaltung/issues/2786",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2644790625
|
HetznerCluster does not react on changes in relevant secrets
/kind bug
What steps did you take and what happened:
The "HetznerCluster" object should listen to the changes in the secret that stores the Hetzner credentials. For example, because they have to be synced in the updated form to the workload clusters. This doesn't happen right now because the controller doesn't react on the relevant events.
In https://github.com/syself/cluster-api-provider-hetzner/blob/1904cabf66bdede229603836230edb7a5e29424b/controllers/hetznercluster_controller.go#L400 we "acquire" the secret but set "controlledByOwner" to false. This means that the hetznercluster-controller doesn't actually own the secret, so that the events are not shown, even though we set the event listener here: https://github.com/syself/cluster-api-provider-hetzner/blob/1904cabf66bdede229603836230edb7a5e29424b/controllers/hetznercluster_controller.go#L749
What did you expect to happen:
We should react to the events on the Hetzner secret, either by setting the hetznercluster controller as "controller" of the secret, or by changing the way we listen to events.
It would obviously be easier to set the hetznercluster as controller of the secret; that's just one value. I'm not aware of any drawbacks compared to the current state.
@janiskemper
if a controller owns a secret and you delete the controller, Kubernetes garbage collection (GC) will typically remove the secret.
Somehow I think an ownerRef is not a good fit here. The user is responsible for that secret.
We can use an option for .Own():
From the Owns docstring:
// The default behavior reconciles only the first controller-type OwnerReference of the given type.
// Use Owns(object, builder.MatchEveryOwner) to reconcile all owners.
I suggest to use MatchEveryOwner
|
gharchive/issue
| 2024-11-08T18:11:20 |
2025-04-01T06:40:32.850373
|
{
"authors": [
"guettli",
"janiskemper"
],
"repo": "syself/cluster-api-provider-hetzner",
"url": "https://github.com/syself/cluster-api-provider-hetzner/issues/1508",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1044941728
|
HRDP and Reverse Proxy
HRDP and Reverse Proxy are not found in the form
The original author did not set up a proxy...
I'm working on adding them myself. He was rude in his answers last time, so I'd better continue his work and develop it myself. I already added the reverse proxy and I am working on HRDP :)
OK
|
gharchive/issue
| 2021-11-04T16:16:34 |
2025-04-01T06:40:32.894232
|
{
"authors": [
"DarkneSsDz",
"sysrom"
],
"repo": "sysrom/DcRatCHS",
"url": "https://github.com/sysrom/DcRatCHS/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
104560732
|
Support SystemJS builder dependency extraction without execution
We currently execute transpiled ES6 in the target environment to extract the dependencies, but we don't use the execute function at all, so this is unnecessary work.
In addition, not execution would allow transpiling to ES6 itself say just using the modules transformer when transpiling for browser environments that support more than Node does (previously discussed in https://github.com/jspm/jspm-cli/issues/884).
This can be done by having a truncated instantiate for the builder, and just applying a regex in the tracer to extract the System.register dependencies since we can rely on it being the first line of code after comments and meta strings.
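A rough illustration of that regex-based extraction (shown in Python purely for brevity; the actual builder is JavaScript, and this simplified pattern assumes System.register appears as described, after any comments and meta strings):

```python
import re

# Matches the dependency array of the first System.register(...) call.
SYSTEM_REGISTER_DEPS = re.compile(r"System\.register\(\s*\[([^\]]*)\]")

def extract_deps(source: str) -> list:
    """Pull the dependency names out of a System.register module without executing it."""
    match = SYSTEM_REGISTER_DEPS.search(source)
    if not match:
        return []
    return re.findall(r"['\"]([^'\"]+)['\"]", match.group(1))

src = 'System.register(["./a.js", "lodash"], function (exports) { /* ... */ });'
assert extract_deps(src) == ["./a.js", "lodash"]
```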
This would be pretty awesome. Any updates? What parts of the code should potential contributors be looking at to help out?
This isn't a priority currently, and may be somewhat tricky for contributions. Such an intercept would be at https://github.com/systemjs/builder/blob/master/lib/trace.js#L284 skipping instantiate. It would be important to ensure any useful functions of instantiate are replicated in this case (such as concatting the dependencies with load.metadata.deps).
Released in 0.15.0.
|
gharchive/issue
| 2015-09-02T20:03:58 |
2025-04-01T06:40:33.246251
|
{
"authors": [
"awalGarg",
"guybedford"
],
"repo": "systemjs/builder",
"url": "https://github.com/systemjs/builder/issues/299",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
324615411
|
Werewolf game (人狼ゲーム)
https://ja.wikipedia.org/wiki/汝は人狼なりや%3F
https://jinrou.uhyohyo.net/manual/about
triage: Closing this. Please re-open if we ever decide to implement it 🙏
|
gharchive/issue
| 2018-05-19T09:15:32 |
2025-04-01T06:40:33.262818
|
{
"authors": [
"Sayamame-beans",
"shironoir",
"syuilo"
],
"repo": "syuilo/misskey",
"url": "https://github.com/syuilo/misskey/issues/1606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
597960097
|
Integrate Lighthouse analysis
Lighthouse should be integrated into CI.
First, let's save the initial analysis report, then create tickets for optimization tasks.
Ideas:
Lazy loading
Lazy loading images
RAIL
etc.
Lighthouse is integrated
|
gharchive/issue
| 2020-04-10T15:54:27 |
2025-04-01T06:40:33.271810
|
{
"authors": [
"szgabsz91"
],
"repo": "szgabsz91/iit-szg",
"url": "https://github.com/szgabsz91/iit-szg/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2724788330
|
[Bug]: Climate unnecessarily converting fahrenheit
Version
3.0.0-alpha.51
Matter Controller
Apple Home
Steps to reproduce
I have an Ecobee connected to HomeAssistant via the HomeKitBridge connection, and I am trying to export it via the Matter Hub add-on. When it shows up in the Home App, all the temperatures are around 162 degrees. My system currently uses Fahrenheit for its units. It looks like the add-on is converting the Fahrenheit units into Celsius.
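For illustration, a minimal sketch of the suspected double conversion (hypothetical; this is not the add-on's actual code path):

```python
def c_to_f(celsius):
    """Standard Celsius -> Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

# The thermostat reports 74 (already Fahrenheit). If some layer mistakes
# that value for Celsius and converts again, it lands in the same range
# as the ~162 degrees seen in the Home App:
print(c_to_f(74))  # prints roughly 165.2
```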
State and attributes
hvac_modes: off, heat, cool, heat_cool
min_temp: 45
max_temp: 92
fan_modes: on, auto
friendly_name: Bedrooms
supported_features: 395
current_temperature: 74
temperature: 74
target_temp_high: null
target_temp_low: null
current_humidity: 38
fan_mode: auto
hvac_action: idle
Relevant log output
No response
Documentation & Issues
[X] I have reviewed the documentation and the linked troubleshooting guide.
[X] I have searched the issues for a similar problem.
https://github.com/t0bst4r/home-assistant-matter-hub/discussions/261
This will be handled as part of a larger refactoring of climates (see #261)
|
gharchive/issue
| 2024-12-07T19:46:12 |
2025-04-01T06:40:33.311893
|
{
"authors": [
"DMedina559",
"bassrock",
"t0bst4r"
],
"repo": "t0bst4r/home-assistant-matter-hub",
"url": "https://github.com/t0bst4r/home-assistant-matter-hub/issues/271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
855805164
|
[PCB] parse kicad_pcb file, and combine with the exported gerber files, to generate stack-up information.
When we use F, In1, In2, B as the default PCB layer names, the EMS can easily recognize the stack order from the generated Gerber files. Such as:
-rw-r--r-- 1 yagamy staff 604348 Apr 12 12:29 tsp4-io-B_Cu.gbr
-rw-r--r-- 1 yagamy staff 7960 Apr 12 12:29 tsp4-io-B_Mask.gbr
-rw-r--r-- 1 yagamy staff 490 Apr 12 12:29 tsp4-io-B_Paste.gbr
-rw-r--r-- 1 yagamy staff 491 Apr 12 12:29 tsp4-io-B_Silkscreen.gbr
-rw-r--r-- 1 yagamy staff 720 Apr 12 12:29 tsp4-io-Edge_Cuts.gbr
-rw-r--r-- 1 yagamy staff 957406 Apr 12 12:29 tsp4-io-F_Cu.gbr
-rw-r--r-- 1 yagamy staff 25180 Apr 12 12:29 tsp4-io-F_Mask.gbr
-rw-r--r-- 1 yagamy staff 18117 Apr 12 12:29 tsp4-io-F_Paste.gbr
-rw-r--r-- 1 yagamy staff 224215 Apr 12 12:29 tsp4-io-F_Silkscreen.gbr
-rw-r--r-- 1 yagamy staff 581116 Apr 12 12:29 tsp4-io-In1_Cu.gbr
-rw-r--r-- 1 yagamy staff 597941 Apr 12 12:29 tsp4-io-In2_Cu.gbr
But when the default signal name of a layer is changed, such as In1 changed to PWR, the output Gerber files use those updated signal names as part of the filename. Then the EMS cannot easily know the stack order and needs extra effort to double-check with the PCB designer:
-rw-r--r-- 1 yagamy staff 992712 Apr 12 12:30 tsp4-mcu-B_Cu.gbr
-rw-r--r-- 1 yagamy staff 8068 Apr 12 12:30 tsp4-mcu-B_Mask.gbr
-rw-r--r-- 1 yagamy staff 491 Apr 12 12:30 tsp4-mcu-B_Paste.gbr
-rw-r--r-- 1 yagamy staff 492 Apr 12 12:30 tsp4-mcu-B_Silkscreen.gbr
-rw-r--r-- 1 yagamy staff 725 Apr 12 12:30 tsp4-mcu-Edge_Cuts.gbr
-rw-r--r-- 1 yagamy staff 222193 Apr 12 12:30 tsp4-mcu-F_Cu.gbr
-rw-r--r-- 1 yagamy staff 39987 Apr 12 12:30 tsp4-mcu-F_Mask.gbr
-rw-r--r-- 1 yagamy staff 32683 Apr 12 12:30 tsp4-mcu-F_Paste.gbr
-rw-r--r-- 1 yagamy staff 375510 Apr 12 12:30 tsp4-mcu-F_Silkscreen.gbr
-rw-r--r-- 1 yagamy staff 1002549 Apr 12 12:30 tsp4-mcu-GND_Cu.gbr
-rw-r--r-- 1 yagamy staff 1177205 Apr 12 12:30 tsp4-mcu-PWR_Cu.gbr
In fact, kicad_pcb source file already contains such stacked-order information:
(layers
(0 "F.Cu" signal)
(1 "In1.Cu" signal "GND.Cu")
(2 "In2.Cu" signal "PWR.Cu")
(31 "B.Cu" signal)
(32 "B.Adhes" user "B.Adhesive")
(33 "F.Adhes" user "F.Adhesive")
(34 "B.Paste" user)
(35 "F.Paste" user)
(36 "B.SilkS" user "B.Silkscreen")
(37 "F.SilkS" user "F.Silkscreen")
(38 "B.Mask" user)
(39 "F.Mask" user)
(40 "Dwgs.User" user "User.Drawings")
(41 "Cmts.User" user "User.Comments")
(42 "Eco1.User" user "User.Eco1")
(43 "Eco2.User" user "User.Eco2")
(44 "Edge.Cuts" user)
(45 "Margin" user)
(46 "B.CrtYd" user "B.Courtyard")
(47 "F.CrtYd" user "F.Courtyard")
(48 "B.Fab" user)
(49 "F.Fab" user)
)
(setup
(stackup
(layer "F.SilkS" (type "Top Silk Screen"))
(layer "F.Paste" (type "Top Solder Paste"))
(layer "F.Mask" (type "Top Solder Mask") (color "Green") (thickness 0.01))
(layer "F.Cu" (type "copper") (thickness 0.035))
(layer "dielectric 1" (type "core") (thickness 1.44) (material "FR4") (epsilon_r 4.5) (loss_tangent 0.02))
(layer "In1.Cu" (type "copper") (thickness 0.035))
(layer "dielectric 2" (type "prepreg") (thickness 1.44) (material "FR4") (epsilon_r 4.5) (loss_tangent 0.02))
(layer "In2.Cu" (type "copper") (thickness 0.035))
(layer "dielectric 3" (type "core") (thickness 1.44) (material "FR4") (epsilon_r 4.5) (loss_tangent 0.02))
(layer "B.Cu" (type "copper") (thickness 0.035))
(layer "B.Mask" (type "Bottom Solder Mask") (color "Green") (thickness 0.01))
(layer "B.Paste" (type "Bottom Solder Paste"))
(layer "B.SilkS" (type "Bottom Silk Screen"))
(copper_finish "None")
(dielectric_constraints no)
)
Then, combined with the generated Gerber files, it would be easy to generate a stack table (in order) covering all layer files.
The generated stack table could be HTML or another format with easily clickable links.
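For illustration, a small Python sketch (the regex and output format are my own, not an existing tool) that pulls the copper stack order out of the (layers ...) block shown above:

```python
import re

LAYERS_BLOCK = '''
(0 "F.Cu" signal)
(1 "In1.Cu" signal "GND.Cu")
(2 "In2.Cu" signal "PWR.Cu")
(31 "B.Cu" signal)
'''

def copper_stack(src):
    """Return copper layer entries from a kicad_pcb (layers ...) block in
    stack order, as (index, canonical name, user-facing name)."""
    pat = re.compile(r'\((\d+)\s+"([^"]+)"\s+signal(?:\s+"([^"]+)")?\)')
    return sorted((int(i), name, user or name)
                  for i, name, user in pat.findall(src))

for idx, canonical, user in copper_stack(LAYERS_BLOCK):
    print(f'{idx:>2}  {canonical:<8} {user}')
```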
Comments from Lydia:
Actually, KiCad also has a feature called "add stackup table" that can later be placed in the PCB file, but that feature is not yet complete.
When I use other PCB tools I usually add this kind of table. Some board houses do seem more at ease when they see it, but experienced board houses decide the stack-up at import time and generally understand it anyway.
Actually, when exporting Gerbers you can choose the Protel (Altium) format. Many board houses are used to its file extensions, and the stack order can be read from the extensions.
yagamy:
Like this?
-rw-r--r-- 1 yagamy wheel 992712 Apr 12 18:02 /tmp/tsp4-mcu-B_Cu.gbl
-rw-r--r-- 1 yagamy wheel 8068 Apr 12 18:02 /tmp/tsp4-mcu-B_Mask.gbs
-rw-r--r-- 1 yagamy wheel 491 Apr 12 18:02 /tmp/tsp4-mcu-B_Paste.gbp
-rw-r--r-- 1 yagamy wheel 492 Apr 12 18:02 /tmp/tsp4-mcu-B_Silkscreen.gbo
-rw-r--r-- 1 yagamy wheel 725 Apr 12 18:02 /tmp/tsp4-mcu-Edge_Cuts.gm1
-rw-r--r-- 1 yagamy wheel 222193 Apr 12 18:02 /tmp/tsp4-mcu-F_Cu.gtl
-rw-r--r-- 1 yagamy wheel 39987 Apr 12 18:02 /tmp/tsp4-mcu-F_Mask.gts
-rw-r--r-- 1 yagamy wheel 32683 Apr 12 18:02 /tmp/tsp4-mcu-F_Paste.gtp
-rw-r--r-- 1 yagamy wheel 375510 Apr 12 18:02 /tmp/tsp4-mcu-F_Silkscreen.gto
-rw-r--r-- 1 yagamy wheel 1002549 Apr 12 18:02 /tmp/tsp4-mcu-GND_Cu.g2
-rw-r--r-- 1 yagamy wheel 1177205 Apr 12 18:02 /tmp/tsp4-mcu-PWR_Cu.g3
-rw-r--r-- 1 yagamy wheel 3742 Apr 12 18:02 /tmp/tsp4-mcu-job.gbrjob
Lydia: Right. Extensions with "t" are top, with "b" are bottom; g2 is the second inner layer, g3 is inner layer 3.
|
gharchive/issue
| 2021-04-12T10:08:55 |
2025-04-01T06:40:33.465972
|
{
"authors": [
"yagamy4680"
],
"repo": "t2t-io/kicad-support-tools",
"url": "https://github.com/t2t-io/kicad-support-tools/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1462461589
|
Spine terms added but then mapped terms disappear
After adding spine terms and then saving, returning to the mapping shows the spine terms saved but the mapping is gone.
...this prevents clicking of "Done Alignment".
@jbaird123 @excelsior Jim is mapping CEDS.
@jgoodell2 - I'm having trouble duplicating this problem. Can you provide exact instructions including the file you're uploading to duplicate the issue? A screencast would be helpful if you can do it.
@excelsior - The issue here is that the elements in the screenshot above were added as synthetic elements and automatically mapped as "Identical". After saving, the mapping was lost, but the spine term remains. We went back in and tried to manually map the elements, and they do not get saved.
Resolved. Please reopen if the issue persists.
Yes. I confirm that this issue is resolved.
|
gharchive/issue
| 2022-11-23T22:01:37 |
2025-04-01T06:40:33.468992
|
{
"authors": [
"jbaird123",
"jeannekitchens",
"jgoodell2"
],
"repo": "t3-innovation-network/desm",
"url": "https://github.com/t3-innovation-network/desm/issues/352",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1548439975
|
bug: Starting with create-t3-app and trying app leads to 404
Provide environment information
N/A
Describe the bug
Hey, just a heads up – there is currently an open bug in Next.js for folks trying out the app directory (beta) as follows: if you are using i18n configuration inside of next.config.js, and the app directory, you will see a 404 when routing to your index route.
This is a bug. The i18n routing configuration inside next.config.js is not being ported to app. You can view the documentation for i18n routing here. To make sure you can incrementally adopt from pages -> app, or to have them coexist for awhile, we will be fixing this bug.
But at least for now, I just wanted to make y'all aware.
Reproduction repo
N/A
To reproduce
Stated above.
Additional information
No response
Related issue: https://github.com/vercel/next.js/issues/41980
To make sure you can incrementally adopt from pages -> app, or to have them coexist for awhile, we will be fixing this bug.
Thanks for the heads-up @leerob. Do you know what would be the behaviour after the fix?
Would i18n have no effect on the routes inside the app directory? or
Would locale be mapped to the app/[locale] folder? or
Something entirely different?
This may help us who are facing this issue proceed development with correct assumptions while we wait for the fix.
i18n will exist, but only affect routes inside of the pages directory 👍
|
gharchive/issue
| 2023-01-19T02:56:51 |
2025-04-01T06:40:33.475098
|
{
"authors": [
"leerob",
"nayaabkhan"
],
"repo": "t3-oss/create-t3-app",
"url": "https://github.com/t3-oss/create-t3-app/issues/1096",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
202001608
|
Show rows at gutter instead of as-editor's normal-text.
Pros
Easier to correlate narrowEditor's text with the underlying editor's text.
When introducing the inline-edit feature (updating the bound editor's text by editing the narrowEditor).
Cons
When copying, rows are no longer copied (possibly useful at times to share row info with another person).
Changed my mind.
|
gharchive/issue
| 2017-01-19T23:03:50 |
2025-04-01T06:40:33.481014
|
{
"authors": [
"t9md"
],
"repo": "t9md/atom-narrow",
"url": "https://github.com/t9md/atom-narrow/issues/36",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
639412327
|
Handle restoring last closed tab
When a floating tab is closed, and the user presses Ctrl+Shift+T (Ctrl+Shift+N on Firefox) to restore the last closed tab, the floating tab reappears (as a popup), but it's not always on top and TabFloater doesn't know it is restored.
Catch the tab restore event (if possible), and either:
Enable the floating state
or convert the floating tab straight into a normal tab
For this, we'd need to somehow identify if the tab that's being restored is the same floating tab that was closed last time. This is not straightforward, as the tab IDs are different. I suppose it would be possible to compare window types and URLs, but it quickly becomes too complex. It also needs a lot of extra events, adding performance overhead for every tab open operation.
Closing as won't fix.
|
gharchive/issue
| 2020-06-16T06:41:50 |
2025-04-01T06:40:33.494471
|
{
"authors": [
"ba32107"
],
"repo": "tabfloater/tabfloater",
"url": "https://github.com/tabfloater/tabfloater/issues/102",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
536652721
|
Create basic options page
The user should be able to set:
Hotkeys
Default position of the floating tab
Tab size?
Positioning
Fixed position
Top left
Top right...
Smart positioning
Follow when scrolling
Follow when switching tabs
Hotkeys
Enable debugging
|
gharchive/issue
| 2019-12-11T22:36:39 |
2025-04-01T06:40:33.497209
|
{
"authors": [
"ba32107"
],
"repo": "tabfloater/tabfloater",
"url": "https://github.com/tabfloater/tabfloater/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1631909806
|
datasource item nullable project
Describe the bug
I am unable to list all of the datasources on the server.
This looks like the same issue as https://github.com/tableau/server-client-python/pull/1028 except with DatasourceItem instead of WorkbookItem
Versions
Details of your environment, including:
Tableau Server version (or note if using Tableau Online) - Tableau Online
Python version: 3.9.12
TSC library version: 0.24
To Reproduce
import tableauserverclient as TSC
# log in to tableau server
for datasource in TSC.Pager(server.datasources):
print(datasource)
The error can be fixed by removing one line from datasource_item.py:
class DatasourceItem(object):
...
@project_id.setter
#@property_not_nullable # TODO: remove this line
def project_id(self, value: str):
self._project_id = value
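A minimal sketch (simplified from the TSC source; the decorator body is reconstructed from the traceback below) of why the setter raised, and why removing the decorator makes project_id nullable:

```python
from functools import wraps

def property_not_nullable(func):
    """Simplified stand-in for TSC's decorator: reject None in a setter."""
    @wraps(func)
    def wrapper(self, value):
        if value is None:
            raise ValueError(f"{func.__name__} must be defined.")
        return func(self, value)
    return wrapper

class DatasourceItem:
    def __init__(self, project_id=None):
        self._project_id = None
        self.project_id = project_id  # setter runs here

    @property
    def project_id(self):
        return self._project_id

    @project_id.setter
    def project_id(self, value):  # decorator removed -> None is accepted
        self._project_id = value

item = DatasourceItem(None)  # no longer raises ValueError
print(item.project_id)       # None
```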
Results
What are the results or error messages received?
src/tableau_bot/refresh.py:142: in tableau_refresh
for datasource in Pager(server.datasources):
../../Library/Caches/pypoetry/virtualenvs/tableau-bot-sFRF_OxE-py3.9/lib/python3.9/site-packages/tableauserverclient/server/pager.py:40: in __iter__
current_item_list, last_pagination_item = self._endpoint(self._options)
../../Library/Caches/pypoetry/virtualenvs/tableau-bot-sFRF_OxE-py3.9/lib/python3.9/site-packages/tableauserverclient/server/endpoint/endpoint.py:205: in wrapper
return func(self, *args, **kwargs)
../../Library/Caches/pypoetry/virtualenvs/tableau-bot-sFRF_OxE-py3.9/lib/python3.9/site-packages/tableauserverclient/server/endpoint/datasources_endpoint.py:77: in get
all_datasource_items = DatasourceItem.from_response(server_response.content, self.parent_srv.namespace)
../../Library/Caches/pypoetry/virtualenvs/tableau-bot-sFRF_OxE-py3.9/lib/python3.9/site-packages/tableauserverclient/models/datasource_item.py:331: in from_response
datasource_item = cls(project_id)
../../Library/Caches/pypoetry/virtualenvs/tableau-bot-sFRF_OxE-py3.9/lib/python3.9/site-packages/tableauserverclient/models/datasource_item.py:58: in __init__
self.project_id = project_id
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tableauserverclient.models.datasource_item.DatasourceItem object at 0x10995cdf0>, value = None
@wraps(func)
def wrapper(self, value):
if value is None:
error = "{0} must be defined.".format(func.__name__)
> raise ValueError(error)
E ValueError: project_id must be defined.
yep - this should be simple enough to get in a new release very soon.
|
gharchive/issue
| 2023-03-20T11:23:08 |
2025-04-01T06:40:33.512995
|
{
"authors": [
"jacalata",
"peter-malcolm-bw"
],
"repo": "tableau/server-client-python",
"url": "https://github.com/tableau/server-client-python/issues/1210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
210933378
|
Update Contributing Guide
To include steps on how to branch off of development -- it'll avoid the merge conflicts from master we're seeing in PRs.
(Note: I'm not a git wizard, but my sequence of steps works every time :) )
@grbritz addressed this.
|
gharchive/issue
| 2017-02-28T23:23:05 |
2025-04-01T06:40:33.514379
|
{
"authors": [
"RussTheAerialist",
"t8y8"
],
"repo": "tableau/server-client-python",
"url": "https://github.com/tableau/server-client-python/issues/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2398370876
|
Localization
It would be helpful if we could either manually localize the text ourselves or if you offered localization.
This is now available with version 1.0.12!
The documentation can be found here: https://github.com/tableflowhq/csv-import?tab=readme-ov-file#internationalization
|
gharchive/issue
| 2024-07-09T14:24:22 |
2025-04-01T06:40:33.518096
|
{
"authors": [
"JariKonstantin",
"ciminelli"
],
"repo": "tableflowhq/csv-import",
"url": "https://github.com/tableflowhq/csv-import/issues/233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
196253365
|
Recent Components page, RSS feed
Hi folks.
This PR adds a new "Recent Components" page, and "Recent Components" RSS feed.
A couple more small build scripts were added -- check build.js
co(function* generator() {
yield componentsBuildList(options); // <- builds temporary components list (JSON)
yield componentsBuildIndex(options); // <- builds index pages (by category & most recent)
yield componentsBuildRSS(options); // <- builds RSS feed
yield componentsBuildPages(options); // <- comment to skip building pages
yield componentsBuildScreenshots(options); // <- comment to skip building screenshots
}).then(() => {
The temporary file that lists all components is now built by componentsBuildList,
The index pages are built componentsBuildIndex,
The RSS feed is built by componentsBuildRSS,
The rest is unchanged.
Note that componentsBuildList now uses the Git api (via nodegit) to find when a component was first committed to the repo. Took me some gitter/slack back and forth with the nodegit people, but I think this is robust to renaming and moving the component around. This introduces a new creationTime key in the component list (the temporary JSON file), which is leveraged later on. I also store a new author key, which is used in the RSS feed. Note that all this querying of the repo slows down this step, but it shouldn't be too bad (about 5 seconds on my laptop for all of the components).
The index pages still use the same template file, but new options allow for more flexibility. The list of index pages to build is in options.components.index:
components: {
index: { // list index pages to build
byCategory: { // components by category
title: 'Components',
path: 'components/index.html', // target location of index by category
sortAllBy: [['src'], ['asc']], // sort by file location will do
limitAll: false, // use all components
createSectionsBy: 'category', // create a section for each category
showSectionsTOC: true, // show Table of Contents (e.g. categories)
},
The byCategory key above describes the previously existing index page, the one listing all components by category. These options can be passed to a small function in components-build-sections that returns a new list of components organized by sections. In the above example,
the list of components is first sorted by the src attribute (the path on disk),
all of the components are taken into account (limitAll is false),
sections are created by looking at the category key on each component,
there is a Table of Contents.
Now here is another index, and that's all it takes to describe the new "Recent Components" page:
mostRecent: { // most recent components
title: 'Recent Components',
path: 'components/recent.html', // target location of recent index
sortAllBy: [['creationTime'], ['desc']], // sort by most recent component first
limitAll: 50, // use the 50 most recent ones
createSectionsBy: creationTimeToYMD, // group by day
prettifySection: v => moment(v).format('LL'), // display as day
showSectionsTOC: false, // no need for Table of Contents
},
the list of components is first sorted by the creationTime attribute, most recent first,
only 50 components are on that page -- set to false if you prefer to list all of them,
sections could have been created using the creationTime key, but I thought it looked better when grouping all components created the same day together. The creationTimeToYMD callback takes a component and returns whatever value you want to group components by (e.g. the section) -- in this case it converts the component creation time (milliseconds) to a YYYY-MM-DD format. Change it back to 'creationTime' if you'd prefer a section for each component.
prettifySection is a callback that will format the section for display on the page. Here, it converts the YYYY-MM-DD value to something more human-readable. Remove that property altogether if you end up switching createSectionsBy back to 'creationTime', since formatting the creationTime property is already handled by the options.components.prettify.creationTime callback.
I'm hiding the Table of Contents, which would only be a list of dates, didn't find it very useful.
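In case it helps, the day-grouping step could be sketched like this (Python for illustration; the real callback is JavaScript, and the component data here is invented):

```python
from datetime import datetime, timezone

def creation_time_to_ymd(ms):
    """Mirror of the creationTimeToYMD idea: milliseconds since the
    epoch -> 'YYYY-MM-DD' (UTC), used as a section key."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d")

components = [
    {"title": "links/pill", "creationTime": 1482019200000},
    {"title": "forms/sign-in", "creationTime": 1482019260000},
]
# Group components created on the same day into one section:
sections = {}
for c in components:
    sections.setdefault(creation_time_to_ymd(c["creationTime"]), []).append(c["title"])
print(sections)  # {'2016-12-18': ['links/pill', 'forms/sign-in']}
```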
The options.components.rss object should be self-explanatory:
rss: { // RSS feed
title: 'Tachyons Recent Components',
categories: ['CSS', 'Functional CSS'], // Categories this feed belongs to
ttl: 60, // Number of mins feed can be cached
path: 'components/rss.xml', // target location of feed (sync head.html)
count: 20, // how many in feed
},
I decided to only include 20 of the most recent components in that feed, that's a common value for feeds; feel free to customize.
Building an RSS feed that properly features the component screenshot was... tricky. I tried RSS enclosures, RSS custom elements -- no dice. I ended up looking at how http://unsplash.com does it. It seems to work fine. For RSS auto-discovery I added this to the templates/head.html:
<link rel="alternate" type="application/rss+xml" title="RSS Feed for Tachyons Recent Components" href="/components/rss.xml" />
This should hopefully let you put any pages from http://tachyons.io inside a RSS feed reader, and get the RSS feed directly.
What I leave up to you:
the new "Recent Components" page (aka components/recent.html) is not referenced from any other page right now. Feel free to add a link to it from the header? Making it look good is not my area of expertise :)
same for http://tachyons.io/components/rss.xml -- RSS auto-discovery should work, but feel free to explicitly mention the feed on the home page.
once this is up online, let's test the feed together -- I'll subscribe to it and hopefully you can add a component later on. If/when I see it in my reader I'll let you know that it all worked nicely.
Let me know if you have any questions, feedback, etc.
This is really amazing work.
Thanks guys. I'll keep an eye open, and once the RSS feed is up I'll let you know if it works as expected.
|
gharchive/pull-request
| 2016-12-18T01:57:04 |
2025-04-01T06:40:33.583053
|
{
"authors": [
"mrmrs",
"sebastienbarre"
],
"repo": "tachyons-css/tachyons-css.github.io",
"url": "https://github.com/tachyons-css/tachyons-css.github.io/pull/131",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1282395552
|
Set up Dependabot for signal-cli-rest-api
Dependabot currently doesn't support Docker Compose. A possible workaround is to create a Dockerfile that contains only the following line
FROM user/repo:version
… and reference that Dockerfile in the compose config.
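A sketch of that workaround (the service and directory names here are made up), with the single-line Dockerfile placed in its own directory next to the compose file:

```yaml
# docker-compose.yml (sketch)
services:
  signal-cli-rest-api:
    build: ./signal-cli-rest-api   # directory whose Dockerfile holds the FROM line
```

Dependabot can then watch the FROM line in that Dockerfile, while the compose service builds from it.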
@roschaefer do you think you might get to this this quarter or should we move it to a future one?
|
gharchive/issue
| 2022-06-23T13:18:29 |
2025-04-01T06:40:33.586363
|
{
"authors": [
"mattwr18",
"tillprochaska"
],
"repo": "tactilenews/100eyes",
"url": "https://github.com/tactilenews/100eyes/issues/1388",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1595760285
|
extension for stable-diffusion-webui?
Hello,
Not sure how this github works... sorry about that in advance!
I am just a user ...
Very popular is using https://github.com/AUTOMATIC1111/stable-diffusion-webui
I was just wondering if you could turn body generation from a single portrait into an extension for stable-diffusion-webui?
Thank you in advance!
Hi, sorry for the late reply.
I'm not the original author of this work. I was just trying to test out how this model works.
This model uses two StyleGANs to generate separate images of a face and a body. These generated images then go through an optimization process to make them look more natural, which means it refines the finer details. So this model doesn't generate the whole body in a one-shot fashion.
The interface of stable-diffusion-webui is basically the gradio library, which is not that hard to learn. But I don't have much time to turn this model into a webui extension.
Thank you so much for the information!
|
gharchive/issue
| 2023-02-22T20:05:16 |
2025-04-01T06:40:33.594104
|
{
"authors": [
"HyperUpscale",
"tae-yeop"
],
"repo": "tae-yeop/insetgan",
"url": "https://github.com/tae-yeop/insetgan/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2306301094
|
SGX prove fails with: Can not serialize input for SGX io error
Describe the bug
BACKGROUND:
Node successfully generated and submitted proof for block 179947
But when it tried to prove follow-up blocks, e.g. 180,466, it got this error:
ERROR[05-20|14:49:17.718] Failed to request proof height=180,466 error="Can not serialize input for SGX io error: Broken pipe (os error 32), output is Ok(Output { status: ExitStatus(unix_wait_status(256)), stdout: \"Starting one shot mode\\nGlobal options: GlobalOpts { secrets_dir: \\\"/root/.config/raiko/secrets\\\", config_dir: \\\"/root/.config/raiko/config\\\" }, OneShot options: OneShotArgs { sgx_instance_id: 5466 }\\nmemory allocation of 184 bytes failed\\n\", stderr: \"Gramine is starting. Parsing TOML manifest file, this may take some time...\\n-----------------------------------------------------------------------------------------------------------------------\\nGramine detected the following insecure configurations:\\n\\n - loader.insecure__use_cmdline_argv = true (forwarding command-line args from untrusted host to the app)\\n - sys.insecure__allow_eventfd = true (host-based eventfd is enabled)\\n - sgx.allowed_files = [ ... ] (some files are passed through from untrusted host without verification)\\n\\nGramine will continue application execution, but this configuration must not be used in production!\\n-----------------------------------------------------------------------------------------------------------------------\\n\\n[P1:T4:sgx-guest] error: Out-of-memory in library OS\\n\" })" endpoint=http://51.77.21.107:8080
INFO [05-20|14:49:55.410] Proof generated height=180,466 time=1m4.972853124s producer=SGXProofProducer
The SGX server is configured with 512 MB of processor reserved memory:
Steps to reproduce
Steps to reproduce here.
Spam policy
[X] I verify that this issue is NOT SPAM and understand SPAM issues will be closed and reported to GitHub, resulting in ACCOUNT TERMINATION.
UPDATE:
The SGX prover keeps throwing error messages, but at the same time keeps generating proofs:
INFO [05-20|22:54:26.674] Request proof from raiko-host service blockID=182,180 coinbase=0x44c7dB3ac68d92398f88Cb2BD98C118925080e11 height=182,180 hash=4d1881..b6ea37
ERROR[05-20|22:54:28.793] Failed to request proof height=182,180 error="Can not serialize input for SGX io error: Broken pipe (os error 32), output is Ok(Output { status: ExitStatus(unix_wait_status(256)), stdout: \"Starting one shot mode\\nGlobal options: GlobalOpts { secrets_dir: \\\"/root/.config/raiko/secrets\\\", config_dir: \\\"/root/.config/raiko/config\\\" }, OneShot options: OneShotArgs { sgx_instance_id: 5466 }\\nmemory allocation of 184 bytes failed\\n\", stderr: \"Gramine is starting. Parsing TOML manifest file, this may take some time...\\n-----------------------------------------------------------------------------------------------------------------------\\nGramine detected the following insecure configurations:\\n\\n - loader.insecure__use_cmdline_argv = true (forwarding command-line args from untrusted host to the app)\\n - sys.insecure__allow_eventfd = true (host-based eventfd is enabled)\\n - sgx.allowed_files = [ ... ] (some files are passed through from untrusted host without verification)\\n\\nGramine will continue application execution, but this configuration must not be used in production!\\n-----------------------------------------------------------------------------------------------------------------------\\n\\n[P1:T15:sgx-guest] error: Out-of-memory in library OS\\n\" })" endpoint=http://51.77.21.107:8080
INFO [05-20|22:54:38.324] Check synced L1 snippet from anchor blockID=182,180 l1Height=1,581,891
ERROR[05-20|22:54:40.855] Failed to request proof height=182,180 error="Can not serialize input for SGX io error: Broken pipe (os error 32), output is Ok(Output { status: ExitStatus(unix_wait_status(256)), stdout: \"Starting one shot mode\\nGlobal options: GlobalOpts { secrets_dir: \\\"/root/.config/raiko/secrets\\\", config_dir: \\\"/root/.config/raiko/config\\\" }, OneShot options: OneShotArgs { sgx_instance_id: 5466 }\\nmemory allocation of 184 bytes failed\\n\", stderr: \"Gramine is starting. Parsing TOML manifest file, this may take some time...\\n-----------------------------------------------------------------------------------------------------------------------\\nGramine detected the following insecure configurations:\\n\\n - loader.insecure__use_cmdline_argv = true (forwarding command-line args from untrusted host to the app)\\n - sys.insecure__allow_eventfd = true (host-based eventfd is enabled)\\n - sgx.allowed_files = [ ... ] (some files are passed through from untrusted host without verification)\\n\\nGramine will continue application execution, but this configuration must not be used in production!\\n-----------------------------------------------------------------------------------------------------------------------\\n\\n[P1:T13:sgx-guest] error: Out-of-memory in library OS\\n\" })" endpoint=http://51.77.21.107:8080
INFO [05-20|22:54:41.456] Check synced L1 snippet from anchor blockID=182,180 l1Height=1,581,891
INFO [05-20|22:54:53.298] Proof generated height=182,180 time=26.623564692s producer=SGXProofProducer
INFO [05-20|22:54:53.298] NewProofSubmitter block proof blockID=182,180 coinbase=0x44c7dB3ac68d92398f88Cb2BD98C118925080e11 parentHash=3ce44b..de1151 hash=4d1881..b6ea37 stateRoot=77c37b..e9d758 proof=0000155a717639a029c7e5db6eddd6561b101269a4a17ecabcd937853b3a5be3e8c0e0357f2f450fdf26da8fbf2505131cefea562f6e72f868081312ece07165295e92d4194f76b353e02c50f54c17e49db04877963ab4df1c tier=200
INFO [05-20|22:54:53.308] Build proof submission transaction blockID=182,180 gasLimit=0 guardian=false
INFO [05-20|22:54:53.328] Publishing transaction service=prover tx=a7edc7..fde19a nonce=72 gasTipCap=1,000,000,000 gasFeeCap=12,598,439,738 gasLimit=298,584
INFO [05-20|22:54:53.331] Transaction successfully published service=prover tx=a7edc7..fde19a nonce=72 gasTipCap=1,000,000,000 gasFeeCap=12,598,439,738 gasLimit=298,584
INFO [05-20|22:54:56.456] Check synced L1 snippet from anchor blockID=182,180 l1Height=1,581,891
INFO [05-20|22:55:04.988] Proof assignment request body feeToken=0x0000000000000000000000000000000000000000 expiry=1,716,247,504 tierFees="[{Tier:100 Fee:+1000000000} {Tier:200 Fee:+1000000000} {Tier:1000 Fee:+0}]" blobHash=012680..ba5bfc currentUsedCapacity=0
INFO [05-20|22:55:04.988] Prover's ETH balance balance=36.56251266 address=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC
INFO [05-20|22:55:04.989] Prover's Taiko token balance balance=0 address=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC
WARN [05-20|22:55:04.989] Prover does not have required on-chain Taiko token balance providedProver=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC taikoTokenBalance=0 minTaikoTokenBalance=0
INFO [05-20|22:55:04.990] Proof assignment request body feeToken=0x0000000000000000000000000000000000000000 expiry=1,716,247,504 tierFees="[{Tier:100 Fee:+1100000000} {Tier:200 Fee:+1100000000} {Tier:1000 Fee:+0}]" blobHash=012680..ba5bfc currentUsedCapacity=0
INFO [05-20|22:55:04.991] Prover's ETH balance balance=36.56251266 address=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC
INFO [05-20|22:55:04.992] Prover's Taiko token balance balance=0 address=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC
WARN [05-20|22:55:04.992] Prover does not have required on-chain Taiko token balance providedProver=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC taikoTokenBalance=0 minTaikoTokenBalance=0
INFO [05-20|22:55:04.993] Proof assignment request body feeToken=0x0000000000000000000000000000000000000000 expiry=1,716,247,504 tierFees="[{Tier:100 Fee:+1320000000} {Tier:200 Fee:+1320000000} {Tier:1000 Fee:+0}]" blobHash=012680..ba5bfc currentUsedCapacity=0
INFO [05-20|22:55:04.993] Prover's ETH balance balance=36.56251266 address=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC
INFO [05-20|22:55:04.994] Prover's Taiko token balance balance=0 address=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC
WARN [05-20|22:55:04.994] Prover does not have required on-chain Taiko token balance providedProver=0xfbfd4F6993BC0D3481B9bf61AD0892f817a2e7aC taikoTokenBalance=0 minTaikoTokenBalance=0
INFO [05-20|22:55:05.340] Transaction confirmed service=prover tx=a7edc7..fde19a block=07b0e5..730636:1581901 effectiveGasPrice=7,418,930,979
INFO [05-20|22:55:05.341] "💰 Your block proof was accepted" blockID=182,180 parentHash=3ce44b..de1151 hash=4d1881..b6ea37 stateRoot=77c37b..e9d758 txHash=a7edc7..fde19a tier=200 isContest=false
It's OOM; trying to increase Gramine's memory-related settings may be helpful.
Reference: https://gramine.readthedocs.io/en/latest/manifest-syntax.html#enclave-size
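For example, a sketch of the relevant line in the Gramine manifest template (the size below is only an illustrative guess — pick a value large enough for the prover; Gramine requires the enclave size to be a power of two):

```toml
# Gramine manifest template -- enlarge the enclave to avoid library-OS OOM
sgx.enclave_size = "4G"
```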
With mainnet and SGX running there for a whole month, I believe this issue is outdated, so closing it for now. Feel free to comment otherwise and we'll reopen for investigation.
|
gharchive/issue
| 2024-05-20T16:02:34 |
2025-04-01T06:40:33.640327
|
{
"authors": [
"davaymne",
"mratsim",
"smtmfft"
],
"repo": "taikoxyz/raiko",
"url": "https://github.com/taikoxyz/raiko/issues/227",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1650314259
|
vscode hanging with some vue files
What version of prettier-plugin-tailwindcss are you using?
0.2.5
What version of Tailwind CSS are you using?
3.2.7
What version of Node.js are you using?
18.12.1
What package manager are you using?
yarn
What operating system are you using?
macOS
Describe your issue
We added prettier-plugin-tailwindcss a couple of days ago. It worked initially, but I noticed that vscode+prettier was hanging trying to save certain .vue files. The hangs were obvious because files would fail to save and vscode would complain that prettier was still running. This occurred even after restarting vscode, etc. I removed the package and vscode was able to save those files again.
Sorry about the vague bug report. We are not using any other prettier plugins. We love the idea and really want it for our project! Thanks for all your hard work.
Does prettier hang when you run npx prettier -w . in your project?
I cannot get it to happen with prettier on the CLI. It's intermittent in vscode, but definitely present and unrecoverable. Here's the shortest file I could make that triggers it occasionally:
Something.vue:
<template>
<div v-if="error" class="bg-error text-xs mt-1 bg-error-content py-1 px-2 inline-block rounded">
{{ error }}
</div>
</template>
<script setup lang="ts">
import { ref } from "vue";
const error = ref<string>();
</script>
$ code --list-extensions | rg "vue|prett"
esbenp.prettier-vscode
Vue.volar
Vue.vscode-typescript-vue-plugin
settings.json:
"editor.formatOnSave": true,
"[vue]": {
"editor.codeActionsOnSave": { "source.organizeImports": true },
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
.prettierrc:
{
"printWidth": 100
}
Are you by chance using the Tailwind CSS intellisense extension as well?
I am not using that. Anything else I should grep for? Thanks!!
Nope, I don't think so. I'll see what I can figure out.
@gurgeous So far I can't get the hang to happen. Any chance you could provide a project that you're able to reproduce it on? Also the full list of install extensions would be super helpful.
I can't share the project unfortunately. Maybe I can come up with a better repro. I'm also happy to pop open dev tools or mess around with vscode if you have any pointers. I've written some extensions too...
Here's the full list of installed extensions, apologies:
aki77.haml-lint alexcvzz.vscode-sqlite andersliu.insert-line-number astro-build.astro-vscode bierner.markdown-preview-github-styles BriteSnow.vscode-toggle-quotes bungcip.better-toml coolbear.systemd-unit-file cpylua.language-postcss csstools.postcss dbaeumer.vscode-eslint donjayamanne.githistory doublefint.pgsql earshinov.permute-lines EditorConfig.EditorConfig enkia.tokyo-night esbenp.prettier-vscode flesler.url-encode formulahendry.code-runner GitHub.copilot GitHub.vscode-pull-request-github golang.go Grumpydev.pico8vscodeeditor gurgeous.ruby-open-gem hogashi.crontab-syntax-highlight idleberg.applescript JuanBlanco.solidity kamikillerto.vscode-colorize karunamurti.haml Koihik.vscode-lua-format LoranKloeze.ruby-rubocop-revived mechatroner.rainbow-csv misogi.ruby-rubocop mrmlnc.vscode-duplicate ms-azuretools.vscode-docker ms-python.isort ms-python.python ms-python.vscode-pylance ms-vscode-remote.remote-ssh ms-vscode-remote.remote-ssh-edit ms-vscode.remote-explorer oderwat.indent-rainbow Orta.vscode-jest Prisma.prisma rebornix.ruby sgryjp.vscode-stable-sort Shopify.ruby-lsp skellock.just sldobri.gruvbox-5-stars sryze.uridecode stkb.rewrap stylelint.vscode-stylelint svelte.svelte-vscode syler.sass-indented tiehuis.zig Vue.volar Vue.vscode-typescript-vue-plugin william-voyek.vscode-nginx wingrunr21.vscode-ruby zkirkland.vscode-firstupper
Me too, having the same issue in vscode. Had to remove this plugin to make saving vue files work again.
We've adopted a workaround for anyone interested:
Set pluginSearchDirs: false in .prettierrc. This fixes vscode.
Run prettier --plugin node_modules/prettier-plugin-tailwindcss from the CLI when you want to sort things.
We use just as our command runner and added a just sort to run the above. You could also use a pre-commit hook, etc.
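Combined with the printWidth config shown earlier in the thread, the .prettierrc for this workaround would look something like:

```json
{
  "printWidth": 100,
  "pluginSearchDirs": false
}
```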
Again, thanks for the great plugin! We love it (clearly) and I totally get that vscode/prettier are tough to debug. No worries and thanks for all your hard work.
For what it's worth, we're having the same issue with Next.js + TypeScript + TailwindCSS files. After spending over 4 hours trying to isolate the root cause, it was this plugin. Removing it solved the issue!
After days on a similar issue where one of my CPU cores was stuck at 100% usage, I can confirm this comes from this plugin!
I'm using Nuxt 3.41 and I'm formatting Vue files.
On a 100-line Vue file:
Saving a file with Prettier only: 60ms and no CPU issue
Saving a file with Prettier + prettier-plugin-tailwind: 400ms, and minutes later, CPU usage raises, then my VS Code is completely stuck, and I have to reboot it.
Uninstalled :(
Too bad, it was useful to me.
In vue-cli based projects, downgrading Tailwind to 3.2.x resolves the issue.
Uninstall solve this :(
Any update on this? it seems pretty bad right now
You can run "Restart Extension Host" in VSCode to avoid rebooting the whole app and project — it seems to halt the runaway process and gets things working again. Sometimes I can just re-save the file afterwards without crash, but then a few more saves of the same file and it's crashing again.
Maybe that's a clue to what's happening with the plugin? I added pluginSearchDirs: false to prettier.config.js in my NextJS + Typescript project (as per @gurgeous' comment above) and I haven't had the issue so far. Though it's only been minutes, the one file that was crashing consistently seems not to be anymore ¯_(ツ)_/¯
👋 We're working towards a solution for this.
I've merged some (sometimes significant) performance and memory improvements in #153 that could potentially fix this problem. The fix is available to test via our insiders build: npm install prettier-plugin-tailwindcss@insiders
Could some of you give it a test and report back? It would be super helpful! You'll need to close/reopen VS Code after installing it so the prettier extension doesn't have the old version in memory.
@thecrypticace installing the plugin broke my Prettier, I have no formatting anymore when saving a file. Prettier logs when saving a file:
["INFO" - 3:35:55 PM] Formatting file:///workspaces/ppw/neuxt/pages/toolbox.vue
["INFO" - 3:35:55 PM] Using ignore file (if present) at /workspaces/ppw/.prettierignore
Notes:
I use Prettier for VS code extension
I have no .prettierignore file
I have no specific settings that could explain this.
I uninstalled the plugin and get a normal behaviour back, here are logs for a single save action:
["INFO" - 3:41:16 PM] Formatting file:///workspaces/ppw/neuxt/pages/toolbox.vue
["INFO" - 3:41:16 PM] Using ignore file (if present) at /workspaces/ppw/.prettierignore
["INFO" - 3:41:16 PM] File Info:
{
"ignored": false,
"inferredParser": "vue"
}
["INFO" - 3:41:16 PM] No local configuration (i.e. .prettierrc or .editorconfig) detected, falling back to VS Code configuration
["INFO" - 3:41:16 PM] Prettier Options:
{
"arrowParens": "always",
"bracketSpacing": true,
"endOfLine": "lf",
"htmlWhitespaceSensitivity": "css",
"insertPragma": false,
"singleAttributePerLine": true,
"bracketSameLine": false,
"jsxBracketSameLine": false,
"jsxSingleQuote": false,
"printWidth": 90,
"proseWrap": "preserve",
"quoteProps": "as-needed",
"requirePragma": false,
"semi": true,
"singleQuote": true,
"tabWidth": 2,
"trailingComma": "es5",
"useTabs": false,
"vueIndentScriptAndStyle": false,
"filepath": "/workspaces/ppw/neuxt/pages/toolbox.vue",
"parser": "vue"
}
["INFO" - 3:41:16 PM] Formatting completed in 69ms.
@ddahan There was a small hiccup that I just pushed a fix for a few minutes ago. Can you re-install the insiders build and give it one more test please?
The insiders version you'll need to test is 0.0.0-insiders.78bd35b (you can check your lock file to see if its the right one)
@thecrypticace it is definitely better than before!
Before, with the plugin activated: 400ms instead of 50-100ms to save a Vue file of 100 lines (and 100% CPU usage).
Now it seems to be almost the same.
However, it feels like if I wait for a few seconds to save the file, it takes longer (around 250ms). Then it's quick again. Is there anything that could explain this difference?
Yep so we removed object hashing to speed up config loading as for some config files it becomes very expensive but added an expiration timer so we can still detect changes to the config file (otherwise you'd have to reload VS Code every time the config changed). I'll see if there are other options for detecting config file changes to reduce the need for shorter expiration times.
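The trade-off described here can be sketched as a tiny TTL cache (illustrative only, not the plugin's actual code):

```javascript
// Cache a loaded config and transparently reload it once the entry expires.
function createExpiringCache(ttlMs, loader) {
  let value;
  let loadedAt = -Infinity;
  return function get(now = Date.now()) {
    if (now - loadedAt >= ttlMs) {
      value = loader(); // e.g. re-resolve the Tailwind config -- cheap if rare
      loadedAt = now;
    }
    return value;
  };
}
```

Within the TTL window a save hits the cached config (fast); the first save after expiry pays the full load cost again, which matches the "instantaneous, then laggy" behaviour reported above.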
Ok thanks! Not sure to understand deeply what's happening behind the scene, but the DX is a little weird because of that, because for the same file you have an almost instantaneous formatting, and then a laggy one. Anything that could improve this would be welcome imo. Thanks!
Still getting the same issue even with the insiders build 0.0.0-insiders.78bd35b. Disabling formatting on save with Prettier completely eliminates the issue, so it seems that the underlying issue is still there.
Not sure how to troubleshoot this further, but would be happy to provide any information that could be helpful here.
Still getting the same issue even with the insiders build 0.0.0-insiders.78bd35b. Disabling formatting on save with Prettier completely eliminates the problem, so it seems that the underlying cause is still there.
Not sure how to troubleshoot this further, but would be happy to provide any information that could be helpful here.
Same thing for me with 0.0.0-insiders.78bd35b.
@Doesntmeananything @valgeirb Could either one or both of you provide reproduction projects, details on VSCode plugins and VSCode version, and info on how you reproduce it? A video / screen recording might be helpful too so we can see what you're doing to repro it.
Additionally, specs on your computer could be useful (like CPU, memory, etc…)
I'm having the same issue and I'm rather curious what the common denominator is between people who have this.
I think this is related to a combination of installed NPM packages, because I've tried different VS Code versions, different Prettier extension versions, used different computers (one was a brand new Win11 installation), etc.
But there is just one project that is giving me issues while others work flawlessly.
Are any of you using DaisyUI for example?
Meanwhile I'm trying to create a minimal repro example.
@krisz094 Even if the repro isn't minimal it might still be useful as long as the issue is reproducible. Is that repo public by chance?
FWIW, I'm using Sveltekit 1.15.8 + Tailwindcss 3.2.4 + Typescript 5.0.4 + this plugin. The insiders version 0.0.0-insiders.d3f787d made a significant improvement. When VSCode upgraded to Typescript 5 last month, that's when we noticed the performance hit when autosaving. The workaround at the time was to downgrade VSCode to a patch before 1.77.
We are reproducing
https://github.com/verdie-g/crpg/tree/master/src/WebUI
can't we watch config file and update instead of expiring it after x seconds ?
@jd1378
can't we watch config file and update instead of expiring it after x seconds ?
We chose this route because, as far as I know, Prettier does not offer an API to do this. This means that it would need to be done via Node APIs — possibly using chokidar or some other file watcher. This also means that if we did that then we'd likely need to detect that we're running in VSCode because there's overhead to doing that and you wouldn't want to do it when running prettier from the command line.
that makes sense. but can't we detect and only do it when running in vscode ? so we don't get the overhead in cli, but get the benefits in vscode ? also it should be only one file I guess, so the overhead should be small. I mean if it means we don't reload the config for the rest of the life time of the extension, that would be a lot less work
I'm on a nuxt 3 project, it it not big yet, but still has ~ 50+ components already
here's without prettier-plugin-tailwindcss:
Rule | Time (ms) | Relative
:---------------------------------|----------:|--------:
prettier/prettier | 1735.189 | 50.0%
import/namespace | 1262.053 | 36.4%
@typescript-eslint/no-unused-vars | 70.073 | 2.0%
vue/attributes-order | 63.656 | 1.8%
n/no-deprecated-api | 25.548 | 0.7%
no-redeclare | 14.051 | 0.4%
import/order | 13.544 | 0.4%
unicorn/escape-case | 12.815 | 0.4%
vue/component-tags-order | 9.894 | 0.3%
spaced-comment | 9.673 | 0.3%
also as a thought: after squeezing out as much performance as we can, can't we then create some web workers and offload the processing to them so expressions are processed in parallel? Assuming each expression must be done on its own, wouldn't that help?
@jd1378 While it would be amazing if that were possible — processing things in parallel in web workers would unfortunately require prettier integration. JS/TS expressions in non-JS/JSX/TS/TSX files are usually handled as embedded documents which means ultimately Prettier itself is responsible for calling parse in the appropriate plugins.
@or2e Since you're using WSL2 — are your projects files under /mnt/{drive-letter-here} (e.g. /mnt/c)? WSL2 has serious performance issues accessing files shared with Windows via /mnt/* and any access of the filesystem slows things down significantly. I took your project, updated it to the latest version of the prettier plugin (v0.3.0) and ran the CLI against one of the files. It takes roughly 5.4s to process ./src/pages/clans/\[id\]/index.vue under WebUI. About 4.7s of which is everything required to load your local copy of Tailwind, load and compile the config, and set up everything for processing. And the majority of this time is Node processing any require() calls.
If I move the repo so it's under /home/wsl the time drops significantly to 600ms total — with about 270ms of that being all the Tailwind, config loading, and setup. require() itself is slowed down a ton.
This might be something you can look at if you're still having performance problems.
@krisz094 Any chance you're using WSL as well?
@thecrypticace
1/ The repository has been cloned into the distro (not mounted), /root/WORK/crpg
2/
v0.2.8 - ~1500ms
v0.3.0 - 50-200ms
ty!
Fantastic!
Hey all — since we've released v0.3.0, with the discovery regarding WSL and windows file system sharing, and that there's been little movement on this issue we're going to close it as the issues should be mostly solved.
If you're still experiencing problems and can provide a reproduction please feel free to open a new issue that we can take a look at.
Thanks everyone for their input! ✨
Hi there. I'm curious to get extensive feedback from people who had the issue and then tested newer versions (0.3.0+). Thanks!
@gurgeous @kotaksempit @tjkohli @richardtallent-erm @Mattinton @Doesntmeananything @valgeirb @krisz094 @jd1378
|
gharchive/issue
| 2023-04-01T05:55:32 |
2025-04-01T06:40:33.721769
|
{
"authors": [
"AjayDevInfy",
"Doesntmeananything",
"aaronmw",
"ddahan",
"gurgeous",
"jd1378",
"kmcgurty",
"kotaksempit",
"krisz094",
"mingtheanlay",
"or2e",
"thecrypticace",
"tjkohli",
"tkat",
"valgeirb"
],
"repo": "tailwindlabs/prettier-plugin-tailwindcss",
"url": "https://github.com/tailwindlabs/prettier-plugin-tailwindcss/issues/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
953860545
|
How to get the personal signature
I didn't see an API for getting the personal signature
Does Tencent (tx) have a related API for this? (whispering)
There seems to be an HTTP endpoint, but I can't find it. If anyone can find it, it could be added here:
https://github.com/takayama-lily/oicq/blob/master/web-api.md
|
gharchive/issue
| 2021-07-27T13:02:56 |
2025-04-01T06:40:33.748077
|
{
"authors": [
"Stapxs",
"takayama-lily"
],
"repo": "takayama-lily/node-onebot",
"url": "https://github.com/takayama-lily/node-onebot/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2682673704
|
Fixes a bug when llm chat is used with an Ollama model.
The error:
After the fix, this works.
Do you have an up-to-date version of ollama Python package installed?
Do you have an up-to-date version of ollama Python package installed?
Let me check.
No, this is an issue with the llm plugin.
With commit 678a50f11055e1ae8007134ef0c038df476f553e I get this error
Versions:
With the fix 0.8.0
Can you please check from your end.
This happens when we use
llm chat -m <some-ollama-model>
and do a multi-turn conversation.
Hm, so everything works fine with ollama==0.3.3. However, after I updated to the newest version [0.4.0](https://github.com/ollama/ollama-python/releases/tag/v0.4.0) that was released today, I started to get errors:
no "name" attribute in model objects
no "model_info" attribute in in show response objects
I've made corresponding changes to use new names of these renamed attributes and then I was able to chat with Ollama models. This is strange:
Why do I not get the "attachment" error?
Why don't you get same attribute errors as I do?
Note that response objects should always have "attachments" attribute: https://github.com/simonw/llm/blob/335b3e635aa1439edafb13b0c2a225ce5840cc98/llm/models.py#L214
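The renames could be handled in a version-agnostic way; a sketch (attribute names per the renames discussed here — not the plugin's actual code):

```python
def model_name(m):
    """Return a model's identifier across ollama-python versions.

    ollama >= 0.4.0 returns typed response objects whose field is ``model``;
    older versions returned plain dicts with a ``name`` key.
    """
    if isinstance(m, dict):
        return m.get("name") or m.get("model")
    return getattr(m, "model", None) or getattr(m, "name", None)
```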
Hm, so everything works fine with ollama==0.3.3. However, after I updated to the newest version [0.4.0](https://github.com/ollama/ollama-python/releases/tag/v0.4.0) that was released today, I started to get errors:
no "name" attribute in model objects
no "model_info" attribute in in show response objects
I've made corresponding changes to use new names of these renamed attributes and then I was able to chat with Ollama models. This is strange:
Why do I not get the "attachment" error?
Why don't you get same attribute errors as I do?
Strange.
I did get the attribute errors in the previous update, but then I started with a clean state in a new venv with Python 3.9.12. What version of Python are you testing this on?
And the Ollama version is 0.4.3, while the ollama Python package version is 0.4.0
I'm testing on Python v3.12.3. I've pushed a bunch of updates that fix everything for me locally. Would you mind pulling latest master and creating a new venv from scratch?
Yes, I will check in a bit. I think it will work. With an isolated env using uv I am getting the behaviour you are getting. This PR might not be needed. With ollama==0.3.3 and llm-ollama==0.7.0 it's working like yours.
Only ollama==0.4.0 fixes are needed
I can confirm it's working.
versions
Thanks for confirming; I've tagged a new release: https://github.com/taketwo/llm-ollama/releases/tag/0.7.1.
|
gharchive/pull-request
| 2024-11-22T09:58:02 |
2025-04-01T06:40:33.760991
|
{
"authors": [
"sukhbinder",
"taketwo"
],
"repo": "taketwo/llm-ollama",
"url": "https://github.com/taketwo/llm-ollama/pull/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
72439855
|
Suggestion for good IDE setup
Hi,
can someone give advice on how to correctly set up an IDE to work on the project?
I tried Eclipse (& scala-ide), Netbeans & IDEA without much success.
In all 3 I can import the project, but in none of them do I have a correct setup with completion & navigation between Java, Scala & templates.
Of course, using sbt on the command line I can build/run the project correctly
Thanks for any help.
PS: as you might have understand I am new to scala world
I recommend IntelliJ + Scala Plugin. You can simply open the /gitbucket directory as a SBT project.
Hi, thanks for the hint.
But as I said, that is exactly what I did.
For example, opening the html templates does not lead to IntelliJ interpreting them as "scala templates". Thus completion on controllers, routes, ... does not occur. Is there anything special to configure? I think IDEA sees them as standard html files.
I come from the pure java/maven world and was used to Netbeans & Eclipse. I perhaps missed something in the IDEA setup, but I doubt it. I am using the latest IDEA community edition (14.1.2), in which I activated scala support.
|
gharchive/issue
| 2015-05-01T14:10:52 |
2025-04-01T06:40:33.764544
|
{
"authors": [
"McFoggy",
"takezoe"
],
"repo": "takezoe/gitbucket",
"url": "https://github.com/takezoe/gitbucket/issues/737",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1310911857
|
Bump rust-lightning to v0.0.108
This is needed because recent versions of rust-lightning expose the BigSize struct (used in TLV encoding) which will be used in some messages in the tower's Lightning interface (check bolt13).
Related to #31
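For reference, BigSize is the BOLT variable-length integer: big-endian and minimally encoded, unlike Bitcoin's little-endian CompactSize. A quick sketch of the wire format in Python (an illustration of the encoding only — the tower itself would use rust-lightning's BigSize type):

```python
import struct

def bigsize_encode(v: int) -> bytes:
    """Encode an integer in the BOLT BigSize format (big-endian, minimal)."""
    if v < 0xFD:
        return struct.pack(">B", v)
    if v <= 0xFFFF:
        return b"\xfd" + struct.pack(">H", v)
    if v <= 0xFFFFFFFF:
        return b"\xfe" + struct.pack(">I", v)
    return b"\xff" + struct.pack(">Q", v)

print(bigsize_encode(253).hex())  # fd00fd
```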
I guess we can bump this as part of your SoB main PR, or do you need it early for any other purpose?
@sr-gi
Not for a specific thing, just separation of concerns.
These diffs look quite unrelated to the others.
Right. I'll merge it after the plugin release then.
|
gharchive/pull-request
| 2022-07-20T11:39:42 |
2025-04-01T06:40:33.817560
|
{
"authors": [
"meryacine",
"sr-gi"
],
"repo": "talaia-labs/rust-teos",
"url": "https://github.com/talaia-labs/rust-teos/pull/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1270970156
|
Ghosting when reopening epub books
After importing an epub book and reopening it to read, turning pages leaves ghost images; it looks like the content of the first loaded page is not cleared.
Version: 3.5.9
I found this issue too; it may be caused by the new version of Readium. Sometimes images from the previous page even remain on the current one.
But for now the problem can be worked around: while reading, open Page Settings and change the theme color to black text on a white background.
Thanks @NeedforGit for the workaround; it temporarily makes reading possible again, but ghosting is still visible when turning pages. Looking forward to a more complete fix.
I have this problem too
Same here, I thought I had broken something myself
Not using the ebook's built-in default theme avoids the problem
|
gharchive/issue
| 2022-06-14T15:18:40 |
2025-04-01T06:40:33.825722
|
{
"authors": [
"NeedforGit",
"fly2sky2000",
"ql-isaac",
"xuehuayous"
],
"repo": "talebook/talebook",
"url": "https://github.com/talebook/talebook/issues/197",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1805495509
|
npm create tamagui failing
Current Behavior
run npm create tamagui
error:
@tamagui/toast@npm:1.39.9: The remote server failed to provide the requested resource
➤ YN0035: │ Response Code: 404 (Not Found)
➤ YN0035: │ Request Method: GET
➤ YN0035: │ Request URL: https://registry.yarnpkg.com/@tamagui/toast/-/toast-1.39.9.tgz
➤ YN0035: │ @tamagui/web@npm:1.39.9: The remote server failed to provide the requested resource
➤ YN0035: │ Response Code: 404 (Not Found)
➤ YN0035: │ Request Method: GET
➤ YN0035: │ Request URL: https://registry.yarnpkg.com/@tamagui/web/-/web-1.39.9.tgz
Expected Behavior
npm create tamagui is successful
Tamagui Version
not applicable
Reproduction
not applicable
System Info
System:
OS: macOS 13.0
CPU: (12) arm64 Apple M2 Pro
Memory: 76.44 MB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 16.18.1 - ~/.nvm/versions/node/v16.18.1/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v16.18.1/bin/yarn
npm: 8.19.2 - ~/.nvm/versions/node/v16.18.1/bin/npm
Browsers:
Chrome: 114.0.5735.198
Safari: 16.1
npm was down
|
gharchive/issue
| 2023-07-14T20:07:22 |
2025-04-01T06:40:33.866812
|
{
"authors": [
"matthewhausman",
"natew"
],
"repo": "tamagui/tamagui",
"url": "https://github.com/tamagui/tamagui/issues/1423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
107529810
|
Add search overloads that take in CancellationToken
Once a search is fired off - there is no way to cancel the search process. I would suggest we add an overload to the search methods that takes in a cancellation token - which will stop the polling process in the background.
We're seeing a bunch of users that fire off wrong searches or decide to change their search mid-search. In order to save resources - I would like to be able to cancel the process.
Something along the lines of
Task<List<Itinerary>> QueryFlight(FlightQuerySettings flightQuerySettings, CancellationToken token);
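A sketch of how the polling loop behind such an overload might observe the token (method and member names like CreateSessionAsync, PollSessionAsync and PollInterval are hypothetical, not part of the actual library):

```csharp
public async Task<List<Itinerary>> QueryFlight(
    FlightQuerySettings flightQuerySettings, CancellationToken token)
{
    // CreateSessionAsync/PollSessionAsync/PollInterval are hypothetical names.
    var session = await CreateSessionAsync(flightQuerySettings);
    while (true)
    {
        token.ThrowIfCancellationRequested();       // abort between polls
        var poll = await PollSessionAsync(session);
        if (poll.IsComplete)
            return poll.Itineraries;
        await Task.Delay(PollInterval, token);      // Task.Delay observes the token too
    }
}
```

Passing the token into Task.Delay means a cancellation takes effect immediately rather than after the current wait finishes.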
Good idea @jochenvanwylick.
Off:
Sorry I was not too responsive lately on the other open issues. I owe you at least one fix for two weeks now. I'm on my way of implementing these.
@tamasvajk No worries !
|
gharchive/issue
| 2015-09-21T14:51:33 |
2025-04-01T06:40:33.872176
|
{
"authors": [
"jochenvanwylick",
"tamasvajk"
],
"repo": "tamasvajk/SkyScanner",
"url": "https://github.com/tamasvajk/SkyScanner/issues/18",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
63281822
|
Syrus GPS
Configure syrus y traccar no recibe la información pero con la aplicación para android si funciona.
En syrus desk dice que esta conectado al servidor.
Please provide logs (tracker-server.log file).
2015-03-20 11:31:38 INFO: Starting server...
2015-03-20 11:31:38 INFO: Operating System name: Linux version: 3.18.7-v7+ architecture: arm
2015-03-20 11:31:38 INFO: Java Runtime name: Java HotSpot(TM) Client VM vendor: Oracle Corporation version: 25.0-b70
2015-03-20 11:31:38 INFO: Memory Limit heap: 224mb non-heap: 0mb
2015-03-20 11:31:38 INFO: Version: 2.11-SNAPSHOT
2015-03-20 12:08:49 DEBUG: [5031 <- 190.91.240.224] - HEX: 3e52584152543b322e312e34313b49443d3335363631323032363336323034383c0d0a
2015-03-20 12:08:52 DEBUG: [5031 -> 190.91.240.224] - HEX: 333536363132303236333632303438
2015-03-20 14:10:19 DEBUG: [5005 <- 78.24.50.202] - HEX: 474554202f20485454502f312e300d0a557365722d4167656e743a204f706572612f392e383020285831313b204c696e7578207838365f3634292050726573746f2f322e31322e3338382056657273696f6e2f31322e31360d0a0d0a
2015-03-20 14:10:19 WARN: String index out of range: 0 - java.lang.StringIndexOutOfBoundsException (String.java:646)
2015-03-20 14:10:19 INFO: Closing connection by exception
2015-03-20 14:10:19 INFO: Closing connection by disconnect
2015-03-20 14:46:32 DEBUG: [5031 <- 190.91.240.218] - HEX: 3e52584152543b322e312e34313b49443d3335363631323032363336323034383c0d0a
2015-03-20 14:46:32 DEBUG: [5031 -> 190.91.240.218] - HEX: 333536363132303236333632303438
2015-03-20 15:00:00 DEBUG: [5031 <- 190.91.240.196] - HEX: 3e52584152543b322e312e34383b49443d3335363631323032363336323034383c0d0a
2015-03-20 15:00:00 DEBUG: [5031 -> 190.91.240.196] - HEX: 333536363132303236333632303438
2015-03-20 15:15:14 DEBUG: [5031 <- 190.91.240.196] - HEX: 3e52584152543b322e312e34383b49443d3335363631323032363336323034383c0d0a
2015-03-20 15:15:14 DEBUG: [5031 -> 190.91.240.196] - HEX: 333536363132303236333632303438
2015-03-20 15:21:44 DEBUG: [5031 <- 190.91.240.196] - HEX: 3e52584152543b322e312e34383b49443d3335363631323032363336323034383c0d0a
2015-03-20 15:21:44 DEBUG: [5031 -> 190.91.240.196] - HEX: 333536363132303236333632303438
2015-03-20 15:38:13 DEBUG: [5031 <- 190.91.240.196] - HEX: 3e52584152543b322e312e34383b49443d3335363631323032363336323034383c0d0a
2015-03-20 15:38:13 DEBUG: [5031 -> 190.91.240.196] - HEX: 333536363132303236333632303438
2015-03-20 15:53:26 DEBUG: [5031 <- 190.91.240.196] - HEX: 3e52584152543b322e312e34383b49443d3335363631323032363336323034383c0d0a
2015-03-20 15:53:26 DEBUG: [5031 -> 190.91.240.196] - HEX: 333536363132303236333632303438
Your device doesn't send any location data. It send only messages like this:
>RXART;2.1.41;ID=356612026362048<
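Decoding the hex payloads from the log confirms this: only the device's login/ID frame and the server's acknowledgement are exchanged, with no position report (plain Python, independent of Traccar):

```python
# The frame received from the device (port 5031):
login = bytes.fromhex(
    "3e52584152543b322e312e34313b49443d3335363631323032363336323034383c0d0a"
).decode("ascii")
# The reply traccar sends back:
reply = bytes.fromhex("333536363132303236333632303438").decode("ascii")

print(login.strip())  # >RXART;2.1.41;ID=356612026362048<
print(reply)          # 356612026362048
```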
And what does it have to send? In what format?
It has to send location data.
Do you know the command to configure the Syrus GPS?
No, unfortunately I don't know how to configure the device.
|
gharchive/issue
| 2015-03-20T18:48:07 |
2025-04-01T06:40:33.898233
|
{
"authors": [
"nando1993",
"tananaev"
],
"repo": "tananaev/traccar",
"url": "https://github.com/tananaev/traccar/issues/1126",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
123890723
|
table name "USER" can not use in oracle database
in the database Oracle can not be created with a table named "user".
http://docs.oracle.com/cd/B19306_01/server.102/b14200/ap_keywd.htm
and also
VARCHAR2 Maximum size: 4000 bytes
https://docs.oracle.com/cd/B28359_01/server.111/b28320/limits001.htm
The fact that "user" is a keyword doesn't mean that you can't use it. It's a keyword in MySQL as well and it works fine.
As for VARCHAR size:
http://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF55623
Beginning with Oracle Database 12c, you can specify a maximum size of 32767 bytes for the VARCHAR2, NVARCHAR2, and RAW data types.
12c is not available as a free version, and in Oracle 12c the 32767-byte mode for varchar2 must additionally be enabled
in Oracle 11 XE (the free Oracle) varchar2 can only be 4000
In Oracle I can't create a table named "user" or "USER" (these are different names in Oracle), and that is a fact
the scripts that create the traccar database cannot work with Oracle, and I had to work hard to get the traccar server started
It's really hard to cater for all available database engines. If you have any ideas how to fix the problem, please let me know.
You can do some scripts for various databases
I just don't have time to write scripts for all available databases. If you provide scripts, I can include them into the project.
It seems you can use a workaround. I only looked very quickly. You need to add a special case for Oracle DB into \database\DataManager.java\getObjectsTableName: wrap the name in double quotes. Also do the same when the DB is deployed. The main idea is that [select * from user] causes an error in Oracle, but [select * from "user"] will work fine if you create the table in this manner: [create table "user" ( id number );]
(NOTE: in this case the table name is case sensitive)
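A minimal sketch of that quoted-identifier workaround (Oracle treats quoted identifiers as case sensitive from then on):

```sql
-- Quoting the identifier bypasses the reserved-word restriction,
-- but makes the name case sensitive.
CREATE TABLE "user" ( id NUMBER );
SELECT * FROM "user";
-- SELECT * FROM user;  -- fails: USER is a reserved word
```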
Sure you also need to adjust field types.
There is no "user" table in Traccar anymore, so it shouldn't be a problem.
|
gharchive/issue
| 2015-12-25T17:05:15 |
2025-04-01T06:40:33.904188
|
{
"authors": [
"DenRozhko",
"IhorDavydenko",
"tananaev"
],
"repo": "tananaev/traccar",
"url": "https://github.com/tananaev/traccar/issues/1621",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
170193727
|
positions API swagger json
The parameters deviceId, from and to for the API call GET /positions are marked as required in the swagger JSON.
However, the API also accepts a request without parameters and returns a list of latest positions for all devices (including deviceId, which is not defined as a property of the Position model).
A client generated from the swagger JSON won't include a call for GET /positions without the additional parameters.
Which API call should/can I use for getting the latest positions for my devices?
You can use /positions without parameters to get latest positions. You can also use WebSocket connection for live updates.
@tananaev, does the WebSocket currently allow external access?
Server doesn't limit any access. It uses same HTTP server for WebSockets as it does for REST API.
I got this error when I tried to connect to the WebSocket:
WebSocket connection to 'ws://37.230.96.30:8082/api/socket' failed: Error during WebSocket handshake: Unexpected response code: 503
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state
Currently this server has only one device and the service is running,
and here is how I am trying to connect (it is a copy/paste with minimal modification of the official traccar web):
var socket, self = this;
socket = new WebSocket('ws://37.230.96.30:8082/api/socket');
socket.onclose = function (event) {
console.log("WebSocket closed");
// self.asyncUpdate(false);
};
socket.onmessage = function (event) {
var i, j, store, data, array, entity, device, typeKey, alarmKey, text, geofence;
data = Ext.decode(event.data);
if (data.devices) {
array = data.devices;
store = Ext.getStore('Devices');
for (i = 0; i < array.length; i++) {
entity = store.getById(array[i].id);
if (entity) {
entity.set({
status: array[i].status,
lastUpdate: array[i].lastUpdate
}, {
dirty: false
});
}
}
}
if (data.positions && !data.events) {
array = data.positions;
store = Ext.getStore('LatestPositions');
for (i = 0; i < array.length; i++) {
entity = store.findRecord('deviceId', array[i].deviceId, 0, false, false, true);
if (entity) {
entity.set(array[i]);
} else {
store.add(Ext.create('Traccar.model.Position', array[i]));
}
}
}
if (data.events) {
array = data.events;
store = Ext.getStore('Events');
for (i = 0; i < array.length; i++) {
store.add(array[i]);
if (array[i].type === 'commandResult' && data.positions) {
for (j = 0; j < data.positions.length; j++) {
if (data.positions[j].id === array[i].positionId) {
text = data.positions[j].attributes.result;
break;
}
}
text = Strings.eventCommandResult + ': ' + text;
} else if (array[i].type === 'alarm' && data.positions) {
alarmKey = 'alarm';
text = Strings[alarmKey];
if (!text) {
text = alarmKey;
}
for (j = 0; j < data.positions.length; j++) {
if (data.positions[j].id === array[i].positionId && data.positions[j].attributes.alarm !== null) {
if (typeof data.positions[j].attributes.alarm === 'string' && data.positions[j].attributes.alarm.length >= 2) {
alarmKey = 'alarm' + data.positions[j].attributes.alarm.charAt(0).toUpperCase() + data.positions[j].attributes.alarm.slice(1);
text = Strings[alarmKey];
if (!text) {
text = alarmKey;
}
}
break;
}
}
} else {
typeKey = 'event' + array[i].type.charAt(0).toUpperCase() + array[i].type.slice(1);
text = Strings[typeKey];
if (!text) {
text = typeKey;
}
}
if (array[i].geofenceId !== 0) {
geofence = Ext.getStore('Geofences').getById(array[i].geofenceId);
if (typeof geofence !== 'undefined') {
text += ' \"' + geofence.getData().name + '"';
}
}
device = Ext.getStore('Devices').getById(array[i].deviceId);
if (typeof device !== 'undefined') {
if (self.mutePressed()) {
self.beep();
}
Ext.toast(text, device.get('name'));
}
}
}
};
|
gharchive/issue
| 2016-08-09T15:18:06 |
2025-04-01T06:40:33.909658
|
{
"authors": [
"al3x1s",
"stevenbouma",
"tananaev"
],
"repo": "tananaev/traccar",
"url": "https://github.com/tananaev/traccar/issues/2199",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
239044808
|
Web not found
Hello folks, I have an issue with the web UI: the web view is always blank from localhost http://prntscr.com/fp0fn8
Have you downloaded the web app?
Yes, I downloaded the traccar web UI and followed the instructions for traccar web in NetBeans, but it said
"Attaching to localhost:8000
Connection refused.
"
I need to see the source code folder structure that you have.
Also, if something doesn't work, please provide full information. What steps you followed, operating system, on what step it failed, ideally with screenshots.
I managed to run it, but an error pops up after I add a new user device http://prntscr.com/fp2swu
That's not Traccar.
Here is a screenshot from NetBeans http://prntscr.com/fp3c0f; I'm really not sure if I need to run both traccar and traccar web.
I think you are confused. There is the official traccar-web here:
https://github.com/tananaev/traccar-web
There is also an unofficial traccar-web. That's the one you are using. If that's what you want, you are asking in the wrong place.
Where can I get the latest open source traccar web?
https://github.com/tananaev/traccar-web/releases
Thank you for response @Turbovix and @tananaev
|
gharchive/issue
| 2017-06-28T03:44:37 |
2025-04-01T06:40:33.918000
|
{
"authors": [
"Turbovix",
"neroshin",
"tananaev"
],
"repo": "tananaev/traccar",
"url": "https://github.com/tananaev/traccar/issues/3301",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1957434216
|
IOS-4919 Update Token properties
They will not be used
First I'll wait until I rip them out of the app
|
gharchive/pull-request
| 2023-10-23T15:23:51 |
2025-04-01T06:40:33.922502
|
{
"authors": [
"Balashov152"
],
"repo": "tangem/blockchain-sdk-swift",
"url": "https://github.com/tangem/blockchain-sdk-swift/pull/442",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
108024193
|
Mix of landuse kind wood and park in Prospect Park Brooklyn
Not sure what to do here:
Add wood to the park layer’s filter.
ok, added
https://github.com/tangrams/eraser-map/blob/gh-pages/eraser-map.yaml#L1705
|
gharchive/issue
| 2015-09-23T22:58:35 |
2025-04-01T06:40:33.941011
|
{
"authors": [
"nvkelso",
"sensescape"
],
"repo": "tangrams/eraser-map",
"url": "https://github.com/tangrams/eraser-map/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
128951932
|
Use rounding when encoding position attributes as integers
Fixes some small (but visible) discrepancies in vertex positions at the edges of tiles.
LGTM.
|
gharchive/pull-request
| 2016-01-26T21:41:52 |
2025-04-01T06:40:33.941930
|
{
"authors": [
"blair1618",
"tallytalwar"
],
"repo": "tangrams/tangram-es",
"url": "https://github.com/tangrams/tangram-es/pull/504",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
774948528
|
Firefox assumes that created data blobs (using data Object instead of an URL) are XML, outputs errors
TANGRAM VERSION: 0.21.1
ENVIRONMENT: win7 64bit Firefox 84.0.1, win10 64bit Firefox 85.0, (no errors:) win10 64bit Edge 87
TO REPRODUCE THE ISSUE, FOLLOW THESE STEPS:
use scene.setDataSource with type "GeoJSON" and the data property instead of a URL
look at the console output (at layer rebuild)
RESULT:
There are errors at the blob locations: "XML Parsing Error: not well-formed".
Apparently the browser assumes blobs to be XML by default, if no type is given.
Outside of the console, I could not notice negative effects.
EXPECTED RESULT:
No such errors appearing in the console.
It seems like this is fixed, at least for (Geo)JSON data, by providing an object like { type: 'application/geo+json' } or at least { type: 'application/json' } to the Blob constructor here: https://github.com/tangrams/tangram/blob/990d2608c7dce2c3801c2cfd676e5c2e5b74c743/src/scene/scene.js#L1037
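As a minimal illustration of the fix described above (runnable in Node 18+, where Blob is global; the real change lives in tangram's scene.js), passing an explicit MIME type to the Blob constructor is what stops Firefox from falling back to XML:

```javascript
// Illustrative sketch only; the actual fix is inside tangram's scene.js.
const geojson = { type: 'FeatureCollection', features: [] };

// No type given: Firefox assumes the blob is XML and logs parse errors.
const untyped = new Blob([JSON.stringify(geojson)]);

// Explicit type: the browser treats the blob as (Geo)JSON.
const typed = new Blob([JSON.stringify(geojson)], { type: 'application/geo+json' });

console.log(untyped.type); // ''
console.log(typed.type);   // 'application/geo+json'
```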
Thanks for the report and easy fix @d3d9! I wonder if this behavior has changed at some point in Firefox versions, but no matter now... this will be released in v0.21.2 and the issue will be closed then.
Fixed in v0.22.0
|
gharchive/issue
| 2020-12-26T23:34:20 |
2025-04-01T06:40:33.948230
|
{
"authors": [
"bcamper",
"d3d9"
],
"repo": "tangrams/tangram",
"url": "https://github.com/tangrams/tangram/issues/772",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
824299227
|
Update scalatest to 3.2.6
Updates org.scalatest:scalatest from 3.1.4 to 3.2.6.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalatest", artifactId = "scalatest" } ]
labels: test-library-update, semver-minor
Superseded by #113.
|
gharchive/pull-request
| 2021-03-08T08:09:35 |
2025-04-01T06:40:33.969308
|
{
"authors": [
"scala-steward"
],
"repo": "tanishiking/scalaunfmt",
"url": "https://github.com/tanishiking/scalaunfmt/pull/109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
480699170
|
Header groups colSpan doesn't adjust with hidden columns
Using v7?
Thanks for using the alpha version of React Table v7! We're very excited about it.
Yes, and I'm excited too! 🔥
Describe the bug
When hiding columns, the colSpan of their respective headers doesn't change, resulting in columns being shifted to wrong headers.
To Reproduce
Steps to reproduce the behavior:
Go to https://codesandbox.io/s/tannerlinsleyreact-table-basic-b33h9
Play around with the checkboxes to hide the columns
Notice how the colSpan of the header groups doesn't change and columns get shifted to wrong headers
Expected behavior
The colSpan property of the header groups should be calculated with hidden columns in mind.
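A hedged sketch of that expected calculation (the isVisible/columns shapes below are assumptions loosely modeled on react-table's header objects, not its actual internals):

```javascript
// Count only the *visible* leaf columns beneath a header group; that count
// is what the group's colSpan should be once columns can be hidden.
function visibleColSpan(header) {
  if (!header.columns) {
    return header.isVisible ? 1 : 0; // leaf column
  }
  return header.columns.reduce((span, col) => span + visibleColSpan(col), 0);
}

const nameGroup = {
  Header: 'Name',
  columns: [
    { Header: 'First Name', isVisible: true },
    { Header: 'Last Name', isVisible: false }, // hidden via the checkbox
  ],
};

console.log(visibleColSpan(nameGroup)); // 1, so "Age" no longer shifts under "Name"
```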
Screenshots
All columns show in this one.
Last name is hidden, age gets shifted to the Name header group.
This bug seems to be easy to fix. I'm happy to follow up with a PR!
See my commit for a possible fix.
I don't know why prettier changed that many lines though. 😕
I would love a PR!
The issue is occurring again. https://codesandbox.io/s/tannerlinsleyreact-table-basic-b33h9
I'm going to have a look at this again. A test for this case might be a good idea.
Please reopen this issue @tannerlinsley
|
gharchive/issue
| 2019-08-14T14:05:37 |
2025-04-01T06:40:33.986154
|
{
"authors": [
"mbrandau",
"tannerlinsley"
],
"repo": "tannerlinsley/react-table",
"url": "https://github.com/tannerlinsley/react-table/issues/1446",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
490737084
|
JSX cell is unexpectedly rendered to '[Object Object]' in v7.
JSX cell is rendered to '[Object Object]'
Affected version: 7.0.0-alpha.32 - 7.0.0-alpha.34
You can achieve the same with
columns: [ { Header: 'First Name', accessor: 'firstName', Cell: ({ cell }) => <div>{cell.value}</div> } ]
This may be intentional functionality to combat cross-site scripting.
This is fixed in the latest release. All renderers now support Function components, Class components, non-component functions, JSX elements, and primitives.
Also, @elivoa, it's important to note that accessor is meant to resolve to a primitive data type (number, string, boolean, etc.), so that it can be used to sort and filter the table. If you want to customize its display, please do the following:
{
accessor: row => row.firstName,
Cell: ({ cell: { value } }) => <div>{value}</div>
}
|
gharchive/issue
| 2019-09-08T10:23:43 |
2025-04-01T06:40:33.989459
|
{
"authors": [
"Codar97",
"elivoa",
"tannerlinsley"
],
"repo": "tannerlinsley/react-table",
"url": "https://github.com/tannerlinsley/react-table/issues/1505",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
299696114
|
How to remove the script from the index.html to use useGoogleGeoApi ?
I tried to remove the script from index.html
<script type="text/javascript" src="https://maps.google.com/maps/api/js?key=myKey&libraries=places"></script>
to
public userSettings5: any = {
geoCountryRestriction: ['fr'],
geoTypes: ['address'],
showSearchButton: false,
showRecentSearch: false,
useGoogleGeoApi: true
};
But it does not work.
hi,
Without the Google Maps script mentioned in index.html, the directive won't work with the Google API.
If you want to remove the script, then you have to implement your own API and give its link to the directive to use. Please refer to the demo and read the documentation.
Exactly; it's because I use agm-core at the same time, so with the src script I have a conflict between two Google APIs loaded at the same time.
I had the same issue because two scripts were being loaded at the same time which was causing the issue.
I was able to make it work by not loading the Google Maps API in AgmCoreModule.
If you put this in your AppModule, AgmCoreModule does not load Google Maps API:
providers: [ ...BROWSER_GLOBALS_PROVIDERS, {provide: MapsAPILoader, useClass: NoOpMapsAPILoader} ],
Of course you have to leave the <script> in index.html.
This is a suboptimal but easy solution to make the two libraries cooperate.
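For reference, a minimal sketch of what such a NoOpMapsAPILoader could look like (plain JavaScript for brevity; the class name comes from the comment above, the body is an assumption): it never injects its own script tag and simply resolves once the copy loaded via index.html has made google.maps available:

```javascript
// Hypothetical no-op loader: reuses the Google Maps API that index.html
// already loaded instead of injecting a second <script> tag.
class NoOpMapsAPILoader {
  load() {
    if (typeof window !== 'undefined' && window.google && window.google.maps) {
      return Promise.resolve();
    }
    return Promise.reject(new Error('Google Maps API was not loaded via index.html'));
  }
}
```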
Unfortunately I need the agm core to display a Google map (with markers, ...) on another page of my website, so ..
Same here, but for me that is still working with this setup.
AgmCoreModule.forRoot({
apiKey: 'AIzaSyDH1n-WWp1WgfRbK17-J0-BTlkF7i_czMg'
}),
//ng4-geoautocomplete
Ng4GeoautocompleteModule.forRoot(),
//infinite scroll
InfiniteScrollModule
],
This code gave me an API conflict when I call ng4-geoautocomplete and agm-core at the same time
Console screen error
|
gharchive/issue
| 2018-02-23T12:29:11 |
2025-04-01T06:40:33.994596
|
{
"authors": [
"ishan123456789",
"remyblancke",
"tanoy009",
"wilgert"
],
"repo": "tanoy009/ng4-geoautocomplete",
"url": "https://github.com/tanoy009/ng4-geoautocomplete/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
660436568
|
custom baudrate for darwin and linux
Uses the IOCTL for darwin, and termios2 for linux custom baudrate. Retains the non-darwin (termios 1) behavior for other posix targets. The change also modified the DTR "hangup" behavior for linux to more accurately mimic a terminal.
I was able to test this on OSX, Windows and Linux running 1M BAUD on an external device. Working a-ok here.
I've been using this for Raspbian and it works great.
|
gharchive/pull-request
| 2020-07-18T23:05:03 |
2025-04-01T06:40:34.067528
|
{
"authors": [
"colinrgodsey",
"jackjameshoward",
"jaredwolff"
],
"repo": "tarm/serial",
"url": "https://github.com/tarm/serial/pull/113",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1324663373
|
Change configuration strategy
Auto-init strategy didn't work well during testing. We need a new strategy for handling of the configuration file:
The configuration file must be loaded when taro cli is executed. (Config dir resolution has been fixed to behave exactly according to the XDG spec)
~When the config file is not found then the executed command will print a message, suspend the execution, run config create and continues in case the config was created.~
~An exception to this is the taro exec command which will print a warning and load the default config file to be able to continue the execution. This is important because this command is expected to be executed in a non-interactive shell.~
We also need a smarter taro config create command. When this command is executed we need a prompt similar to this one:
Select where to create the config file:
User specific config directory:
[1] $XDG_CONFIG_HOME/taro or ~/.config/taro
System directories (You must have the write permissions!)
[2] {The first directory from $XDG_CONFIG_DIRS}/taro or /etc/xdg/taro
[3] /etc/taro
Choose any of [1,2,3]: _
Bonus: taro config create --jobs for creating the jobs.yaml file template
The work will be done in #65
|
gharchive/issue
| 2022-08-01T16:18:25 |
2025-04-01T06:40:34.070266
|
{
"authors": [
"StanSvec"
],
"repo": "taro-suite/taro",
"url": "https://github.com/taro-suite/taro/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
920921487
|
CI tests on Windows+macOS
..hopefully. As per https://github.com/taskchampion/taskchampion/issues/274
These failures now seem like legitimate failures which should be fixed by #275
🚀
|
gharchive/pull-request
| 2021-06-15T02:15:22 |
2025-04-01T06:40:34.090598
|
{
"authors": [
"dbr"
],
"repo": "taskchampion/taskchampion",
"url": "https://github.com/taskchampion/taskchampion/pull/276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
135859874
|
Will not overwrite file if it exists
My understanding is that if a file already exists it should just be overwritten. I am receiving this error message when trying to overwrite a file.
Tue, 23 Feb 2016 20:46:57 GMT | starting dump
Tue, 23 Feb 2016 20:46:57 GMT | got 1 objects from source elasticsearch (offset: 0)
Tue, 23 Feb 2016 20:46:57 GMT | Error Emitted => File /usr/data/car.json already exists, quitting
Tue, 23 Feb 2016 20:46:57 GMT | Total Writes: 0
Tue, 23 Feb 2016 20:46:57 GMT | dump ended with error (set phase) => Error: File /usr/data/car.json already exists, quitting
Thanks,
Bret
We changed this behavior (safety++) in v1 https://github.com/taskrabbit/elasticsearch-dump/releases/tag/v1.0.0
It looks like we forgot to update the README, which I've done here https://github.com/taskrabbit/elasticsearch-dump/commit/8aa9b7c37f579fc31c5997ff676873e17587f960
Thank you
|
gharchive/issue
| 2016-02-23T20:49:00 |
2025-04-01T06:40:34.106434
|
{
"authors": [
"bretd25",
"evantahler"
],
"repo": "taskrabbit/elasticsearch-dump",
"url": "https://github.com/taskrabbit/elasticsearch-dump/issues/170",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
625244443
|
Credit- Multiple Line Credit- Created multiple Credit orders one for each line that was credited back
BC order 724
Myhq Order # 26929028
Issued the credit back; seen via myhq, I see 2 credit orders created:
26935340
26935341
It should be like this to match BC
Looks like it only gave me one credit order number in the table.
and note- I messed up and forgot to delete the tax part out... oops
TaxTotal and ShippingShippingCreditAmount are "order level" values, so they should be the same on all the items credited for the order. Having them differ caused uspI_CreateCreditOrderForExternalProcess to process the items as separate orders. I did add code so that when this happens it will take the max of each of the values, just in case this happens again.
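The fix described can be sketched like this (field and function names here are illustrative JavaScript stand-ins for the stored procedure's columns, not the actual T-SQL):

```javascript
// When "order level" values differ across a credit's line items, collapse
// them to the max of each value so the items stay in a single credit order.
function normalizeOrderLevelValues(items) {
  const taxTotal = Math.max(...items.map(i => i.taxTotal));
  const shippingCredit = Math.max(...items.map(i => i.shippingCreditAmount));
  return items.map(i => ({ ...i, taxTotal, shippingCreditAmount: shippingCredit }));
}

const lines = [
  { sku: 'A', taxTotal: 1.25, shippingCreditAmount: 0 },
  { sku: 'B', taxTotal: 0, shippingCreditAmount: 3.5 }, // mismatched by mistake
];
console.log(normalizeOrderLevelValues(lines));
// both items end up with taxTotal 1.25 and shippingCreditAmount 3.5
```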
Looks like this is fixed
Closing bug
|
gharchive/issue
| 2020-05-26T22:38:40 |
2025-04-01T06:40:34.117049
|
{
"authors": [
"monicakarnes",
"sevincent"
],
"repo": "tastefully-simple/cornerstone",
"url": "https://github.com/tastefully-simple/cornerstone/issues/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977436869
|
fix(linux): avoid unwrapping in Window::primary_monitor
Fixes #832
What kind of change does this PR introduce?
[x] Bugfix
[ ] Feature
[ ] Docs
[ ] Code style update
[ ] Refactor
[ ] Build-related changes
[ ] Other, please describe:
Does this PR introduce a breaking change?
[ ] Yes
[x] No
Checklist
[x] This PR will resolve #832
[x] A change file is added if any packages will require a version bump due to this PR per the instructions in the readme.
[x] I have added a convincing reason for adding this feature, if necessary
[x] It can be built on all targets and pass CI/CD.
Other information
Thanks for resolving this issue!
|
gharchive/pull-request
| 2023-11-04T17:23:01 |
2025-04-01T06:40:34.140391
|
{
"authors": [
"olivierlemasle",
"wusyong"
],
"repo": "tauri-apps/tao",
"url": "https://github.com/tauri-apps/tao/pull/835",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1680887616
|
Error Invalid byte 45, offset 0.
See the Workflow Run for more information!
I have already set up the GitHub token, Tauri private key, and password.
You'll also need to update the pubKey in tauri.conf.json to match your new private key/password.
But i'm not sure if that could cause this error message, but worth a try.
Also, for generating the keys, you did follow this guide, right? https://tauri.app/v1/guides/distribution/updater
|
gharchive/issue
| 2023-04-24T09:57:17 |
2025-04-01T06:40:34.142237
|
{
"authors": [
"FabianLars",
"NguyenDuck"
],
"repo": "tauri-apps/tauri-action",
"url": "https://github.com/tauri-apps/tauri-action/issues/443",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2553943356
|
[bug] Frequent hiding and displaying of webviewWindow significantly increases CPU usage
Describe the bug
Thanks to the team for proposing the webviewWindow hide and show functions in my last issue and adopting them so quickly. Because my Tauri v2 project is used in a production environment, I cannot provide you with video recordings. However, after my comparison, frequently showing and hiding a webviewWindow uses about 2%-3% more CPU than changing the position of the webviewWindow, so currently I still hide and show by changing the position of the webviewWindow, because it takes up much less CPU. Finally, I would like to make a small suggestion: if you could add an optional config parameter to webview.show(config) for configuring the display location, it would be even more perfect.
Reproduction
No response
Expected behavior
No response
Full tauri info output
[✔] Environment
- OS: Windows 10.0.19045 X64
✔ WebView2: 129.0.2792.52
✔ MSVC:
- Visual Studio Enterprise 2022
- Visual Studio Build Tools 2022
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ Cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.17.0
- yarn: 1.22.19
- npm: 10.8.2
[-] Packages
- tauri [RUST]: 2.0.0-rc.16
- tauri-build [RUST]: 2.0.0-rc.13
- wry [RUST]: 0.44.1
- tao [RUST]: 0.30.2
- @tauri-apps/api [NPM]: 2.0.0-rc.6
- @tauri-apps/cli [NPM]: 1.4.0 (outdated, latest: 1.6.2)
Stack trace
No response
Additional context
I don't know if this is the same issue I'm having, but the tauri website destroys my CPU and makes the laptop go to 90 °C on an M1 Max.
Example this page: https://tauri.app/concept/architecture/
|
gharchive/issue
| 2024-09-28T01:38:42 |
2025-04-01T06:40:34.149430
|
{
"authors": [
"gageracer",
"moom-en"
],
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/issues/11169",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1664692335
|
npx tauri doesn't work unless @tauri/cli already installed
Describe the problem
I was setting up a CI workflow for https://github.com/ActivityWatch/aw-tauri, and it uses the tauri-action, which ran the command npx tauri build. The workflow was based on the docs, I had only removed the npm install step.
However, that command failed, as it tried to install the deprecated tauri npm package, not @tauri/cli. With errors like ERROR: tauri.conf.json > tauri has unknown property updater +0ms.
If I had run npm install first, it would have picked the right tauri binary from the @tauri/cli package.
Describe the solution you'd like
It would be nice if the old tauri package could somehow be updated to allow for running npx tauri without having first installed @tauri/cli.
Alternatives considered
I suppose updating the old tauri package with a major-semver version bump release that simply depends on @tauri/cli could work, but I'm not sure whether it is the right call.
Additional context
I commented in the following issue about it earlier today, and it was suggested I create a proper issue here: https://github.com/tauri-apps/tauri-action/issues/113#issuecomment-1505211808
And finally, because I have the opportunity: thanks to everyone who's working on Tauri! It looks truly amazing and I'm hyped and likely to adopt it.
The docs I followed were: https://tauri.app/v1/guides/building/cross-platform/
And it turns out it uses the tauri-action which in turn uses npx (I updated the issue to correct that).
I can understand if npm install is necessary before, but the comment in the above docs seemed to suggest it was optional:
- name: Install frontend dependencies
# If you don't have `beforeBuildCommand` configured you may want to build your frontend here too.
Nvm, I'm dumb. Idk why I read that as "this is optional, tauri action will fix".
So I guess the fault is still mostly with me. Sad to give up on npx tauri not resolving to the correct package if not installed, but I can understand.
|
gharchive/issue
| 2023-04-12T14:34:13 |
2025-04-01T06:40:34.156109
|
{
"authors": [
"ErikBjare"
],
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/issues/6690",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
947231007
|
"cannot find type MenuHash in this scope"
When using:
tauri = { version = "1.0.0-beta.5", features = ["api-all", "system-tray"] }
This error is being thrown:
Compiling tauri-runtime-wry v0.1.4
error[E0412]: cannot find type `MenuHash` in this scope
--> /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/tauri-runtime-wry-0.1.4/src/menu.rs:269:35
|
269 | custom_menu_items: &mut HashMap<MenuHash, WryCustomMenuItem>,
| ^^^^^^^^ not found in this scope
|
help: consider importing this type alias
|
5 | use tauri_runtime::menu::MenuHash;
|
error: aborting due to previous error
This PR fix it.
What kind of change does this PR introduce? (check at least one)
[x] Bugfix
[ ] Feature
[ ] Docs
[ ] New Binding Issue #___
[ ] Code style update
[ ] Refactor
[ ] Build-related changes
[ ] Other, please describe:
Does this PR introduce a breaking change? (check one)
[ ] Yes. Issue #___
[x] No
The PR fulfills these requirements:
[ ] When resolving a specific issue, it's referenced in the PR's title (e.g. fix: #xxx[,#xxx], where "xxx" is the issue number)
[ ] A change file is added if any packages will require a version bump due to this PR per the instructions in the readme.
If adding a new feature, the PR's description includes:
[ ] A convincing reason for adding this feature (to avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it)
Other information:
Nice catch!
Could you add a change file please
There is an example;
https://github.com/tauri-apps/tauri/blob/764bc6631806ea1196e66e8045a7ce9a45e0f7ff/.changes/tauri-wry-migrate.md
@lucasfernog
Since we can have a system_tray without a menu, I think it would be better to have 3 different feature flags:
menu for all menu creation whether it is with a menu_bar or a system_tray
menu_bar for the menu_bar itself and would include the menu feature flag
system_tray for the tray itself
So if users just want a system_tray without a menu, they won't have to bundle the extra code for menu
I agree with you @amrbashir
Updated @lemarier
Thanks @dizda, we really do appreciate your contribution
Pleasure guys, tauri is fantastic to work with!
|
gharchive/pull-request
| 2021-07-19T03:46:09 |
2025-04-01T06:40:34.166011
|
{
"authors": [
"amrbashir",
"dizda",
"lemarier",
"lucasfernog"
],
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/pull/2240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
660979109
|
chore(tauri.js): Update yarn.lock
What kind of change does this PR introduce? (check at least one)
[ ] Bugfix
[ ] Feature
[ ] New Binding Issue #___
[ ] Code style update
[ ] Refactor
[ ] Build-related changes
[x] Other, please describe: Chore
Does this PR introduce a breaking change? (check one)
[ ] Yes. Issue #___
[x] No
The PR fulfills these requirements:
[x] It's submitted to the dev branch and not the latest branch
[ ] When resolving a specific issue, it's referenced in the PR's title (e.g. fix: #xxx[,#xxx], where "xxx" is the issue number)
[ ] A change file is added if any packages will require a version bump due to this PR per the instructions in the readme.
If adding a new feature, the PR's description includes:
[ ] A convincing reason for adding this feature (to avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it)
Other information:
yarn.lock was out of sync
🤔 I thought we weren't shipping lock-files?
Right, libraries don't usually ship lockfiles. However, yarn audit requires either a lockfile or node_modules folder
|
gharchive/pull-request
| 2020-07-19T18:05:39 |
2025-04-01T06:40:34.173012
|
{
"authors": [
"jbolda",
"rajivshah3"
],
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/pull/860",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
58497590
|
A TOC for examples
At that level, a list of all examples and what they do would help us a lot. Simply using the docstrings of the classes would be sufficient. For example:
Examples of WAMP with asyncio
basic: short description
rpc: short description
decorators: an application component registering RPC endpoints using decorators.
we have that now:
http://autobahn.ws/python/wamp/examples.html
http://autobahn.ws/python/websocket/examples.html
|
gharchive/issue
| 2015-02-22T10:11:21 |
2025-04-01T06:40:34.177098
|
{
"authors": [
"Vayel",
"oberstet"
],
"repo": "tavendo/AutobahnPython",
"url": "https://github.com/tavendo/AutobahnPython/issues/346",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
460192996
|
Training with CPU mode
/home/sm/venv/lib/python3.6/site-packages/torch/cuda/__init__.py:118: UserWarning:
Found GPU0 Quadro K4200 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
Traceback (most recent call last):
  File "train.py", line 90, in <module>
    main()
  File "train.py", line 86, in main
    diarization_experiment(model_args, training_args, inference_args)
  File "train.py", line 44, in diarization_experiment
    model = uisrnn.UISRNN(model_args)
  File "/home/sm/Speaker-Diarization/uisrnn/uisrnn.py", line 99, in __init__
    sigma2 * torch.ones(self.observation_dim).to(self.device))
RuntimeError: CUDA error: no kernel image is available for execution on the device
Can anyone suggest an alternative way to perform training?
Thanks in advance
Resolved by changing line 86 of uisrnn.py from
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
to
self.device = torch.device('cpu')
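Rather than hard-coding CPU, the fallback can be made conditional on the GPU's compute capability. A minimal sketch in pure Python (pick_device is a hypothetical helper; with PyTorch you would pass torch.cuda.is_available() and torch.cuda.get_device_capability()):

```python
def pick_device(cuda_available, capability):
    """Return a device string, falling back to CPU when the GPU's compute
    capability is below what this PyTorch build supports (>= 3.5 here)."""
    if cuda_available and tuple(capability) >= (3, 5):
        return "cuda:0"
    return "cpu"

# A Quadro K4200 reports capability (3, 0), so we fall back to CPU:
print(pick_device(cuda_available=True, capability=(3, 0)))   # cpu
print(pick_device(cuda_available=True, capability=(7, 5)))   # cuda:0
```

This keeps CUDA usable on supported hardware instead of forcing CPU everywhere.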
|
gharchive/issue
| 2019-06-25T03:42:05 |
2025-04-01T06:40:34.196286
|
{
"authors": [
"Arroosh"
],
"repo": "taylorlu/Speaker-Diarization",
"url": "https://github.com/taylorlu/Speaker-Diarization/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2184802366
|
Added Hanif's changes
Added the Koleo loss
Added the weighted sup t con
Switched from LayerNorm to BatchNorm
thanks! runs with following changes: https://github.com/tbenst/silent_speech/compare/f88716...tbenst:silent_speech:tb/hanif#diff-afab5b47f3c9255cc34937f181df1eefac20a9e304f1c8756a37666cf1c8d1d9L862
|
gharchive/pull-request
| 2024-03-13T20:08:25 |
2025-04-01T06:40:34.219620
|
{
"authors": [
"Leoputera2407",
"tbenst"
],
"repo": "tbenst/silent_speech",
"url": "https://github.com/tbenst/silent_speech/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
689102219
|
add export from example
The syntax allows that but I noticed that we were missing an example in the readme.
@MylesBorins do you mean adding
export { val } from './foo.js';
above?
|
gharchive/pull-request
| 2020-08-31T10:38:26 |
2025-04-01T06:40:34.273445
|
{
"authors": [
"xtuc"
],
"repo": "tc39/proposal-import-assertions",
"url": "https://github.com/tc39/proposal-import-assertions/pull/92",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2355204951
|
Why is JSON.rawJSON limited to primitives only?
Forgive me if this is the wrong spot to put this.
I think JSON.rawJSON is a really powerful API for performance-optimizing JSON serialization. But, because it is limited to only producing valid primitive JSON, it can't be used for "inline"-ing existing JSON.
I've got a couple use cases I want to use it for that requires feeding pre-serialized objects and arrays into the serialization of outer object trees. For example, in a typical REST API, you might retrieve 10 records from the database, and reply with one big JSON array of all of them. Each record might have a big JSON value on it, and if they are large, it performs poorly to de-serialize each record's JSON object to then just serialize it again to produce the REST API response holding all 10 records. Instead, it'd be great to leave the data as a string when fetching from the database, and then just insert it into the final JSON string produced by JSON.stringify using JSON.rawJSON to wrap each of these strings.
Without this capability, one has to resort to manually clobbering together JSON strings which is far less performant and correct than using the engine's built-in capabilities, or always deserializing just to serialize again. Userland implementations like json-stream-stringify are far, far slower, and at least in my case, the JSON objects are really big, so deserializing and reserializing is a major performance issue.
I presume there is a justification for limiting what can go through a .rawJSON, but what is it? And could there ever be a trusted mode, or some sort of escape hatch where, for very performance-sensitive use cases, any ole string could be sent along?
Also one other note: it seems that this low level API could really assist with performance optimization around avoiding re-serializing values you already have the source JSON string for, but as currently specified it can't because it does the safety check by parsing the string anyways. That seems correct but inefficient, again suggesting that it'd be great to have some sort of escape hatch for the brave. Notably, [[IsRawJSON]] being an internal slot means that userland can't create their own raw JSON objects and pay the complexity / reliability price.
@gibson042 apologies for the direct ping but it'd be super helpful to understand this and/or collaborate on widening the applicability!
Thanks for the ping. The reason for limiting to primitive values is cutting off what would otherwise be a bigger opportunity for surreptitious communication by varying representation details within JSON text representing the same data. See https://github.com/tc39/proposal-json-parse-with-source/issues/12#issuecomment-704441889 , https://github.com/tc39/proposal-json-parse-with-source/issues/19#issuecomment-951787505 , and also the extensive discussion at the October 2021 plenary that ultimately resulted in global availability with primitive-only constraints as a balance of convenience vs. integrity (the latter being a concern about the ability for an untrusted data-only input object to encode itself as arbitrary JSON text, originally raised in July 2020).
So the worry is that people would exfiltrate information through whitespace and/or repeated keys? Because as far as I can tell, toJSON already allows an object to replace itself with something completely different (or, more likely, something with some extra fields), and if that were a problem for some reason, rawJSON already gives us quite a few places to squeeze some information:
For strings:
which characters to escape (only control characters and " must be escaped)
in some cases, which escape to use
For numbers:
what exponent to use
for integers, whether to use an exponent at all
whether to precede an exponent with e or E
whether or not to place a + before the digits of a positive exponent
trailing zeros in the fractional part
leading zeros in the exponent
Or, getting back to things possible even without this proposal, you could just reorder the fields in objects.
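The manual string-clobbering workaround described in the issue (inlining pre-serialized column values into an outer response without re-parsing them) can be sketched as follows; the rows and their payload strings are hypothetical stand-ins for database results:

```javascript
// Hypothetical records whose `payload` column is already a JSON string.
const rows = [
  { id: 1, payload: '{"a":1}' },
  { id: 2, payload: '{"b":[2,3]}' },
];

// Manual splicing: build the envelope by hand, inlining each pre-serialized
// payload verbatim instead of parse-then-stringify round-tripping it.
const body = '[' + rows.map(r =>
  `{"id":${r.id},"payload":${r.payload}}`
).join(',') + ']';

console.log(JSON.parse(body));
```

This is exactly the fragile path the proposal's object-capable rawJSON would replace: here nothing validates that each payload is well-formed JSON, whereas JSON.rawJSON parses its input (at the cost of the re-validation overhead discussed above).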
|
gharchive/issue
| 2024-06-15T20:13:48 |
2025-04-01T06:40:34.287136
|
{
"authors": [
"SamB",
"airhorns",
"gibson042"
],
"repo": "tc39/proposal-json-parse-with-source",
"url": "https://github.com/tc39/proposal-json-parse-with-source/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2261893725
|
Remove ngDevMode condition on error message
Error messages shouldn't differ like that based on environment. It's also useful outside of Angular.
Blocks #193. If #175 gets merged instead of #193, this will need redone.
|
gharchive/pull-request
| 2024-04-24T18:10:23 |
2025-04-01T06:40:34.300073
|
{
"authors": [
"dead-claudia"
],
"repo": "tc39/proposal-signals",
"url": "https://github.com/tc39/proposal-signals/pull/193",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
527160541
|
TestablePublisher failed after receiving subscriptions.
I tried to play with the examples cited in the documentation, and the testMap() example failed for me.
The program stops here:
func request(_ demand: Subscribers.Demand) { _ = queue?.requestDemand(demand) }
I'm using xcode 11.2.1
Maybe something to do with SR-11564 and described in issue #14?
Try setting ‘DEAD_CODE_STRIPPING = NO’ in your project build settings and see if that resolves the issue.
Thanks.
This resolved the issue.
|
gharchive/issue
| 2019-11-22T12:10:13 |
2025-04-01T06:40:34.325122
|
{
"authors": [
"abdelmajidrajad",
"tcldr"
],
"repo": "tcldr/Entwine",
"url": "https://github.com/tcldr/Entwine/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2159162517
|
Allow install without duckdb
I’m trying to install Harlequin on a Raspberry Pi Zero 2 and the duckdb part takes hours and hours, I haven’t seen it actually complete yet.
I only want to use harlequin with SQLite, is it possible to get it without the duckdb support/adapter?
Sounds like there isn't a duckdb wheel for your platform, so you're compiling from source on your rpi. You could probably use a dev box to build a wheel for your rpi platform and install that wheel before installing harlequin.
You could also compute harlequin's dependencies with pip freeze (etc), create a requirements.txt, delete duckdb (which has no python deps), and install with pip install --no-deps -r requirements.txt
Harlequin should mostly work without DuckDb (the exporter will crash). In the future though, I'm planning on a deeper integration with DuckDb (for cross-database joins etc), so ripping it out doesn't make sense. I thought about moving it to an Extra, and I might do that if I could make it a default extra, but Python doesn't have such a thing.
Thanks!
That did help some, at least going the route where I would build it on my Mac in venv then freeze pip and run with --no-deps like you said.
However, I ran into more issues installing pyarrow, and also tree-sitter-languages if I recall correctly.
Yeah, those all require c extensions. Pyarrow is critical - you could do without tree-sitter and tree-sitter-languages (it's just for syntax highlighting and it degrades gracefully without)
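The freeze-and-filter workaround suggested above can be sketched like this (the package names and versions below are illustrative stand-ins for real pip freeze output):

```shell
# 1. On a dev box with the same Python version:
#      pip install harlequin && pip freeze > requirements.txt
#    (simulated here with a stand-in file)
printf 'duckdb==0.10.0\nharlequin==1.9.0\ntextual==0.47.1\n' > requirements.txt

# 2. Drop duckdb, which has no Python dependencies of its own:
grep -vi '^duckdb==' requirements.txt > requirements-slim.txt
cat requirements-slim.txt

# 3. On the Pi:
#      pip install --no-deps -r requirements-slim.txt
```

As noted above, the exporter will crash without DuckDB, and pyarrow still needs a wheel (or a long compile) for the target platform.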
|
gharchive/issue
| 2024-02-28T14:55:20 |
2025-04-01T06:40:34.328982
|
{
"authors": [
"albertfilice",
"tconbeer"
],
"repo": "tconbeer/harlequin",
"url": "https://github.com/tconbeer/harlequin/issues/475",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1599809250
|
Authorization code does not come
Calling the function SetAuthenticationPhoneNumber returns Ok.
The state changes to:
{@type: updateAuthorizationState, authorization_state: {@type: authorizationStateWaitCode, code_info: {@type: authenticationCodeInfo, phone_number: <some_number>, type: {@type: authenticationCodeTypeTelegramMessage, length: 5}, next_type: null, timeout: 0}}}
As a result, the message never arrives in Telegram. I tried many times and from different phone numbers.
This happens very often, and it may also occur after we release our application.
What should we do in such a situation?
Sorry, there is an issue with the Telegram login system. Please wait until it's fixed.
I seem to have the same problem developing an alternative Android client (using tdlib's java bindings). The phone number is from a phone that has the official client installed and I'm connecting with the following settings:
deviceModel = "Phone"
applicationVersion = "1.0"
useFileDatabase = false
useChatInfoDatabase = false
useMessageDatabase = false
useSecretChats = false
enableStorageOptimizer = true
systemLanguageCode = "en"
useTestDc = true
I call setAuthenticationPhoneNumber(phoneNumber), receive an Ok, and the state is switched to AuthorizationStateWaitCode with the same contents as in the issue, but I get no notifications in Telegram. Is there something I'm doing wrong or is it something on Telegram's end?
@eugene2k You specified useTestDc = true. Are you sure that you are logged to the Test DC in the official client?
By the way, you can try to test authorization in Test DC with a test account first.
I hadn't realized there's a need to log in to the test DC with the official client. I can't find any mention of this or how it can be done.
Test DC is a completely independent environment.
|
gharchive/issue
| 2023-02-25T19:37:25 |
2025-04-01T06:40:34.349576
|
{
"authors": [
"AYMENJD",
"eugene2k",
"levlam",
"rr8733380"
],
"repo": "tdlib/td",
"url": "https://github.com/tdlib/td/issues/2322",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1968102158
|
How to get a response value in tdlib?
I have this code:
td_api::make_object<td_api::some_function>()
I get the response via to_string(response):
user {
  ....
  pattern = 123
  some_other_parameters...
  .....
}
How can I get the value of pattern from C++ tdlib?
Is there any ready-made method, like the error check
if (response->get_id() == td_api::error::ID) {
What do I need to do to get my pattern? Something like
response->pattern
See documentation of the class ClientManager.
|
gharchive/issue
| 2023-10-30T11:18:10 |
2025-04-01T06:40:34.352168
|
{
"authors": [
"levlam",
"sip-for-telegram-com"
],
"repo": "tdlib/td",
"url": "https://github.com/tdlib/td/issues/2652",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
427271530
|
Clarification: stateProvince/county/municipality when georeference radius covers more than one
Using a point-radius (or a dwc:footprintWKT) for a georeference, and that radius encompasses several administrative divisions, what should go into these fields? The division that the points falls into, or a list of the divisions that the radius covers?
To illustrate, the Matroosberg mountains in South Africa fall into two districts, the Worcester Distr. and the Ceres Distr. Some specimens are recorded from the 'Matroosberg', with no extra information.
Should the answer to this be included in the documentation for these DwC properties?
I think they should be left blank. The answer is not available.
Yes, it would be useful have this better documented, but this is a general problem and no solution seems to have been found yet.
Hi Ian,
This question is interesting as it approaches the georeference from a perspective opposite to the normal one for retrospective georeferencing. Usually, one has a textual location as raw material from which the georeference is derived. That is usually something akin to Darwin Core's verbatimLocality, where all of the details could be written out. The principles of best practice suggest that the spatial representation should contain all of the possible interpretations of where that textual location can be. That interpretation often intersects multiple entities at the same administrative level (e.g., dwc:stateProvince). But the Darwin Core higher geography terms are meant to be singular, so that they could, in principle, but validated from a geographic authority. We capture that principle in the standardization of geography using the VertNet Geographic Lookup file (see https://github.com/VertNet/DwCVocabs/blob/master/vocabs/Geography.csv and the principles behind it at https://github.com/VertNet/DwCVocabs). Summarizing, to answer you question, I agree with @qgroom, the administrative levels that would have multiple values should be left blank, and the parent administrative level that contains all of the location should be provided. I have cross-referenced this issue in The Darwin Core Questions & Answers repository (see https://github.com/tdwg/dwc-qa/issues/141), where issues lead to documentation improvement.
Thank you for the responses John and Quentin. Blank it will be. Just another example, which may be useful for documentation, would be a verbatimLocality of '5km from Rust de Winter, Transvaal'. Our South African provinces were changed in 2004, with the Transvaal split into four. A radius of 5km around the small town of Rust de Winter includes three, namely Gauteng, Limpopo and Mpumalanga. To further complicate matters, our provincial boundaries are still somewhat fluid, with the last set of changes in 2016, which puts stateProvince values near but within provincial boundaries in peril too.
I am going to keep this one open, following our process to not close the issue until the answers have been incorporated in documentation.
The usage comments for country incorporate the recommendations covered here. The remaining administrative geography terms (continent, countrycode, stateProvince, county, municipality) should do the same.
Closing as having been answered.
|
gharchive/issue
| 2019-03-30T11:08:12 |
2025-04-01T06:40:34.482324
|
{
"authors": [
"ianengelbrecht",
"qgroom",
"tucotuco"
],
"repo": "tdwg/dwc",
"url": "https://github.com/tdwg/dwc/issues/221",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
250927962
|
The consumer starts first and the provider starts later; the consumer's @Reference then keeps throwing NullPointerException
The consumer starts first and the provider starts later; the consumer's @Reference then keeps throwing NullPointerException
This issue is really important; otherwise registration loses its meaning.
Modify Dubbo to make it compatible with Spring 4 annotation configuration:
https://my.oschina.net/roccn/blog/847635
Solved.
You just have to read more of the source code, then it makes sense.
Hi, how did you solve it? Could you share? @15174834
I saw elsewhere that adding spring.dubbo.consumer.check=false to application.properties should help, but it had no effect.
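For reference, a hedged sketch of the availability-check knobs usually suggested for this symptom (the spring.dubbo.* prefix matches this starter, while plain Dubbo uses dubbo.consumer.check; note the reporter above says the properties form had no effect for them):

```properties
# application.properties: don't fail consumer startup when the provider
# is not yet registered; the reference starts working once it appears.
spring.dubbo.consumer.check=false
```

The per-reference alternative on the consumer side is the annotation attribute @Reference(check = false).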
|
gharchive/issue
| 2017-08-17T12:32:30 |
2025-04-01T06:40:34.484962
|
{
"authors": [
"15174834",
"nice2mu",
"roc-cn",
"traburiss"
],
"repo": "teaey/spring-boot-starter-dubbo",
"url": "https://github.com/teaey/spring-boot-starter-dubbo/issues/308",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1323880044
|
[Bug] Short Bug Description
Bug Category
[ ] Credential Login
[ ] Token Login
[ ] Local Audio Issue
[ ] Remote Audio Issue
[ ] Audio Device Switching
[ ] Mute / Unmute
[ ] Hold / Unhold
[ ] Performance issues
[x] Other
SDK Version
implementation 'com.github.team-telnyx:telnyx-webrtc-android:v1.2.12-alpha'
Describe the bug
When the activity is not visible (after pressing the home button, so the phone shows the home screen), the SocketObserver does not deliver any information. Only when the activity is displayed again is the information received.
Expected behaviour
The SocketObserver should deliver information normally even while the activity is not visible.
To Reproduce
Use the Git demo
Android Device (please complete the following information):
Emulator: (false)
Android Device: [SM-G9500]
Android Version: [ android 9 ]
Logs
Hi @tongsoftinfo
I will look into this. You should still receive socket messages even in the background, however the OS can sometimes kill this process (after an extended period of time). Does this happen immediately for you when the app is in the background? Or after some time?
Generally, when the OS kills this process, you should receive a Push Notification instead.
The socket messages in the background do contain the information, but onMessageReceived does not fire. In the telnyx_rtc.ringing state the app can go to the background; after a while the socket messages show telnyx_rtc.answer, but onMessageReceived never reports the telnyx_rtc.answer state.
This happens immediately when the app goes into the background.
Okay, I will investigate and update this ticket.
The MediaPlayer in TelnyxClient.kt never calls release() at the end, only stop() and reset(). Is this correct?
The same MediaPlayer instance is overwritten each time it is used, I don't believe release is necessary however I can test if adding it causes any issues.
Are you experiencing any issues relating to this?
The app acquires a partial wake lock by calling acquire() with the PARTIAL_WAKE_LOCK flag. A partial wake lock becomes stuck if it is held for a long time while your app is running in the background (no part of your app is visible to the user). This is the suggestion Google returned for our app. After checking the corresponding code, I found this situation, so I am asking your opinion on it. This is the document Google points to: https://developer.android.com/topic/performance/vitals/wakelock
Okay, I will look into this on top of the background related stuff
@tongsoftinfo in relation to the MediaPlayer release, we have an open PR here:
https://github.com/team-telnyx/telnyx-webrtc-android/pull/189
I will release a new version when it is reviewed and let you know here.
@tongsoftinfo we have released a version that releases the MediaPlayer. You can see the PR here:
https://github.com/team-telnyx/telnyx-webrtc-android/pull/189
This is available in the SDK here:
https://jitpack.io/#team-telnyx/telnyx-webrtc-android/v1.2.15-alpha
In regards to your original bug report. In testing I realized what you were describing. Messages come in the socket and the ringtone will play, but onMessageReceived is not fired. I initially thought you were talking about the socket receiving messages in general.
This is not a bug, and isn't something we will attempt to fix. This is actually how LiveData and MVVM works.
LiveData won't get new values if there isn't an active Observer and observers get paused when your application is minimized (they are lifecycle dependent). This is okay though because once the app is moved back to the foreground, the observer becomes active again and receives the latest posted data. This is how LiveData and Observers work.
This should be okay though, the SDK will ring and notify the user they are receiving a call (as long as a ringtone is set) and the activity will update immediately when resumed. I'm not sure why you would want the UI to update while the activity is not visible.
If you are developing your own app, there are a few things you can do here to make this more streamlined. You could look into using the ConnectionService to integrate with native OS Call UI (This is out of scope for an MVP sample app though) or alternatively an easier method would be to manually disconnect whenever the user enters the background so that they receive a Push Notification instead when the app is in the background (if you have set this up with the guide in the docs)
https://developers.telnyx.com/docs/v2/webrtc/push-notifications?lang=android
Some users are accustomed to putting the application in the background after making a call and then using other applications. In that case LiveData delivers no information, so the next operation cannot happen automatically; the user has to bring the app back to the foreground first, which makes for a poor user experience.
I have found the corresponding processing method, I will use mTelnyxClient.getSocketResponse().observeForever() for the corresponding operation
The SDK is still receiving socket messages and working in the background. The only thing that is not updating is the UI which is okay because it is in the background. Once the app is resumed the UI will immediately update. There is no poor user experience because there is nothing for the user to experience while it is in the background.
However, if .observeForever() works in your specific use case then that's great. Remember to manually remove the observer when you clear the view model though.
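The observeForever() trade-off discussed above can be illustrated with a minimal stand-in, since androidx isn't available outside Android; SimpleLiveData below is hypothetical and only mimics the relevant semantics (a forever-observer keeps receiving values regardless of lifecycle, so it must be removed manually, e.g. in onCleared()):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical minimal stand-in for androidx LiveData's forever-observer path.
class SimpleLiveData<T> {
    private final List<Consumer<T>> foreverObservers = new ArrayList<>();
    private T latest;

    void observeForever(Consumer<T> observer) {
        foreverObservers.add(observer);
        if (latest != null) observer.accept(latest); // deliver current value
    }

    void removeObserver(Consumer<T> observer) {
        foreverObservers.remove(observer); // call this when the ViewModel clears
    }

    void postValue(T value) {
        latest = value;
        for (Consumer<T> o : foreverObservers) o.accept(value);
    }
}

public class Main {
    public static void main(String[] args) {
        SimpleLiveData<String> socketResponse = new SimpleLiveData<>();
        List<String> received = new ArrayList<>();
        Consumer<String> observer = received::add;

        socketResponse.observeForever(observer);
        socketResponse.postValue("telnyx_rtc.ringing");
        socketResponse.postValue("telnyx_rtc.answer"); // still delivered "in background"
        socketResponse.removeObserver(observer);       // clean up with the ViewModel
        socketResponse.postValue("telnyx_rtc.bye");    // no longer delivered

        System.out.println(received);
    }
}
```

The key point matches the discussion: a lifecycle-bound observer would simply replay the latest value on resume, while a forever-observer sees every post but leaks if never removed.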
|
gharchive/issue
| 2022-08-01T06:11:10 |
2025-04-01T06:40:34.518402
|
{
"authors": [
"Oliver-Zimmerman",
"tongsoftinfo"
],
"repo": "team-telnyx/telnyx-webrtc-android",
"url": "https://github.com/team-telnyx/telnyx-webrtc-android/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
530002716
|
[#169988099] Vulnerable packages upgrade
This PR aims to remove the detected vulnerability issues by upgrading the vulnerable packages.
Affected stories
⚙️ #169988099: Aggiornare i package del backend che presentano vulnerabiltà di sicurezza
Generated by :no_entry_sign: dangerJS against fc576b92761c8f3ecdb9dff74f446cca4d173f80
|
gharchive/pull-request
| 2019-11-28T16:30:02 |
2025-04-01T06:40:34.559047
|
{
"authors": [
"alexgpeppe",
"digitalcitizenship"
],
"repo": "teamdigitale/io-onboarding-pa-api",
"url": "https://github.com/teamdigitale/io-onboarding-pa-api/pull/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|