func (string) | target (string) | cwe (list) | project (string) | commit_id (string) | hash (string) | size (int64) | message (string) | vul (int64)
---|---|---|---|---|---|---|---|---|
static int fuse_do_readpage(struct file *file, struct page *page)
{
struct fuse_io_priv io = { .async = 0, .file = file };
struct inode *inode = page->mapping->host;
struct fuse_conn *fc = get_fuse_conn(inode);
struct fuse_req *req;
size_t num_read;
loff_t pos = page_offset(page);
size_t count = PAGE_CACHE_SIZE;
u64 attr_ver;
int err;
/*
* Page writeback can extend beyond the lifetime of the
* page-cache page, so make sure we read a properly synced
* page.
*/
fuse_wait_on_page_writeback(inode, page->index);
req = fuse_get_req(fc, 1);
if (IS_ERR(req))
return PTR_ERR(req);
attr_ver = fuse_get_attr_version(fc);
req->out.page_zeroing = 1;
req->out.argpages = 1;
req->num_pages = 1;
req->pages[0] = page;
req->page_descs[0].length = count;
num_read = fuse_send_read(req, &io, pos, count, NULL);
err = req->out.h.error;
if (!err) {
/*
* Short read means EOF. If file size is larger, truncate it
*/
if (num_read < count)
fuse_short_read(req, inode, attr_ver);
SetPageUptodate(page);
}
fuse_put_request(fc, req);
return err;
}
|
Safe
|
[
"CWE-399",
"CWE-835"
] |
linux
|
3ca8138f014a913f98e6ef40e939868e1e9ea876
|
8.236889421272296e+37
| 47 |
fuse: break infinite loop in fuse_fill_write_pages()
I got a report about an unkillable task eating CPU. Further
investigation shows that the problem is in the fuse_fill_write_pages()
function. If iov's first segment has zero length, we get an infinite
loop, because we never reach the iov_iter_advance() call.
Fix this by calling iov_iter_advance() before repeating an attempt to
copy data from userspace.
A similar problem is described in 124d3b7041f ("fix writev regression:
pan hanging unkillable and un-straceable"). If a zero-length segment
is followed by a segment with an invalid address,
iov_iter_fault_in_readable() checks only the first segment (zero-length),
iov_iter_copy_from_user_atomic() skips it, fails at the second and
returns zero -> goto again without skipping the zero-length segment.
The patch calls iov_iter_advance() before goto again: we'll skip the
zero-length segment at the second iteration and iov_iter_fault_in_readable()
will detect the invalid address.
Special thanks to Konstantin Khlebnikov, who helped a lot with the commit
description.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Maxim Patlasov <mpatlasov@parallels.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Fixes: ea9b9907b82a ("fuse: implement perform_write")
Cc: <stable@vger.kernel.org>
| 0 |
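The loop-termination property this commit relies on can be shown outside the kernel. Below is a minimal standalone sketch with a toy iterator and invented helper names (`it_copy`, `it_advance`); it is not the fuse code, but it reproduces the fix: advancing before every retry steps over zero-length segments.

```c
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

/* Toy iterator over an iovec array; hypothetical names, not kernel code. */
struct it { const struct iovec *iov; int n, i; size_t off; };

static size_t it_copy(struct it *t, char *dst, size_t want)
{
    if (t->i >= t->n)
        return 0;
    size_t left = t->iov[t->i].iov_len - t->off;
    size_t c = left < want ? left : want;   /* 0 for a zero-length segment */
    memcpy(dst, (char *)t->iov[t->i].iov_base + t->off, c);
    return c;
}

static void it_advance(struct it *t, size_t n)
{
    while (t->i < t->n) {
        size_t left = t->iov[t->i].iov_len - t->off;
        if (n < left) { t->off += n; return; }
        n -= left;                  /* also steps over empty segments */
        t->i++;
        t->off = 0;
    }
}

int main(void)
{
    struct iovec v[2] = { { "", 0 }, { "hello", 5 } }; /* empty segment first */
    struct it t = { v, 2, 0, 0 };
    char buf[8];
    size_t copied;

    for (;;) {
        copied = it_copy(&t, buf, sizeof(buf) - 1);
        it_advance(&t, copied);  /* the fix: advance even when copied == 0 */
        if (copied)
            break;
        if (t.i >= t.n)
            return 1;            /* out of data */
        /* without the it_advance() above, this retry would spin forever */
    }
    buf[copied] = 0;
    printf("%s\n", buf);
    return 0;
}
```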
R_API char *r_str_prefix_all(const char *s, const char *pfx) {
const char *os = s;
char *p;
int newlines = 1;
int len = 0;
int pfx_len = 0;
if (!s) {
return strdup (pfx);
}
if (!pfx) {
return strdup (s);
}
len = strlen (s);
pfx_len = strlen (pfx);
for (os = s; *os; os++) {
if (*os == '\n') {
newlines++;
}
}
char *o = malloc (len + (pfx_len * newlines) + 1);
if (!o) {
return NULL;
}
memcpy (o, pfx, pfx_len);
for (p = o + pfx_len; *s; s++) {
*p++ = *s;
if (*s == '\n' && s[1]) {
memcpy (p, pfx, pfx_len);
p += pfx_len;
}
}
*p = 0;
return o;
}
|
Safe
|
[
"CWE-78"
] |
radare2
|
04edfa82c1f3fa2bc3621ccdad2f93bdbf00e4f9
|
2.9025160124306812e+38
| 35 |
Fix command injection on PDB download (#16966)
* Fix r_sys_mkdirp with absolute path on Windows
* Fix build with --with-openssl
* Use RBuffer in r_socket_http_answer()
* r_socket_http_answer: Fix read for big responses
* Implement r_str_escape_sh()
* Cleanup r_socket_connect() on Windows
* Fix socket being created without a protocol
* Fix socket connect with SSL ##socket
* Use select() in r_socket_ready()
* Fix read failing if received only protocol answer
* Fix double-free
* r_socket_http_get: Fail if req. SSL with no support
* Follow redirects in r_socket_http_answer()
* Fix r_socket_http_get result length with R2_CURL=1
* Also follow redirects
* Avoid using curl for downloading PDBs
* Use r_socket_http_get() on UNIXs
* Use WinINet API on Windows for r_socket_http_get()
* Fix command injection
* Fix r_sys_cmd_str_full output for binary data
* Validate GUID on PDB download
* Pass depth to socket_http_get_recursive()
* Remove 'r_' and '__' from static function names
* Fix is_valid_guid
* Fix for comments
| 0 |
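The changelog entry "Implement r_str_escape_sh()" is the heart of the injection fix: attacker-influenced strings must be escaped before they reach a shell. A minimal standalone sketch of the idea, with an invented `escape_sh()` that is not radare2's implementation: it double-quotes the argument and backslash-escapes the characters that stay special inside double quotes (`"`, `\`, `$`, `` ` ``; `!` in interactive shells is ignored here).

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hedged sketch of shell-argument escaping; not r_str_escape_sh() itself. */
static char *escape_sh(const char *s)
{
    size_t n = strlen(s);
    char *out = malloc(2 * n + 3);  /* worst case: every char escaped,
                                       plus two quotes and a NUL */
    char *p = out;
    if (!out)
        return NULL;
    *p++ = '"';
    for (; *s; s++) {
        if (*s == '"' || *s == '\\' || *s == '$' || *s == '`')
            *p++ = '\\';
        *p++ = *s;
    }
    *p++ = '"';
    *p = '\0';
    return out;
}

int main(void)
{
    char *e = escape_sh("sym;rm -rf $HOME`id`");
    printf("%s\n", e);              /* safe to splice into a command line */
    free(e);
    return 0;
}
```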
entry_guard_describe(const entry_guard_t *guard)
{
static char buf[256];
tor_snprintf(buf, sizeof(buf),
"%s ($%s)",
strlen(guard->nickname) ? guard->nickname : "[bridge]",
hex_str(guard->identity, DIGEST_LEN));
return buf;
}
|
Safe
|
[
"CWE-200"
] |
tor
|
665baf5ed5c6186d973c46cdea165c0548027350
|
1.5327280118780544e+38
| 9 |
Consider the exit family when applying guard restrictions.
When the new path selection logic went into place, I accidentally
dropped the code that considered the _family_ of the exit node when
deciding if the guard was usable, and we didn't catch that during
code review.
This patch makes the guard_restriction_t code consider the exit
family as well, and adds some (hopefully redundant) checks for the
case where we lack a node_t for a guard but we have a bridge_info_t
for it.
Fixes bug 22753; bugfix on 0.3.0.1-alpha. Tracked as TROVE-2016-006
and CVE-2017-0377.
| 0 |
u_int16_t ndpi_get_lower_proto(ndpi_protocol proto) {
return((proto.master_protocol != NDPI_PROTOCOL_UNKNOWN) ? proto.master_protocol : proto.app_protocol);
}
|
Safe
|
[
"CWE-416",
"CWE-787"
] |
nDPI
|
6a9f5e4f7c3fd5ddab3e6727b071904d76773952
|
2.9302119241032863e+38
| 3 |
Fixed use after free caused by dangling pointer
* This fix also improved RCE Injection detection
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
| 0 |
int dtls1_send_client_verify(SSL *s)
{
unsigned char *p, *d;
unsigned char data[MD5_DIGEST_LENGTH + SHA_DIGEST_LENGTH];
EVP_PKEY *pkey;
#ifndef OPENSSL_NO_RSA
unsigned u = 0;
#endif
unsigned long n;
#if !defined(OPENSSL_NO_DSA) || !defined(OPENSSL_NO_ECDSA)
int j;
#endif
if (s->state == SSL3_ST_CW_CERT_VRFY_A) {
d = (unsigned char *)s->init_buf->data;
p = &(d[DTLS1_HM_HEADER_LENGTH]);
pkey = s->cert->key->privatekey;
s->method->ssl3_enc->cert_verify_mac(s,
NID_sha1,
&(data[MD5_DIGEST_LENGTH]));
#ifndef OPENSSL_NO_RSA
if (pkey->type == EVP_PKEY_RSA) {
s->method->ssl3_enc->cert_verify_mac(s, NID_md5, &(data[0]));
if (RSA_sign(NID_md5_sha1, data,
MD5_DIGEST_LENGTH + SHA_DIGEST_LENGTH,
&(p[2]), &u, pkey->pkey.rsa) <= 0) {
SSLerr(SSL_F_DTLS1_SEND_CLIENT_VERIFY, ERR_R_RSA_LIB);
goto err;
}
s2n(u, p);
n = u + 2;
} else
#endif
#ifndef OPENSSL_NO_DSA
if (pkey->type == EVP_PKEY_DSA) {
if (!DSA_sign(pkey->save_type,
&(data[MD5_DIGEST_LENGTH]),
SHA_DIGEST_LENGTH, &(p[2]),
(unsigned int *)&j, pkey->pkey.dsa)) {
SSLerr(SSL_F_DTLS1_SEND_CLIENT_VERIFY, ERR_R_DSA_LIB);
goto err;
}
s2n(j, p);
n = j + 2;
} else
#endif
#ifndef OPENSSL_NO_ECDSA
if (pkey->type == EVP_PKEY_EC) {
if (!ECDSA_sign(pkey->save_type,
&(data[MD5_DIGEST_LENGTH]),
SHA_DIGEST_LENGTH, &(p[2]),
(unsigned int *)&j, pkey->pkey.ec)) {
SSLerr(SSL_F_DTLS1_SEND_CLIENT_VERIFY, ERR_R_ECDSA_LIB);
goto err;
}
s2n(j, p);
n = j + 2;
} else
#endif
{
SSLerr(SSL_F_DTLS1_SEND_CLIENT_VERIFY, ERR_R_INTERNAL_ERROR);
goto err;
}
d = dtls1_set_message_header(s, d,
SSL3_MT_CERTIFICATE_VERIFY, n, 0, n);
s->init_num = (int)n + DTLS1_HM_HEADER_LENGTH;
s->init_off = 0;
/* buffer the message to handle re-xmits */
dtls1_buffer_message(s, 0);
s->state = SSL3_ST_CW_CERT_VRFY_B;
}
/* s->state = SSL3_ST_CW_CERT_VRFY_B */
return (dtls1_do_write(s, SSL3_RT_HANDSHAKE));
err:
return (-1);
}
|
Safe
|
[
"CWE-399"
] |
openssl
|
00a4c1421407b6ac796688871b0a49a179c694d9
|
2.818960153543605e+38
| 83 |
Fix DTLS buffered message DoS attack
DTLS can handle out of order record delivery. Additionally since
handshake messages can be bigger than will fit into a single packet, the
messages can be fragmented across multiple records (as with normal TLS).
That means that the messages can arrive mixed up, and we have to
reassemble them. We keep a queue of buffered messages that are "from the
future", i.e. messages we're not ready to deal with yet but have arrived
early. The messages held there may not be full yet - they could be one
or more fragments that are still in the process of being reassembled.
The code assumes that we will eventually complete the reassembly and
when that occurs the complete message is removed from the queue at the
point that we need to use it.
However, DTLS is also tolerant of packet loss. To get around that DTLS
messages can be retransmitted. If we receive a full (non-fragmented)
message from the peer after previously having received a fragment of
that message, then we ignore the message in the queue and just use the
non-fragmented version. At that point the queued message will never get
removed.
Additionally the peer could send "future" messages that we never get to
in order to complete the handshake. Each message has a sequence number
(starting from 0). We will accept a message fragment for the current
message sequence number, or for any sequence up to 10 into the future.
However if the Finished message has a sequence number of 2, anything
greater than that in the queue is just left there.
So, in those two ways we can end up with "orphaned" data in the queue
that will never get removed - except when the connection is closed. At
that point all the queues are flushed.
An attacker could seek to exploit this by filling up the queues with
lots of large messages that are never going to be used in order to
attempt a DoS by memory exhaustion.
I will assume that we are only concerned with servers here. It does not
seem reasonable to be concerned about a memory exhaustion attack on a
client. They are unlikely to process enough connections for this to be
an issue.
A "long" handshake with many messages might be 5 messages long (in the
incoming direction), e.g. ClientHello, Certificate, ClientKeyExchange,
CertificateVerify, Finished. So this would be message sequence numbers 0
to 4. Additionally we can buffer up to 10 messages in the future.
Therefore the maximum number of messages that an attacker could send
that could get orphaned would typically be 15.
The maximum size that a DTLS message is allowed to be is defined by
max_cert_list, which by default is 100k. Therefore the maximum amount of
"orphaned" memory per connection is 1500k.
Message sequence numbers get reset after the Finished message, so
renegotiation will not extend the maximum number of messages that can be
orphaned per connection.
As noted above, the queues do get cleared when the connection is closed.
Therefore in order to mount an effective attack, an attacker would have
to open many simultaneous connections.
Issue reported by Quan Luo.
CVE-2016-2179
Reviewed-by: Richard Levitte <levitte@openssl.org>
| 0 |
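The arithmetic above (at most ~15 orphaned messages times a 100k maximum message size, so roughly 1500k per connection) points at the general mitigation: bound the memory a single connection may pin in its buffered-message queue. A standalone sketch of such a budget check, with invented types that are not OpenSSL's internals; DTLS peers retransmit, so refusing to buffer does not break the handshake.

```c
#include <stdio.h>
#include <stdlib.h>

/* Invented structures for illustration only. */
struct buffered_msg { unsigned seq; size_t len; struct buffered_msg *next; };

struct msg_queue {
    struct buffered_msg *head;
    size_t bytes;    /* payload currently buffered */
    size_t budget;   /* hard per-connection cap */
};

/* Refuse to buffer a "future" fragment once the budget is spent; the
 * peer can retransmit later, so dropping is safe for DTLS. */
static int queue_buffer(struct msg_queue *q, unsigned seq, size_t len)
{
    if (len > q->budget - q->bytes)     /* over budget: drop the fragment */
        return 0;
    struct buffered_msg *m = malloc(sizeof(*m));
    if (!m)
        return 0;
    m->seq = seq;
    m->len = len;
    m->next = q->head;
    q->head = m;
    q->bytes += len;
    return 1;
}

int main(void)
{
    struct msg_queue q = { NULL, 0, 200 * 1024 };    /* 200k cap */
    printf("%d\n", queue_buffer(&q, 3, 100 * 1024)); /* 1: fits */
    printf("%d\n", queue_buffer(&q, 4, 150 * 1024)); /* 0: exceeds cap */
    return 0;
}
```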
static int acrn_dev_open(struct inode *inode, struct file *filp)
{
struct acrn_vm *vm;
vm = kzalloc(sizeof(*vm), GFP_KERNEL);
if (!vm)
return -ENOMEM;
vm->vmid = ACRN_INVALID_VMID;
filp->private_data = vm;
return 0;
}
|
Safe
|
[
"CWE-401"
] |
linux
|
ecd1735f14d6ac868ae5d8b7a2bf193fa11f388b
|
2.384821151866877e+38
| 12 |
virt: acrn: fix a memory leak in acrn_dev_ioctl()
The vm_param and cpu_regs need to be freed via kfree()
before return -EINVAL error.
Fixes: 9c5137aedd11 ("virt: acrn: Introduce VM management interfaces")
Fixes: 2ad2aaee1bc9 ("virt: acrn: Introduce an ioctl to set vCPU registers state")
Signed-off-by: Xiaolong Huang <butterflyhuangxx@gmail.com>
Signed-off-by: Fei Li <fei1.li@intel.com>
Link: https://lore.kernel.org/r/20220308092047.1008409-1-butterflyhuangxx@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
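The leak pattern the message describes (an early `return -EINVAL` that skips `kfree()`) has a standard fix shape: route every error path through one cleanup label. A standalone sketch with invented names, not the acrn code:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative ioctl-style handler: every early "return -EINVAL" becomes
 * "goto out" so the heap object allocated earlier is always released. */
static int handle_set_regs(const void *user_buf, size_t len)
{
    int ret = 0;
    char *cpu_regs = malloc(len);

    if (!cpu_regs)
        return -ENOMEM;
    memcpy(cpu_regs, user_buf, len);    /* stand-in for copy_from_user() */

    if (len < 16) {                     /* stand-in validity check */
        ret = -EINVAL;
        goto out;                       /* was: return -EINVAL (the leak) */
    }
    /* ... apply register state ... */
out:
    free(cpu_regs);                     /* the fix: freed on all paths */
    return ret;
}

int main(void)
{
    char buf[8] = {0};
    return handle_set_regs(buf, sizeof(buf)) == -EINVAL ? 0 : 1;
}
```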
int apparmor_bprm_set_creds(struct linux_binprm *bprm)
{
struct aa_task_cxt *cxt;
struct aa_profile *profile, *new_profile = NULL;
struct aa_namespace *ns;
char *buffer = NULL;
unsigned int state;
struct file_perms perms = {};
struct path_cond cond = {
bprm->file->f_path.dentry->d_inode->i_uid,
bprm->file->f_path.dentry->d_inode->i_mode
};
const char *name = NULL, *target = NULL, *info = NULL;
int error = cap_bprm_set_creds(bprm);
if (error)
return error;
if (bprm->cred_prepared)
return 0;
cxt = bprm->cred->security;
BUG_ON(!cxt);
profile = aa_get_profile(aa_newest_version(cxt->profile));
/*
* get the namespace from the replacement profile as replacement
* can change the namespace
*/
ns = profile->ns;
state = profile->file.start;
/* buffer freed below, name is pointer into buffer */
error = aa_path_name(&bprm->file->f_path, profile->path_flags, &buffer,
&name, &info);
if (error) {
if (profile->flags &
(PFLAG_IX_ON_NAME_ERROR | PFLAG_UNCONFINED))
error = 0;
name = bprm->filename;
goto audit;
}
/* Test for onexec first as onexec directives override other
* x transitions.
*/
if (unconfined(profile)) {
/* unconfined task */
if (cxt->onexec)
/* change_profile on exec already been granted */
new_profile = aa_get_profile(cxt->onexec);
else
new_profile = find_attach(ns, &ns->base.profiles, name);
if (!new_profile)
goto cleanup;
goto apply;
}
/* find exec permissions for name */
state = aa_str_perms(profile->file.dfa, state, name, &cond, &perms);
if (cxt->onexec) {
struct file_perms cp;
info = "change_profile onexec";
if (!(perms.allow & AA_MAY_ONEXEC))
goto audit;
/* test if this exec can be paired with change_profile onexec.
* onexec permission is linked to exec with a standard pairing
* exec\0change_profile
*/
state = aa_dfa_null_transition(profile->file.dfa, state);
cp = change_profile_perms(profile, cxt->onexec->ns,
cxt->onexec->base.name,
AA_MAY_ONEXEC, state);
if (!(cp.allow & AA_MAY_ONEXEC))
goto audit;
new_profile = aa_get_profile(aa_newest_version(cxt->onexec));
goto apply;
}
if (perms.allow & MAY_EXEC) {
/* exec permission determine how to transition */
new_profile = x_to_profile(profile, name, perms.xindex);
if (!new_profile) {
if (perms.xindex & AA_X_INHERIT) {
/* (p|c|n)ix - don't change profile but do
* use the newest version, which was picked
* up above when getting profile
*/
info = "ix fallback";
new_profile = aa_get_profile(profile);
goto x_clear;
} else if (perms.xindex & AA_X_UNCONFINED) {
new_profile = aa_get_profile(ns->unconfined);
info = "ux fallback";
} else {
error = -ENOENT;
info = "profile not found";
}
}
} else if (COMPLAIN_MODE(profile)) {
/* no exec permission - are we in learning mode */
new_profile = aa_new_null_profile(profile, 0);
if (!new_profile) {
error = -ENOMEM;
info = "could not create null profile";
} else {
error = -EACCES;
target = new_profile->base.hname;
}
perms.xindex |= AA_X_UNSAFE;
} else
/* fail exec */
error = -EACCES;
if (!new_profile)
goto audit;
if (bprm->unsafe & LSM_UNSAFE_SHARE) {
/* FIXME: currently don't mediate shared state */
;
}
if (bprm->unsafe & (LSM_UNSAFE_PTRACE | LSM_UNSAFE_PTRACE_CAP)) {
error = may_change_ptraced_domain(current, new_profile);
if (error) {
aa_put_profile(new_profile);
goto audit;
}
}
/* Determine if secure exec is needed.
* Can be at this point for the following reasons:
* 1. unconfined switching to confined
* 2. confined switching to different confinement
* 3. confined switching to unconfined
*
* Cases 2 and 3 are marked as requiring secure exec
* (unless policy specified "unsafe exec")
*
* bprm->unsafe is used to cache the AA_X_UNSAFE permission
* to avoid having to recompute in secureexec
*/
if (!(perms.xindex & AA_X_UNSAFE)) {
AA_DEBUG("scrubbing environment variables for %s profile=%s\n",
name, new_profile->base.hname);
bprm->unsafe |= AA_SECURE_X_NEEDED;
}
apply:
target = new_profile->base.hname;
/* when transitioning profiles clear unsafe personality bits */
bprm->per_clear |= PER_CLEAR_ON_SETID;
x_clear:
aa_put_profile(cxt->profile);
/* transfer new profile reference will be released when cxt is freed */
cxt->profile = new_profile;
/* clear out all temporary/transitional state from the context */
aa_put_profile(cxt->previous);
aa_put_profile(cxt->onexec);
cxt->previous = NULL;
cxt->onexec = NULL;
cxt->token = 0;
audit:
error = aa_audit_file(profile, &perms, GFP_KERNEL, OP_EXEC, MAY_EXEC,
name, target, cond.uid, info, error);
cleanup:
aa_put_profile(profile);
kfree(buffer);
return error;
}
|
Vulnerable
|
[
"CWE-264"
] |
linux
|
259e5e6c75a910f3b5e656151dc602f53f9d7548
|
5.118671667825238e+37
| 175 |
Add PR_{GET,SET}_NO_NEW_PRIVS to prevent execve from granting privs
With this change, calling
prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
disables privilege granting operations at execve-time. For example, a
process will not be able to execute a setuid binary to change their uid
or gid if this bit is set. The same is true for file capabilities.
Additionally, LSM_UNSAFE_NO_NEW_PRIVS is defined to ensure that
LSMs respect the requested behavior.
To determine if the NO_NEW_PRIVS bit is set, a task may call
prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0);
It returns 1 if set and 0 if it is not set. If any of the arguments are
non-zero, it will return -1 and set errno to -EINVAL.
(PR_SET_NO_NEW_PRIVS behaves similarly.)
This functionality is desired for the proposed seccomp filter patch
series. By using PR_SET_NO_NEW_PRIVS, it allows a task to modify the
system call behavior for itself and its child tasks without being
able to impact the behavior of a more privileged task.
Another potential use is making certain privileged operations
unprivileged. For example, chroot may be considered "safe" if it cannot
affect privileged tasks.
Note, this patch causes execve to fail when PR_SET_NO_NEW_PRIVS is
set and AppArmor is in use. It is fixed in a subsequent patch.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Drewry <wad@chromium.org>
Acked-by: Eric Paris <eparis@redhat.com>
Acked-by: Kees Cook <keescook@chromium.org>
v18: updated change desc
v17: using new define values as per 3.4
Signed-off-by: James Morris <james.l.morris@oracle.com>
| 1 |
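The message specifies the prctl() interface exactly, so it can be exercised directly from userspace. A short demonstration; `/usr/bin/passwd` is just an example path to a setuid binary:

```c
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

#ifndef PR_SET_NO_NEW_PRIVS        /* present in headers since Linux 3.5 */
#define PR_SET_NO_NEW_PRIVS 38
#define PR_GET_NO_NEW_PRIVS 39
#endif

int main(void)
{
    /* After this, execve() can no longer grant privileges: setuid bits
     * and file capabilities on the executed binary are ignored. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) {
        perror("prctl");
        return 1;
    }
    printf("no_new_privs = %d\n", prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0));

    /* passwd is typically setuid root, but under no_new_privs it runs
     * with this process's uid and should fail to elevate. */
    execl("/usr/bin/passwd", "passwd", (char *)NULL);
    perror("execl");
    return 1;
}
```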
static noinline int commit_fs_roots(struct btrfs_trans_handle *trans,
struct btrfs_root *root)
{
struct btrfs_root *gang[8];
struct btrfs_fs_info *fs_info = root->fs_info;
int i;
int ret;
int err = 0;
spin_lock(&fs_info->fs_roots_radix_lock);
while (1) {
ret = radix_tree_gang_lookup_tag(&fs_info->fs_roots_radix,
(void **)gang, 0,
ARRAY_SIZE(gang),
BTRFS_ROOT_TRANS_TAG);
if (ret == 0)
break;
for (i = 0; i < ret; i++) {
root = gang[i];
radix_tree_tag_clear(&fs_info->fs_roots_radix,
(unsigned long)root->root_key.objectid,
BTRFS_ROOT_TRANS_TAG);
spin_unlock(&fs_info->fs_roots_radix_lock);
btrfs_free_log(trans, root);
btrfs_update_reloc_root(trans, root);
btrfs_orphan_commit_root(trans, root);
btrfs_save_ino_cache(root, trans);
/* see comments in should_cow_block() */
root->force_cow = 0;
smp_wmb();
if (root->commit_root != root->node) {
mutex_lock(&root->fs_commit_mutex);
switch_commit_root(root);
btrfs_unpin_free_ino(root);
mutex_unlock(&root->fs_commit_mutex);
btrfs_set_root_node(&root->root_item,
root->node);
}
err = btrfs_update_root(trans, fs_info->tree_root,
&root->root_key,
&root->root_item);
spin_lock(&fs_info->fs_roots_radix_lock);
if (err)
break;
}
}
spin_unlock(&fs_info->fs_roots_radix_lock);
return err;
}
|
Safe
|
[
"CWE-310"
] |
linux-2.6
|
9c52057c698fb96f8f07e7a4bcf4801a092bda89
|
1.0270564433447278e+38
| 55 |
Btrfs: fix hash overflow handling
The handling for directory crc hash overflows was fairly obscure,
split_leaf returns EOVERFLOW when we try to extend the item and that is
supposed to bubble up to userland. For a while it did so, but along the
way we added better handling of errors and forced the FS readonly if we
hit IO errors during the directory insertion.
Along the way, we started testing only for EEXIST and the EOVERFLOW case
was dropped. The end result is that we may force the FS readonly if we
catch a directory hash bucket overflow.
This fixes a few problem spots. First I add tests for EOVERFLOW in the
places where we can safely just return the error up the chain.
btrfs_rename is harder though, because it tries to insert the new
directory item only after it has already unlinked anything the rename
was going to overwrite. Rather than adding very complex logic, I added
a helper to test for the hash overflow case early while it is still safe
to bail out.
Snapshot and subvolume creation had a similar problem, so they are using
the new helper now too.
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Reported-by: Pascal Junod <pascal@junod.info>
| 0 |
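The routing the commit describes, treating EEXIST and EOVERFLOW as reportable outcomes while still aborting on real I/O errors, reduces to a small dispatch. A hedged standalone sketch with a stubbed insert function, not the btrfs code:

```c
#include <errno.h>
#include <stdio.h>

/* Stub standing in for the directory-item insert. */
static int insert_dir_item_stub(int simulated_ret) { return simulated_ret; }

/* Expected outcomes bubble up to the caller; only genuine failures
 * abort the transaction and force the filesystem read-only. */
static int dir_insert(int simulated_ret)
{
    int ret = insert_dir_item_stub(simulated_ret);
    if (ret == -EEXIST || ret == -EOVERFLOW)
        return ret;                  /* report upward, FS stays writable */
    if (ret)
        fprintf(stderr, "abort transaction: %d\n", ret);
    return ret;
}

int main(void)
{
    printf("%d\n", dir_insert(-EOVERFLOW)); /* hash bucket overflow: -75 */
    printf("%d\n", dir_insert(-EEXIST));    /* name already exists: -17 */
    return 0;
}
```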
struct device *device_create(struct class *class, struct device *parent,
dev_t devt, void *drvdata, const char *fmt, ...)
{
va_list vargs;
struct device *dev;
va_start(vargs, fmt);
dev = device_create_groups_vargs(class, parent, devt, drvdata, NULL,
fmt, vargs);
va_end(vargs);
return dev;
}
|
Safe
|
[
"CWE-787"
] |
linux
|
aa838896d87af561a33ecefea1caa4c15a68bc47
|
2.5637119335371517e+38
| 12 |
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions
Convert the various sprintf family calls in sysfs device show functions
to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety.
Done with:
$ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 .
And cocci script:
$ cat sysfs_emit_dev.cocci
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- strcpy(buf, chr);
+ sysfs_emit(buf, chr);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
- len += scnprintf(buf + len, PAGE_SIZE - len,
+ len += sysfs_emit_at(buf, len,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
...
- strcpy(buf, chr);
- return strlen(buf);
+ return sysfs_emit(buf, chr);
}
Signed-off-by: Joe Perches <joe@perches.com>
Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
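The safety property the conversion buys is that the helper, not each callback, owns the buffer size. A userspace analogue as a standalone sketch; `emit()` is invented here, and the real sysfs_emit() additionally checks that it was handed the start of a page:

```c
#include <stdarg.h>
#include <stdio.h>

#define PAGE_SIZE 4096

/* Userspace analogue of sysfs_emit(): the wrapper knows the buffer is
 * PAGE_SIZE, so a show()-style callback cannot overrun it the way a
 * bare sprintf() can. Sketch only, not the kernel implementation. */
static int emit(char *buf, const char *fmt, ...)
{
    va_list ap;
    int len;

    va_start(ap, fmt);
    len = vsnprintf(buf, PAGE_SIZE, fmt, ap);
    va_end(ap);
    return len < PAGE_SIZE ? len : PAGE_SIZE - 1;
}

int main(void)
{
    char page[PAGE_SIZE];
    int n = emit(page, "%d\n", 42);  /* mirrors: return sysfs_emit(buf, ...) */
    printf("%.*s", n, page);
    return 0;
}
```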
TEST_F(ZNCTest, Encoding) {
auto znc = Run();
auto ircd = ConnectIRCd();
auto client = LoginClient();
ircd.Write(":server 001 nick :hello");
// legacy
ircd.Write(":n!u@h PRIVMSG nick :Hello\xE6world");
client.ReadUntil("Hello\xE6world");
client.Write("PRIVMSG *controlpanel :SetNetwork Encoding $me $net UTF-8");
client.ReadUntil("Encoding = UTF-8");
ircd.Write(":n!u@h PRIVMSG nick :Hello\xE6world");
client.ReadUntil("Hello\xEF\xBF\xBDworld");
client.Write(
"PRIVMSG *controlpanel :SetNetwork Encoding $me $net ^CP-1251");
client.ReadUntil("Encoding = ^CP-1251");
ircd.Write(":n!u@h PRIVMSG nick :Hello\xE6world");
client.ReadUntil("Hello\xD0\xB6world");
ircd.Write(":n!u@h PRIVMSG nick :Hello\xD0\xB6world");
client.ReadUntil("Hello\xD0\xB6world");
}
|
Safe
|
[
"CWE-476"
] |
znc
|
2390ad111bde16a78c98ac44572090b33c3bd2d8
|
3.1572625151695304e+38
| 20 |
Fix null pointer dereference in echo-message
The bug was introduced while fixing #1705. If a client does not enable
echo-message and doesn't have a network, it crashes.
Thanks to LunarBNC for reporting this
| 0 |
static int set_altsetting(struct usbtest_dev *dev, int alternate)
{
struct usb_interface *iface = dev->intf;
struct usb_device *udev;
if (alternate < 0 || alternate >= 256)
return -EINVAL;
udev = interface_to_usbdev(iface);
return usb_set_interface(udev,
iface->altsetting[0].desc.bInterfaceNumber,
alternate);
}
|
Safe
|
[
"CWE-476"
] |
linux
|
7c80f9e4a588f1925b07134bb2e3689335f6c6d8
|
2.3044323705623745e+38
| 13 |
usb: usbtest: fix NULL pointer dereference
If the usbtest driver encounters a device with an IN bulk endpoint but
no OUT bulk endpoint, it will try to dereference a NULL pointer
(out->desc.bEndpointAddress). The problem can be solved by adding a
missing test.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
| 0 |
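The "missing test" is a guard on a pointer that may legitimately be absent. A trivial standalone illustration with invented types, not the usbtest structures:

```c
#include <stdio.h>

struct ep { unsigned char addr; };  /* invented stand-in */

/* Fix shape: verify the OUT endpoint exists before touching out->addr. */
static int use_endpoints(const struct ep *in, const struct ep *out)
{
    if (!in || !out)
        return -1;                  /* the missing test from the commit */
    printf("out endpoint: 0x%02x\n", out->addr);
    return 0;
}

int main(void)
{
    struct ep in = { 0x81 };
    /* Device with an IN bulk endpoint but no OUT one: out == NULL. */
    return use_endpoints(&in, NULL) == -1 ? 0 : 1;
}
```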
bool Model::operator==(const Model &other) const {
return this->accessors == other.accessors &&
this->animations == other.animations && this->asset == other.asset &&
this->buffers == other.buffers &&
this->bufferViews == other.bufferViews &&
this->cameras == other.cameras &&
this->defaultScene == other.defaultScene &&
this->extensions == other.extensions &&
this->extensionsRequired == other.extensionsRequired &&
this->extensionsUsed == other.extensionsUsed &&
this->extras == other.extras && this->images == other.images &&
this->lights == other.lights && this->materials == other.materials &&
this->meshes == other.meshes && this->nodes == other.nodes &&
this->samplers == other.samplers && this->scenes == other.scenes &&
this->skins == other.skins && this->textures == other.textures;
}
|
Safe
|
[
"CWE-20"
] |
tinygltf
|
52ff00a38447f06a17eab1caa2cf0730a119c751
|
1.5135978951932748e+38
| 16 |
Do not expand the file path, since it's not necessary for the glTF asset path (URI) and for security reasons (`wordexp`).
| 0 |
static void prb_retire_current_block(struct tpacket_kbdq_core *pkc,
struct packet_sock *po, unsigned int status)
{
struct tpacket_block_desc *pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc);
/* retire/close the current block */
if (likely(TP_STATUS_KERNEL == BLOCK_STATUS(pbd))) {
/*
* Plug the case where copy_bits() is in progress on
* cpu-0 and tpacket_rcv() got invoked on cpu-1, didn't
* have space to copy the pkt in the current block and
* called prb_retire_current_block()
*
* We don't need to worry about the TMO case because
* the timer-handler already handled this case.
*/
if (!(status & TP_STATUS_BLK_TMO)) {
while (atomic_read(&pkc->blk_fill_in_prog)) {
/* Waiting for skb_copy_bits to finish... */
cpu_relax();
}
}
prb_close_block(pkc, pbd, po, status);
return;
}
}
|
Safe
|
[
"CWE-416",
"CWE-362"
] |
linux
|
84ac7260236a49c79eede91617700174c2c19b0c
|
1.7954724752402196e+38
| 26 |
packet: fix race condition in packet_set_ring
When packet_set_ring creates a ring buffer it will initialize a
struct timer_list if the packet version is TPACKET_V3. This value
can then be raced by a different thread calling setsockopt to
set the version to TPACKET_V1 before packet_set_ring has finished.
This leads to a use-after-free on a function pointer in the
struct timer_list when the socket is closed as the previously
initialized timer will not be deleted.
The bug is fixed by taking lock_sock(sk) in packet_setsockopt when
changing the packet version while also taking the lock at the start
of packet_set_ring.
Fixes: f6fb8f100b80 ("af-packet: TPACKET_V3 flexible buffer implementation.")
Signed-off-by: Philip Pettersson <philip.pettersson@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
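The race described above is a classic check-then-act bug, and the fix serializes the version change and the ring setup under one per-socket lock. A minimal pthread analogy as a standalone sketch, not the af_packet code:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;
static int version = 3;      /* starts as TPACKET_V3 */
static int v3_timer_armed;   /* V3-only state that must be torn down */

/* Analogue of packet_setsockopt(PACKET_VERSION): takes the same lock,
 * so the version cannot change while the ring is half-initialized. */
static void *setsockopt_version(void *arg)
{
    pthread_mutex_lock(&sk_lock);    /* analogue of lock_sock(sk) */
    version = 1;                     /* switch to TPACKET_V1 */
    pthread_mutex_unlock(&sk_lock);
    return arg;
}

/* Analogue of packet_set_ring(): reads the version and initializes the
 * state that depends on it within a single critical section. */
static void set_ring(void)
{
    pthread_mutex_lock(&sk_lock);
    if (version == 3)
        v3_timer_armed = 1;          /* consistent with the version read */
    pthread_mutex_unlock(&sk_lock);
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, setsockopt_version, NULL);
    set_ring();                      /* cannot observe a torn state */
    pthread_join(t, NULL);
    printf("version=%d timer=%d\n", version, v3_timer_armed);
    return 0;
}
```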
static int adouble_destructor(struct adouble *ad)
{
NTSTATUS status;
if (!ad->ad_opened) {
return 0;
}
SMB_ASSERT(ad->ad_fsp != NULL);
status = fd_close(ad->ad_fsp);
if (!NT_STATUS_IS_OK(status)) {
DBG_ERR("Closing [%s] failed: %s\n",
fsp_str_dbg(ad->ad_fsp), nt_errstr(status));
}
file_free(NULL, ad->ad_fsp);
ad->ad_fsp = NULL;
ad->ad_opened = false;
return 0;
}
|
Safe
|
[
"CWE-787"
] |
samba
|
0e2b3fb982d1f53d111e10d9197ed2ec2e13712c
|
2.1374495543964213e+38
| 21 |
CVE-2021-44142: libadouble: harden parsing code
BUG: https://bugzilla.samba.org/show_bug.cgi?id=14914
Signed-off-by: Ralph Boehme <slow@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
| 0 |
mt76_dma_tx_cleanup_idx(struct mt76_dev *dev, struct mt76_queue *q, int idx,
struct mt76_queue_entry *prev_e)
{
struct mt76_queue_entry *e = &q->entry[idx];
__le32 __ctrl = READ_ONCE(q->desc[idx].ctrl);
u32 ctrl = le32_to_cpu(__ctrl);
if (!e->skip_buf0) {
__le32 addr = READ_ONCE(q->desc[idx].buf0);
u32 len = FIELD_GET(MT_DMA_CTL_SD_LEN0, ctrl);
dma_unmap_single(dev->dev, le32_to_cpu(addr), len,
DMA_TO_DEVICE);
}
if (!(ctrl & MT_DMA_CTL_LAST_SEC0)) {
__le32 addr = READ_ONCE(q->desc[idx].buf1);
u32 len = FIELD_GET(MT_DMA_CTL_SD_LEN1, ctrl);
dma_unmap_single(dev->dev, le32_to_cpu(addr), len,
DMA_TO_DEVICE);
}
if (e->txwi == DMA_DUMMY_DATA)
e->txwi = NULL;
if (e->skb == DMA_DUMMY_DATA)
e->skb = NULL;
*prev_e = *e;
memset(e, 0, sizeof(*e));
}
|
Safe
|
[
"CWE-120",
"CWE-787"
] |
linux
|
b102f0c522cf668c8382c56a4f771b37d011cda2
|
3.1367187136568835e+38
| 32 |
mt76: fix array overflow on receiving too many fragments for a packet
If the hardware receives an oversized packet with too many rx fragments,
skb_shinfo(skb)->frags can overflow and corrupt memory of adjacent pages.
This becomes especially visible if it corrupts the freelist pointer of
a slab page.
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
| 0 |
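The overflow the message describes comes from appending fragments without checking the fixed capacity of the fragment array. The fix shape is a bound check before the store; a standalone sketch with invented types, not the mt76 code:

```c
#include <stdio.h>

#define MAX_FRAGS 17   /* analogue of the kernel's MAX_SKB_FRAGS */

/* Invented receive-side structure for illustration. */
struct rx_pkt { int nr_frags; const void *frags[MAX_FRAGS]; };

/* Bound-check before storing, so a peer sending too many fragments gets
 * the packet dropped instead of writing past frags[] into other memory. */
static int add_fragment(struct rx_pkt *p, const void *data)
{
    if (p->nr_frags >= MAX_FRAGS)
        return -1;                   /* oversized packet: drop it */
    p->frags[p->nr_frags++] = data;
    return 0;
}

int main(void)
{
    struct rx_pkt p = { 0, { 0 } };
    static const char frag[64] = { 0 };
    int i, dropped = 0;

    for (i = 0; i < 100; i++)        /* hardware delivers 100 fragments */
        if (add_fragment(&p, frag) < 0)
            dropped++;
    printf("stored=%d dropped=%d\n", p.nr_frags, dropped);
    return 0;
}
```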
int isHLLObjectOrReply(client *c, robj *o) {
struct hllhdr *hdr;
/* Key exists, check type */
if (checkType(c,o,OBJ_STRING))
return C_ERR; /* Error already sent. */
if (!sdsEncodedObject(o)) goto invalid;
if (stringObjectLen(o) < sizeof(*hdr)) goto invalid;
hdr = o->ptr;
/* Magic should be "HYLL". */
if (hdr->magic[0] != 'H' || hdr->magic[1] != 'Y' ||
hdr->magic[2] != 'L' || hdr->magic[3] != 'L') goto invalid;
if (hdr->encoding > HLL_MAX_ENCODING) goto invalid;
/* Dense representation string length should match exactly. */
if (hdr->encoding == HLL_DENSE &&
stringObjectLen(o) != HLL_DENSE_SIZE) goto invalid;
/* All tests passed. */
return C_OK;
invalid:
addReplySds(c,
sdsnew("-WRONGTYPE Key is not a valid "
"HyperLogLog string value.\r\n"));
return C_ERR;
}
|
Safe
|
[
"CWE-787"
] |
redis
|
e216ceaf0e099536fe3658a29dcb725d812364e0
|
1.6187957519953054e+38
| 30 |
HyperLogLog: handle wrong offset in the base case.
| 0 |
static void merge_endpoints(
const endpoints& ep_plane1,
const endpoints& ep_plane2,
unsigned int component_plane2,
endpoints& result
) {
unsigned int partition_count = ep_plane1.partition_count;
assert(partition_count == 1);
vmask4 sep_mask = vint4::lane_id() == vint4(component_plane2);
result.partition_count = partition_count;
result.endpt0[0] = select(ep_plane1.endpt0[0], ep_plane2.endpt0[0], sep_mask);
result.endpt1[0] = select(ep_plane1.endpt1[0], ep_plane2.endpt1[0], sep_mask);
}
|
Safe
|
[
"CWE-787"
] |
astc-encoder
|
6ffb3058bfbcc836108c25274e955e399481e2b4
|
3.29213010520057e+38
| 15 |
Provide a fallback for blocks which find no valid encoding
| 0 |
_copyClosePortalStmt(const ClosePortalStmt *from)
{
ClosePortalStmt *newnode = makeNode(ClosePortalStmt);
COPY_STRING_FIELD(portalname);
return newnode;
}
|
Safe
|
[
"CWE-362"
] |
postgres
|
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
|
2.637216540180996e+38
| 8 |
Avoid repeated name lookups during table and index DDL.
If the name lookups come to different conclusions due to concurrent
activity, we might perform some parts of the DDL on a different table
than other parts. At least in the case of CREATE INDEX, this can be
used to cause the permissions checks to be performed against a
different table than the index creation, allowing for a privilege
escalation attack.
This changes the calling convention for DefineIndex, CreateTrigger,
transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible
(in 9.2 and newer), and AlterTable (in 9.1 and older). In addition,
CheckRelationOwnership is removed in 9.2 and newer and the calling
convention is changed in older branches. A field has also been added
to the Constraint node (FkConstraint in 8.4). Third-party code calling
these functions or using the Constraint node will require updating.
Report by Andres Freund. Patch by Robert Haas and Andres Freund,
reviewed by Tom Lane.
Security: CVE-2014-0062
| 0 |
crypto_setup(void)
{
struct pkey_info *pinfo; /* private/public key */
char filename[MAXFILENAME]; /* file name buffer */
char hostname[MAXFILENAME]; /* host name buffer */
char *randfile;
char statstr[NTP_MAXSTRLEN]; /* statistics for filegen */
l_fp seed; /* crypto PRNG seed as NTP timestamp */
u_int len;
int bytes;
u_char *ptr;
/*
* Check for correct OpenSSL version and avoid initialization in
* the case of multiple crypto commands.
*/
if (crypto_flags & CRYPTO_FLAG_ENAB) {
msyslog(LOG_NOTICE,
"crypto_setup: spurious crypto command");
return;
}
ssl_check_version();
/*
* Load required random seed file and seed the random number
generator. By default, it is found as .rnd in the user home
* directory. The root home directory may be / or /root,
* depending on the system. Wiggle the contents a bit and write
* it back so the sequence does not repeat when we next restart.
*/
if (!RAND_status()) {
if (rand_file == NULL) {
RAND_file_name(filename, sizeof(filename));
randfile = filename;
} else if (*rand_file != '/') {
snprintf(filename, sizeof(filename), "%s/%s",
keysdir, rand_file);
randfile = filename;
} else
randfile = rand_file;
if ((bytes = RAND_load_file(randfile, -1)) == 0) {
msyslog(LOG_ERR,
"crypto_setup: random seed file %s missing",
randfile);
exit (-1);
}
get_systime(&seed);
RAND_seed(&seed, sizeof(l_fp));
RAND_write_file(randfile);
DPRINTF(1, ("crypto_setup: OpenSSL version %lx random seed file %s bytes read %d\n",
SSLeay(), randfile, bytes));
}
/*
* Initialize structures.
*/
gethostname(hostname, sizeof(hostname));
if (host_filename != NULL)
strlcpy(hostname, host_filename, sizeof(hostname));
if (passwd == NULL)
passwd = estrdup(hostname);
memset(&hostval, 0, sizeof(hostval));
memset(&pubkey, 0, sizeof(pubkey));
memset(&tai_leap, 0, sizeof(tai_leap));
/*
* Load required host key from file "ntpkey_host_<hostname>". If
* the host key file is not found or has an invalid password, life
* as we know it ends. The host key also becomes the default
* sign key.
*/
snprintf(filename, sizeof(filename), "ntpkey_host_%s", hostname);
pinfo = crypto_key(filename, passwd, NULL);
if (pinfo == NULL) {
msyslog(LOG_ERR,
"crypto_setup: host key file %s not found or corrupt",
filename);
exit (-1);
}
if (pinfo->pkey->type != EVP_PKEY_RSA) {
msyslog(LOG_ERR,
"crypto_setup: host key is not RSA key type");
exit (-1);
}
host_pkey = pinfo->pkey;
sign_pkey = host_pkey;
hostval.fstamp = htonl(pinfo->fstamp);
/*
* Construct public key extension field for agreement scheme.
*/
len = i2d_PublicKey(host_pkey, NULL);
ptr = emalloc(len);
pubkey.ptr = ptr;
i2d_PublicKey(host_pkey, &ptr);
pubkey.fstamp = hostval.fstamp;
pubkey.vallen = htonl(len);
/*
* Load optional sign key from file "ntpkey_sign_<hostname>". If
* available, it becomes the sign key.
*/
snprintf(filename, sizeof(filename), "ntpkey_sign_%s", hostname);
pinfo = crypto_key(filename, passwd, NULL);
if (pinfo != NULL)
sign_pkey = pinfo->pkey;
/*
* Load required certificate from file "ntpkey_cert_<hostname>".
*/
snprintf(filename, sizeof(filename), "ntpkey_cert_%s", hostname);
cinfo = crypto_cert(filename);
if (cinfo == NULL) {
msyslog(LOG_ERR,
"crypto_setup: certificate file %s not found or corrupt",
filename);
exit (-1);
}
cert_host = cinfo;
sign_digest = cinfo->digest;
sign_siglen = EVP_PKEY_size(sign_pkey);
if (cinfo->flags & CERT_PRIV)
crypto_flags |= CRYPTO_FLAG_PRIV;
/*
* The certificate must be self-signed.
*/
if (strcmp(cinfo->subject, cinfo->issuer) != 0) {
msyslog(LOG_ERR,
"crypto_setup: certificate %s is not self-signed",
filename);
exit (-1);
}
hostval.ptr = estrdup(cinfo->subject);
hostval.vallen = htonl(strlen(cinfo->subject));
sys_hostname = hostval.ptr;
ptr = (u_char *)strchr(sys_hostname, '@');
if (ptr != NULL)
sys_groupname = estrdup((char *)++ptr);
if (ident_filename != NULL)
strlcpy(hostname, ident_filename, sizeof(hostname));
/*
* Load optional IFF parameters from file
* "ntpkey_iffkey_<hostname>".
*/
snprintf(filename, sizeof(filename), "ntpkey_iffkey_%s",
hostname);
iffkey_info = crypto_key(filename, passwd, NULL);
if (iffkey_info != NULL)
crypto_flags |= CRYPTO_FLAG_IFF;
/*
* Load optional GQ parameters from file
* "ntpkey_gqkey_<hostname>".
*/
snprintf(filename, sizeof(filename), "ntpkey_gqkey_%s",
hostname);
gqkey_info = crypto_key(filename, passwd, NULL);
if (gqkey_info != NULL)
crypto_flags |= CRYPTO_FLAG_GQ;
/*
* Load optional MV parameters from file
* "ntpkey_mvkey_<hostname>".
*/
snprintf(filename, sizeof(filename), "ntpkey_mvkey_%s",
hostname);
mvkey_info = crypto_key(filename, passwd, NULL);
if (mvkey_info != NULL)
crypto_flags |= CRYPTO_FLAG_MV;
/*
* We met the enemy and he is us. Now strike up the dance.
*/
crypto_flags |= CRYPTO_FLAG_ENAB | (cinfo->nid << 16);
snprintf(statstr, sizeof(statstr), "setup 0x%x host %s %s",
crypto_flags, hostname, OBJ_nid2ln(cinfo->nid));
record_crypto_stats(NULL, statstr);
DPRINTF(1, ("crypto_setup: %s\n", statstr));
}
|
Safe
|
[
"CWE-20"
] |
ntp
|
c4cd4aaf418f57f7225708a93bf48afb2bc9c1da
|
3.234449777823224e+38
| 182 |
CVE-2014-9297
| 0 |
HttpTransact::OriginServerRawOpen(State* s)
{
DebugTxn("http_trans", "[HttpTransact::OriginServerRawOpen]");
switch (s->current.state) {
case STATE_UNDEFINED:
/* fall through */
case OPEN_RAW_ERROR:
/* fall through */
case CONNECTION_ERROR:
/* fall through */
case CONNECTION_CLOSED:
/* fall through */
case CONGEST_CONTROL_CONGESTED_ON_F:
/* fall through */
case CONGEST_CONTROL_CONGESTED_ON_M:
handle_server_died(s);
ink_assert(s->cache_info.action == CACHE_DO_NO_ACTION);
s->next_action = SM_ACTION_INTERNAL_CACHE_NOOP;
break;
case CONNECTION_ALIVE:
build_response(s, &s->hdr_info.client_response, s->client_info.http_version, HTTP_STATUS_OK);
DebugTxn("http_trans", "[OriginServerRawOpen] connection alive. next action is ssl_tunnel");
s->next_action = SM_ACTION_SSL_TUNNEL;
break;
default:
ink_assert(!("s->current.state is set to something unsupported"));
break;
}
return;
}
|
Safe
|
[
"CWE-119"
] |
trafficserver
|
8b5f0345dade6b2822d9b52c8ad12e63011a5c12
|
2.782737350192381e+38
| 34 |
Fix the internal buffer sizing. Thanks to Sudheer for helping isolate this bug
| 0 |
static rsRetVal qDestructDisk(qqueue_t *pThis)
{
DEFiRet;
ASSERT(pThis != NULL);
if(pThis->tVars.disk.pWrite != NULL)
strm.Destruct(&pThis->tVars.disk.pWrite);
if(pThis->tVars.disk.pReadDeq != NULL)
strm.Destruct(&pThis->tVars.disk.pReadDeq);
if(pThis->tVars.disk.pReadDel != NULL)
strm.Destruct(&pThis->tVars.disk.pReadDel);
RETiRet;
}
|
Safe
|
[
"CWE-772"
] |
rsyslog
|
dfa88369d4ca4290db56b843f9eabdae1bfe0fd5
|
2.5433756054676573e+38
| 15 |
bugfix: memory leak when $RepeatedMsgReduction on was used
bug tracker: http://bugzilla.adiscon.com/show_bug.cgi?id=225
| 0 |
TEST_F(RouterTest, HedgedPerTryTimeoutGlobalTimeout) {
enableHedgeOnPerTryTimeout();
NiceMock<Http::MockRequestEncoder> encoder1;
Http::ResponseDecoder* response_decoder1 = nullptr;
EXPECT_CALL(cm_.thread_local_cluster_.conn_pool_, newStream(_, _))
.WillOnce(Invoke(
[&](Http::ResponseDecoder& decoder,
Http::ConnectionPool::Callbacks& callbacks) -> Http::ConnectionPool::Cancellable* {
response_decoder1 = &decoder;
EXPECT_CALL(*router_.retry_state_, onHostAttempted(_));
callbacks.onPoolReady(encoder1, cm_.thread_local_cluster_.conn_pool_.host_,
upstream_stream_info_, Http::Protocol::Http10);
return nullptr;
}));
EXPECT_CALL(cm_.thread_local_cluster_.conn_pool_.host_->outlier_detector_,
putResult(Upstream::Outlier::Result::LocalOriginConnectSuccess,
absl::optional<uint64_t>(absl::nullopt)))
.Times(2);
expectPerTryTimerCreate();
expectResponseTimerCreate();
Http::TestRequestHeaderMapImpl headers{{"x-envoy-upstream-rq-per-try-timeout-ms", "5"}};
HttpTestUtility::addDefaultHeaders(headers);
router_.decodeHeaders(headers, true);
EXPECT_EQ(1U,
callbacks_.route_->route_entry_.virtual_cluster_.stats().upstream_rq_total_.value());
EXPECT_CALL(
cm_.thread_local_cluster_.conn_pool_.host_->outlier_detector_,
putResult(Upstream::Outlier::Result::LocalOriginTimeout, absl::optional<uint64_t>(504)));
EXPECT_CALL(encoder1.stream_, resetStream(_)).Times(0);
EXPECT_CALL(callbacks_, encodeHeaders_(_, _)).Times(0);
router_.retry_state_->expectHedgedPerTryTimeoutRetry();
per_try_timeout_->invokeCallback();
NiceMock<Http::MockRequestEncoder> encoder2;
Http::ResponseDecoder* response_decoder2 = nullptr;
EXPECT_CALL(cm_.thread_local_cluster_.conn_pool_, newStream(_, _))
.WillOnce(Invoke(
[&](Http::ResponseDecoder& decoder,
Http::ConnectionPool::Callbacks& callbacks) -> Http::ConnectionPool::Cancellable* {
response_decoder2 = &decoder;
EXPECT_CALL(*router_.retry_state_, onHostAttempted(_));
callbacks.onPoolReady(encoder2, cm_.thread_local_cluster_.conn_pool_.host_,
upstream_stream_info_, Http::Protocol::Http10);
return nullptr;
}));
expectPerTryTimerCreate();
router_.retry_state_->callback_();
EXPECT_EQ(2U,
callbacks_.route_->route_entry_.virtual_cluster_.stats().upstream_rq_total_.value());
EXPECT_TRUE(verifyHostUpstreamStats(0, 0));
// Now trigger global timeout, expect everything to be reset
EXPECT_CALL(encoder1.stream_, resetStream(_));
EXPECT_CALL(encoder2.stream_, resetStream(_));
EXPECT_CALL(
cm_.thread_local_cluster_.conn_pool_.host_->outlier_detector_,
putResult(Upstream::Outlier::Result::LocalOriginTimeout, absl::optional<uint64_t>(504)));
EXPECT_CALL(callbacks_, encodeHeaders_(_, _))
.WillOnce(Invoke([&](Http::ResponseHeaderMap& headers, bool) -> void {
EXPECT_EQ(headers.Status()->value(), "504");
}));
response_timeout_->invokeCallback();
EXPECT_TRUE(verifyHostUpstreamStats(0, 2));
EXPECT_EQ(2, cm_.thread_local_cluster_.conn_pool_.host_->stats_.rq_timeout_.value());
// TODO: Verify hedge stats here once they are implemented.
}
|
Safe
|
[
"CWE-703"
] |
envoy
|
18871dbfb168d3512a10c78dd267ff7c03f564c6
|
7.36117020062278e+37
| 71 |
[1.18] CVE-2022-21655
Crash with direct_response
Signed-off-by: Otto van der Schaaf <ovanders@redhat.com>
| 0 |
PHP_METHOD(Phar, startBuffering)
{
PHAR_ARCHIVE_OBJECT();
if (zend_parse_parameters_none() == FAILURE) {
return;
}
phar_obj->arc.archive->donotflush = 1;
}
|
Safe
|
[
"CWE-119"
] |
php-src
|
13ad4d3e971807f9a58ab5933182907dc2958539
|
3.243025403131136e+38
| 10 |
Fix bug #71354 - remove UMR when size is 0
| 0 |
term_check_multiplot_okay(TBOOLEAN f_interactive)
{
FPRINTF((stderr, "term_multiplot_okay(%d)\n", f_interactive));
if (!term_initialised)
return; /* they've not started yet */
/* make sure that it is safe to issue an interactive prompt
* it is safe if
* it is not an interactive read, or
* the terminal supports interactive multiplot, or
* we are not writing to stdout and terminal doesn't
* refuse multiplot outright
*/
if (!f_interactive || (term->flags & TERM_CAN_MULTIPLOT) ||
((gpoutfile != stdout) && !(term->flags & TERM_CANNOT_MULTIPLOT))
) {
/* it's okay to use multiplot here, but suspend first */
term_suspend();
return;
}
/* uh oh: they're not allowed to be in multiplot here */
term_end_multiplot();
/* at this point we know that it is interactive and that the
* terminal can either only do multiplot when writing to
* to a file, or it does not do multiplot at all
*/
if (term->flags & TERM_CANNOT_MULTIPLOT)
int_error(NO_CARET, "This terminal does not support multiplot");
else
int_error(NO_CARET, "Must set output to a file or put all multiplot commands on one input line");
}
|
Safe
|
[
"CWE-787"
] |
gnuplot
|
963c7df3e0c5266efff260d0dff757dfe03d3632
|
1.3569525329551762e+38
| 35 |
Better error handling for faulty font syntax
A missing close-quote in an enhanced text font specification could
cause a segfault.
Bug #2303
| 0 |
NOEXPORT int service_install() {
SC_HANDLE scm, service;
TCHAR stunnel_exe_path[MAX_PATH];
LPTSTR service_path;
TCHAR descr_str[DESCR_LEN];
SERVICE_DESCRIPTION descr;
scm=OpenSCManager(0, 0, SC_MANAGER_CREATE_SERVICE);
if(!scm) {
error_box(TEXT("OpenSCManager"));
return 1;
}
GetModuleFileName(0, stunnel_exe_path, MAX_PATH);
service_path=str_tprintf(TEXT("\"%s\" -service %s"),
stunnel_exe_path, get_params());
service=CreateService(scm, SERVICE_NAME, SERVICE_DISPLAY_NAME,
SERVICE_ALL_ACCESS,
SERVICE_WIN32_OWN_PROCESS|SERVICE_INTERACTIVE_PROCESS,
SERVICE_AUTO_START, SERVICE_ERROR_NORMAL, service_path,
NULL, NULL, TEXT("TCPIP\0"), NULL, NULL);
if(!service) {
error_box(TEXT("CreateService"));
str_free(service_path);
CloseServiceHandle(scm);
return 1;
}
str_free(service_path);
if(LoadString(ghInst, IDS_SERVICE_DESC, descr_str, DESCR_LEN)) {
descr.lpDescription=descr_str;
ChangeServiceConfig2(service, SERVICE_CONFIG_DESCRIPTION, &descr);
}
message_box(TEXT("Service installed"), MB_ICONINFORMATION);
CloseServiceHandle(service);
CloseServiceHandle(scm);
return 0;
}
|
Safe
|
[
"CWE-295"
] |
stunnel
|
ebad9ddc4efb2635f37174c9d800d06206f1edf9
|
3.1388332819114803e+38
| 36 |
stunnel-5.57
| 0 |
static int setCompDefaults(struct jpeg_compress_struct *cinfo, int pixelFormat,
int subsamp, int jpegQual, int flags)
{
int retval = 0;
#ifndef NO_GETENV
char *env = NULL;
#endif
cinfo->in_color_space = pf2cs[pixelFormat];
cinfo->input_components = tjPixelSize[pixelFormat];
jpeg_set_defaults(cinfo);
#ifndef NO_GETENV
if ((env = getenv("TJ_OPTIMIZE")) != NULL && strlen(env) > 0 &&
!strcmp(env, "1"))
cinfo->optimize_coding = TRUE;
if ((env = getenv("TJ_ARITHMETIC")) != NULL && strlen(env) > 0 &&
!strcmp(env, "1"))
cinfo->arith_code = TRUE;
if ((env = getenv("TJ_RESTART")) != NULL && strlen(env) > 0) {
int temp = -1;
char tempc = 0;
if (sscanf(env, "%d%c", &temp, &tempc) >= 1 && temp >= 0 &&
temp <= 65535) {
if (toupper(tempc) == 'B') {
cinfo->restart_interval = temp;
cinfo->restart_in_rows = 0;
} else
cinfo->restart_in_rows = temp;
}
}
#endif
if (jpegQual >= 0) {
jpeg_set_quality(cinfo, jpegQual, TRUE);
if (jpegQual >= 96 || flags & TJFLAG_ACCURATEDCT)
cinfo->dct_method = JDCT_ISLOW;
else
cinfo->dct_method = JDCT_FASTEST;
}
if (subsamp == TJSAMP_GRAY)
jpeg_set_colorspace(cinfo, JCS_GRAYSCALE);
else if (pixelFormat == TJPF_CMYK)
jpeg_set_colorspace(cinfo, JCS_YCCK);
else
jpeg_set_colorspace(cinfo, JCS_YCbCr);
if (flags & TJFLAG_PROGRESSIVE)
jpeg_simple_progression(cinfo);
#ifndef NO_GETENV
else if ((env = getenv("TJ_PROGRESSIVE")) != NULL && strlen(env) > 0 &&
!strcmp(env, "1"))
jpeg_simple_progression(cinfo);
#endif
cinfo->comp_info[0].h_samp_factor = tjMCUWidth[subsamp] / 8;
cinfo->comp_info[1].h_samp_factor = 1;
cinfo->comp_info[2].h_samp_factor = 1;
if (cinfo->num_components > 3)
cinfo->comp_info[3].h_samp_factor = tjMCUWidth[subsamp] / 8;
cinfo->comp_info[0].v_samp_factor = tjMCUHeight[subsamp] / 8;
cinfo->comp_info[1].v_samp_factor = 1;
cinfo->comp_info[2].v_samp_factor = 1;
if (cinfo->num_components > 3)
cinfo->comp_info[3].v_samp_factor = tjMCUHeight[subsamp] / 8;
return retval;
}
|
Safe
|
[
"CWE-787"
] |
libjpeg-turbo
|
2a9e3bd7430cfda1bc812d139e0609c6aca0b884
|
5.525622601288828e+36
| 69 |
TurboJPEG: Properly handle gigapixel images
Prevent several integer overflow issues and subsequent segfaults that
occurred when attempting to compress or decompress gigapixel images with
the TurboJPEG API:
- Modify tjBufSize(), tjBufSizeYUV2(), and tjPlaneSizeYUV() to avoid
integer overflow when computing the return values and to return an
error if such an overflow is unavoidable.
- Modify tjunittest to validate the above.
- Modify tjCompress2(), tjEncodeYUVPlanes(), tjDecompress2(), and
tjDecodeYUVPlanes() to avoid integer overflow when computing the row
pointers in the 64-bit TurboJPEG C API.
- Modify TJBench (both C and Java versions) to avoid overflowing the
size argument to malloc()/new and to fail gracefully if such an
overflow is unavoidable.
In general, this allows gigapixel images to be accommodated by the
64-bit TurboJPEG C API when using automatic JPEG buffer (re)allocation.
Such images cannot currently be accommodated without automatic JPEG
buffer (re)allocation, due to the fact that tjAlloc() accepts a 32-bit
integer argument (oops.) Such images cannot be accommodated in the
TurboJPEG Java API due to the fact that Java always uses a signed 32-bit
integer as an array index.
Fixes #361
| 0 |
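The commit's core technique is widening the arithmetic and refusing sizes that cannot be represented, instead of letting 32-bit products silently wrap. A simplified standalone sketch in that spirit; it is not the actual libjpeg-turbo code:

```c
#include <limits.h>
#include <stdio.h>

/* Overflow-aware buffer sizing, in the spirit of the tjBufSize() hardening. */
static long long buf_size_checked(int width, int height, int bytes_per_pixel)
{
    if (width < 1 || height < 1 || bytes_per_pixel < 1)
        return -1;                                 /* reject bad inputs */
    unsigned long long n =
        (unsigned long long)width * height * bytes_per_pixel;
    if (n > LLONG_MAX)
        return -1;                                 /* unrepresentable */
    return (long long)n;
}

int main(void)
{
    /* ~2.1 gigapixels: 32-bit arithmetic silently wraps to 18532 bytes,
     * while the widened computation reports the real ~8.6 GB. */
    printf("checked: %lld, 32-bit math: %u\n",
           buf_size_checked(46341, 46341, 4), 46341u * 46341u * 4u);
    return 0;
}
```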
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}
retry:
binder_inner_proc_lock(proc);
wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
binder_inner_proc_unlock(proc);
thread->looper |= BINDER_LOOPER_STATE_WAITING;
trace_binder_wait_for_work(wait_for_proc_work,
!!thread->transaction_stack,
!binder_worklist_empty(proc, &thread->todo));
if (wait_for_proc_work) {
if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))) {
binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
proc->pid, thread->pid, thread->looper);
wait_event_interruptible(binder_user_error_wait,
binder_stop_on_user_error < 2);
}
binder_set_nice(proc->default_priority);
}
if (non_block) {
if (!binder_has_work(thread, wait_for_proc_work))
ret = -EAGAIN;
} else {
ret = binder_wait_for_work(thread, wait_for_proc_work);
}
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
if (ret)
return ret;
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w = NULL;
struct list_head *list = NULL;
struct binder_transaction *t = NULL;
struct binder_thread *t_from;
binder_inner_proc_lock(proc);
if (!binder_worklist_empty_ilocked(&thread->todo))
list = &thread->todo;
else if (!binder_worklist_empty_ilocked(&proc->todo) &&
wait_for_proc_work)
list = &proc->todo;
else {
binder_inner_proc_unlock(proc);
/* no data added */
if (ptr - buffer == 4 && !thread->looper_need_return)
goto retry;
break;
}
if (end - ptr < sizeof(tr) + 4) {
binder_inner_proc_unlock(proc);
break;
}
w = binder_dequeue_work_head_ilocked(list);
if (binder_worklist_empty_ilocked(&thread->todo))
thread->process_todo = false;
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
binder_inner_proc_unlock(proc);
t = container_of(w, struct binder_transaction, work);
} break;
case BINDER_WORK_RETURN_ERROR: {
struct binder_error *e = container_of(
w, struct binder_error, work);
WARN_ON(e->cmd == BR_OK);
binder_inner_proc_unlock(proc);
if (put_user(e->cmd, (uint32_t __user *)ptr))
return -EFAULT;
e->cmd = BR_OK;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, e->cmd);
} break;
case BINDER_WORK_TRANSACTION_COMPLETE: {
binder_inner_proc_unlock(proc);
cmd = BR_TRANSACTION_COMPLETE;
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
"%d:%d BR_TRANSACTION_COMPLETE\n",
proc->pid, thread->pid);
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
case BINDER_WORK_NODE: {
struct binder_node *node = container_of(w, struct binder_node, work);
int strong, weak;
binder_uintptr_t node_ptr = node->ptr;
binder_uintptr_t node_cookie = node->cookie;
int node_debug_id = node->debug_id;
int has_weak_ref;
int has_strong_ref;
void __user *orig_ptr = ptr;
BUG_ON(proc != node->proc);
strong = node->internal_strong_refs ||
node->local_strong_refs;
weak = !hlist_empty(&node->refs) ||
node->local_weak_refs ||
node->tmp_refs || strong;
has_strong_ref = node->has_strong_ref;
has_weak_ref = node->has_weak_ref;
if (weak && !has_weak_ref) {
node->has_weak_ref = 1;
node->pending_weak_ref = 1;
node->local_weak_refs++;
}
if (strong && !has_strong_ref) {
node->has_strong_ref = 1;
node->pending_strong_ref = 1;
node->local_strong_refs++;
}
if (!strong && has_strong_ref)
node->has_strong_ref = 0;
if (!weak && has_weak_ref)
node->has_weak_ref = 0;
if (!weak && !strong) {
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx deleted\n",
proc->pid, thread->pid,
node_debug_id,
(u64)node_ptr,
(u64)node_cookie);
rb_erase(&node->rb_node, &proc->nodes);
binder_inner_proc_unlock(proc);
binder_node_lock(node);
/*
* Acquire the node lock before freeing the
* node to serialize with other threads that
* may have been holding the node lock while
* decrementing this node (avoids race where
* this thread frees while the other thread
* is unlocking the node after the final
* decrement)
*/
binder_node_unlock(node);
binder_free_node(node);
} else
binder_inner_proc_unlock(proc);
if (weak && !has_weak_ref)
ret = binder_put_node_cmd(
proc, thread, &ptr, node_ptr,
node_cookie, node_debug_id,
BR_INCREFS, "BR_INCREFS");
if (!ret && strong && !has_strong_ref)
ret = binder_put_node_cmd(
proc, thread, &ptr, node_ptr,
node_cookie, node_debug_id,
BR_ACQUIRE, "BR_ACQUIRE");
if (!ret && !strong && has_strong_ref)
ret = binder_put_node_cmd(
proc, thread, &ptr, node_ptr,
node_cookie, node_debug_id,
BR_RELEASE, "BR_RELEASE");
if (!ret && !weak && has_weak_ref)
ret = binder_put_node_cmd(
proc, thread, &ptr, node_ptr,
node_cookie, node_debug_id,
BR_DECREFS, "BR_DECREFS");
if (orig_ptr == ptr)
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx state unchanged\n",
proc->pid, thread->pid,
node_debug_id,
(u64)node_ptr,
(u64)node_cookie);
if (ret)
return ret;
} break;
case BINDER_WORK_DEAD_BINDER:
case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
struct binder_ref_death *death;
uint32_t cmd;
binder_uintptr_t cookie;
death = container_of(w, struct binder_ref_death, work);
if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
else
cmd = BR_DEAD_BINDER;
cookie = death->cookie;
binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
"%d:%d %s %016llx\n",
proc->pid, thread->pid,
cmd == BR_DEAD_BINDER ?
"BR_DEAD_BINDER" :
"BR_CLEAR_DEATH_NOTIFICATION_DONE",
(u64)cookie);
if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
binder_inner_proc_unlock(proc);
kfree(death);
binder_stats_deleted(BINDER_STAT_DEATH);
} else {
binder_enqueue_work_ilocked(
w, &proc->delivered_death);
binder_inner_proc_unlock(proc);
}
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (put_user(cookie,
(binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
binder_stat_br(proc, thread, cmd);
if (cmd == BR_DEAD_BINDER)
goto done; /* DEAD_BINDER notifications can cause transactions */
} break;
}
if (!t)
continue;
BUG_ON(t->buffer == NULL);
if (t->buffer->target_node) {
struct binder_node *target_node = t->buffer->target_node;
tr.target.ptr = target_node->ptr;
tr.cookie = target_node->cookie;
t->saved_priority = task_nice(current);
if (t->priority < target_node->min_priority &&
!(t->flags & TF_ONE_WAY))
binder_set_nice(t->priority);
else if (!(t->flags & TF_ONE_WAY) ||
t->saved_priority > target_node->min_priority)
binder_set_nice(target_node->min_priority);
cmd = BR_TRANSACTION;
} else {
tr.target.ptr = 0;
tr.cookie = 0;
cmd = BR_REPLY;
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
t_from = binder_get_txn_from(t);
if (t_from) {
struct task_struct *sender = t_from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
tr.sender_pid = 0;
}
tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (binder_uintptr_t)
((uintptr_t)t->buffer->data +
binder_alloc_get_user_buffer_offset(&proc->alloc));
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
if (put_user(cmd, (uint32_t __user *)ptr)) {
if (t_from)
binder_thread_dec_tmpref(t_from);
binder_cleanup_transaction(t, "put_user failed",
BR_FAILED_REPLY);
return -EFAULT;
}
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr))) {
if (t_from)
binder_thread_dec_tmpref(t_from);
binder_cleanup_transaction(t, "copy_to_user failed",
BR_FAILED_REPLY);
return -EFAULT;
}
ptr += sizeof(tr);
trace_binder_transaction_received(t);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_TRANSACTION,
"%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
proc->pid, thread->pid,
(cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
"BR_REPLY",
t->debug_id, t_from ? t_from->proc->pid : 0,
t_from ? t_from->pid : 0, cmd,
t->buffer->data_size, t->buffer->offsets_size,
(u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
if (t_from)
binder_thread_dec_tmpref(t_from);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
binder_inner_proc_lock(thread->proc);
t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
binder_inner_proc_unlock(thread->proc);
} else {
binder_free_transaction(t);
}
break;
}
done:
*consumed = ptr - buffer;
binder_inner_proc_lock(proc);
if (proc->requested_threads == 0 &&
list_empty(&thread->proc->waiting_threads) &&
proc->requested_threads_started < proc->max_threads &&
(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))
/* the user-space code fails to spawn a new thread if we leave this out */) {
proc->requested_threads++;
binder_inner_proc_unlock(proc);
binder_debug(BINDER_DEBUG_THREADS,
"%d:%d BR_SPAWN_LOOPER\n",
proc->pid, thread->pid);
if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
return -EFAULT;
binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
} else
binder_inner_proc_unlock(proc);
return 0;
}
|
Safe
|
[
"CWE-362"
] |
linux
|
5eeb2ca02a2f6084fc57ae5c244a38baab07033a
|
1.6595143341908929e+38
| 359 |
ANDROID: binder: synchronize_rcu() when using POLLFREE.
To prevent races with ep_remove_waitqueue() removing the
waitqueue at the same time.
Reported-by: syzbot+a2a3c4909716e271487e@syzkaller.appspotmail.com
Signed-off-by: Martijn Coenen <maco@android.com>
Cc: stable <stable@vger.kernel.org> # 4.14+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
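The race described in the commit message above is the classic free-while-readers-exist pattern: the fix inserts an RCU grace period between unpublishing the waitqueue and freeing it. A minimal user-space sketch of that ordering, using liburcu rather than the kernel RCU API (the struct and names are illustrative, not binder's):
/* Illustrative only: unpublish -> synchronize_rcu() -> free.
 * Build with: gcc demo.c -lurcu */
#include <urcu.h>
#include <stdlib.h>
struct waiter { int id; };
static struct waiter *shared_waiter;
static void retire_waiter(struct waiter *old)
{
	synchronize_rcu();  /* wait until no reader can still see 'old' */
	free(old);          /* only now is the memory safe to reuse */
}
int main(void)
{
	rcu_register_thread();
	shared_waiter = calloc(1, sizeof(*shared_waiter));
	struct waiter *old = shared_waiter;
	rcu_assign_pointer(shared_waiter, NULL);  /* unpublish first */
	retire_waiter(old);                       /* then wait, then free */
	rcu_unregister_thread();
	return 0;
}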
nm_utils_dhcp_client_id_duid(guint32 iaid, const guint8 *duid, gsize duid_len)
{
struct _nm_packed {
guint8 type;
guint32 iaid;
guint8 duid[];
} *client_id;
gsize total_size;
/* the @duid must include the 16 bit duid-type and the data (of max 128 bytes). */
g_return_val_if_fail(duid_len > 2 && duid_len < 128 + 2, NULL);
g_return_val_if_fail(duid, NULL);
total_size = sizeof(*client_id) + duid_len;
client_id = g_malloc(total_size);
client_id->type = 255;
unaligned_write_be32(&client_id->iaid, iaid);
memcpy(client_id->duid, duid, duid_len);
return g_bytes_new_take(client_id, total_size);
}
|
Safe
|
[
"CWE-20"
] |
NetworkManager
|
420784e342da4883f6debdfe10cde68507b10d27
|
1.3304249205347131e+38
| 22 |
core: fix crash in nm_wildcard_match_check()
It's not entirely clear how to treat %NULL.
Clearly "match.interface-name=eth0" should not
match with an interface %NULL. But what about
"match.interface-name=!eth0"? It's now implemented
that negative matches still succeed against %NULL.
What about "match.interface-name=*"? That probably
should also match with %NULL. So we treat %NULL really
like "".
Against commit 11cd443448bc ('iwd: Don't call IWD methods when device
unmanaged'), we got this backtrace:
#0 0x00007f1c164069f1 in __strnlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62
#1 0x00007f1c1637ac9e in __fnmatch (pattern=<optimized out>, string=<optimized out>, string@entry=0x0, flags=flags@entry=0) at fnmatch.c:379
p = 0x0
res = <optimized out>
orig_pattern = <optimized out>
n = <optimized out>
wpattern = 0x7fff8d860730 L"pci-0000:03:00.0"
ps = {__count = 0, __value = {__wch = 0, __wchb = "\000\000\000"}}
wpattern_malloc = 0x0
wstring_malloc = 0x0
wstring = <optimized out>
alloca_used = 80
__PRETTY_FUNCTION__ = "__fnmatch"
#2 0x0000564484a978bf in nm_wildcard_match_check (str=0x0, patterns=<optimized out>, num_patterns=<optimized out>) at src/core/nm-core-utils.c:1959
is_inverted = 0
is_mandatory = 0
match = <optimized out>
p = 0x564486c43fa0 "pci-0000:03:00.0"
has_optional = 0
has_any_optional = 0
i = <optimized out>
#3 0x0000564484bf4797 in check_connection_compatible (self=<optimized out>, connection=<optimized out>, error=0x0) at src/core/devices/nm-device.c:7499
patterns = <optimized out>
device_driver = 0x564486c76bd0 "veth"
num_patterns = 1
priv = 0x564486cbe0b0
__func__ = "check_connection_compatible"
device_iface = <optimized out>
local = 0x564486c99a60
conn_iface = 0x0
klass = <optimized out>
s_match = 0x564486c63df0 [NMSettingMatch]
#4 0x0000564484c38491 in check_connection_compatible (device=0x564486cbe590 [NMDeviceVeth], connection=0x564486c6b160, error=0x0) at src/core/devices/nm-device-ethernet.c:348
self = 0x564486cbe590 [NMDeviceVeth]
s_wired = <optimized out>
Fixes: 3ced486f4162 ('libnm/match: extend syntax for match patterns with '|', '&', '!' and '\\'')
https://bugzilla.redhat.com/show_bug.cgi?id=1942741
| 0 |
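The semantics chosen in the commit message above reduce to one rule: never hand %NULL to fnmatch(), treat it like "", and keep negated patterns meaningful. A hedged sketch of that rule (illustrative helper, not the actual nm_wildcard_match_check()):
#include <fnmatch.h>
#include <stdbool.h>
#include <stddef.h>
static bool wildcard_match_one(const char *str, const char *pattern)
{
	bool invert = false;
	if (pattern[0] == '!') {
		invert = true;
		pattern++;
	}
	if (str == NULL)
		str = "";   /* the crash was fnmatch(pattern, NULL, 0) */
	bool matched = (fnmatch(pattern, str, 0) == 0);
	return invert ? !matched : matched;
}
Note that fnmatch("*", "", 0) succeeds, so "*" matches %NULL under this rule, and "!eth0" also succeeds against %NULL, exactly as described above.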
void channels_query_init(void)
{
settings_add_bool("misc", "channel_sync", TRUE);
settings_add_int("misc", "channel_max_who_sync", 1000);
signal_add("server connected", (SIGNAL_FUNC) sig_connected);
signal_add("server disconnected", (SIGNAL_FUNC) sig_disconnected);
signal_add("channel joined", (SIGNAL_FUNC) sig_channel_joined);
signal_add("channel destroyed", (SIGNAL_FUNC) sig_channel_destroyed);
signal_add("chanquery mode", (SIGNAL_FUNC) event_channel_mode);
signal_add("chanquery who end", (SIGNAL_FUNC) event_end_of_who);
signal_add("chanquery ban end", (SIGNAL_FUNC) event_end_of_banlist);
signal_add("chanquery abort", (SIGNAL_FUNC) query_current_error);
}
|
Safe
|
[
"CWE-416"
] |
irssi
|
43e44d553d44e313003cee87e6ea5e24d68b84a1
|
8.106589224176458e+37
| 16 |
Merge branch 'security' into 'master'
Security
Closes GL#12, GL#13, GL#14, GL#15, GL#16
See merge request irssi/irssi!23
| 0 |
msg_print (struct msg *msg)
{
if (!msg)
{
zlog_debug ("msg_print msg=NULL!\n");
return;
}
#ifdef ORIGINAL_CODING
zlog_debug
("msg=%p msgtype=%d msglen=%d msgseq=%d streamdata=%p streamsize=%lu\n",
msg, msg->hdr.msgtype, ntohs (msg->hdr.msglen), ntohl (msg->hdr.msgseq),
STREAM_DATA (msg->s), STREAM_SIZE (msg->s));
#else /* ORIGINAL_CODING */
/* API message common header part. */
zlog_debug
("API-msg [%s]: type(%d),len(%d),seq(%lu),data(%p),size(%zd)",
ospf_api_typename (msg->hdr.msgtype), msg->hdr.msgtype,
ntohs (msg->hdr.msglen), (unsigned long) ntohl (msg->hdr.msgseq),
STREAM_DATA (msg->s), STREAM_SIZE (msg->s));
/* API message body part. */
#ifdef ndef
/* Generic Hex/Ascii dump */
DumpBuf (STREAM_DATA (msg->s), STREAM_SIZE (msg->s)); /* Sorry, deleted! */
#else /* ndef */
/* Message-type dependent dump function. */
#endif /* ndef */
return;
#endif /* ORIGINAL_CODING */
}
|
Safe
|
[
"CWE-119"
] |
quagga
|
3f872fe60463a931c5c766dbf8c36870c0023e88
|
2.7021269072026554e+38
| 32 |
ospfd: CVE-2013-2236, stack overrun in apiserver
the OSPF API-server (exporting the LSDB and allowing announcement of
Opaque-LSAs) writes past the end of fixed on-stack buffers. This leads
to an exploitable stack overflow.
For this condition to occur, the following two conditions must be true:
- Quagga is configured with --enable-opaque-lsa
- ospfd is started with the "-a" command line option
If either of these does not hold, the relevant code is not executed and
the issue does not get triggered.
Since the issue occurs on receiving large LSAs (larger than 1488 bytes),
it is possible for this to happen during normal operation of a network.
In particular, if there is an OSPF router with a large number of
interfaces, the Router-LSA of that router may exceed 1488 bytes and
trigger this, leading to an ospfd crash.
For an attacker to exploit this, s/he must be able to inject valid LSAs
into the OSPF domain. Any best-practice protection measure (using
crypto authentication, restricting OSPF to internal interfaces, packet
filtering protocol 89, etc.) will prevent exploitation. On top of that,
remote (not on an OSPF-speaking network segment) attackers will have
difficulties bringing up the adjacency needed to inject a LSA.
This patch only performs minimal changes to remove the possibility of a
stack overrun. The OSPF API in general is quite ugly and needs a
rewrite.
Reported-by: Ricky Charlet <ricky.charlet@hp.com>
Cc: Florian Weimer <fweimer@redhat.com>
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
| 0 |
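The class of fix the message describes is a length check before copying attacker-influenced data into a fixed on-stack buffer. A minimal sketch, with an illustrative buffer size matching the 1488-byte threshold mentioned above (not the actual ospfd API):
#include <string.h>
#include <stdint.h>
#include <stddef.h>
enum { FIXED_BUF_SIZE = 1488 };
static int copy_lsa_bounded(uint8_t buf[FIXED_BUF_SIZE],
			    const uint8_t *lsa, size_t lsa_len)
{
	if (lsa_len > FIXED_BUF_SIZE)
		return -1;   /* oversized LSA: refuse, don't smash the stack */
	memcpy(buf, lsa, lsa_len);
	return 0;
}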
int vrend_renderer_execute(void *execute_args, uint32_t execute_size)
{
struct virgl_renderer_hdr *hdr = execute_args;
if (hdr->stype_version != 0)
return -EINVAL;
switch (hdr->stype) {
case VIRGL_RENDERER_STRUCTURE_TYPE_SUPPORTED_STRUCTURES:
return vrend_renderer_supported_structures(execute_args, execute_size);
case VIRGL_RENDERER_STRUCTURE_TYPE_EXPORT_QUERY:
return vrend_renderer_export_query(execute_args, execute_size);
default:
return -EINVAL;
}
}
|
Safe
|
[
"CWE-787"
] |
virglrenderer
|
cbc8d8b75be360236cada63784046688aeb6d921
|
3.041358007526132e+37
| 15 |
vrend: check transfer bounds for negative values too and report error
Closes #138
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
| 0 |
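"Check transfer bounds for negative values too" amounts to validating signed offsets and sizes before any arithmetic that assumes they are non-negative. A sketch of such a check, with hypothetical field names and an overflow-safe comparison:
#include <stdint.h>
#include <stdbool.h>
static bool transfer_bounds_ok(int32_t x, int32_t w, int32_t width)
{
	if (x < 0 || w < 0)             /* negative offsets/sizes are invalid */
		return false;
	if (w > width || x > width - w) /* overflow-safe "x + w <= width" */
		return false;
	return true;
}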
THD *next_global_thread(THD *thd)
{
if (threads.is_last(thd))
return NULL;
struct ilink *next= thd->next;
return static_cast<THD*>(next);
}
|
Safe
|
[
"CWE-264"
] |
mysql-server
|
48bd8b16fe382be302c6f0b45931be5aa6f29a0e
|
9.462902965824894e+37
| 7 |
Bug#24388753: PRIVILEGE ESCALATION USING MYSQLD_SAFE
[This is the 5.5/5.6 version of the bugfix].
The problem was that it was possible to write log files ending
in .ini/.cnf that later could be parsed as an options file.
This made it possible for users to specify startup options
without the permissions to do so.
This patch fixes the problem by disallowing general query log
and slow query log to be written to files ending in .ini and .cnf.
| 0 |
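The fix described above is a file-name suffix check: refuse log destinations that a later startup could parse as an options file. A sketch with illustrative helper names:
#include <string.h>
#include <strings.h>
#include <stdbool.h>
static bool has_suffix(const char *s, const char *suffix)
{
	size_t ls = strlen(s), lx = strlen(suffix);
	return ls >= lx && strcasecmp(s + ls - lx, suffix) == 0;
}
static bool log_file_name_allowed(const char *path)
{
	return !has_suffix(path, ".ini") && !has_suffix(path, ".cnf");
}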
static void vmxnet3_set_events(VMXNET3State *s, uint32_t val)
{
uint32_t events;
PCIDevice *d = PCI_DEVICE(s);
VMW_CBPRN("Setting events: 0x%x", val);
events = VMXNET3_READ_DRV_SHARED32(d, s->drv_shmem, ecr) | val;
VMXNET3_WRITE_DRV_SHARED32(d, s->drv_shmem, ecr, events);
}
|
Safe
|
[
"CWE-416"
] |
qemu
|
6c352ca9b4ee3e1e286ea9e8434bd8e69ac7d0d8
|
2.559157396380201e+38
| 9 |
net: vmxnet3: check for device_active before write
Vmxnet3 device emulator does not check if the device is active,
before using it for write. It leads to a use after free issue,
if the vmxnet3_io_bar0_write routine is called after the device is
deactivated. Add check to avoid it.
Reported-by: Li Qiang <liqiang6-s@360.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Acked-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
| 0 |
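The use-after-free fix pattern named in the message is a guard at the top of the write handler: bail out when the device has been deactivated so freed state is never touched. A standalone sketch (field names are illustrative, not the exact QEMU structure):
#include <stdbool.h>
#include <stdint.h>
struct dev_state {
	bool device_active;   /* cleared on deactivate/reset */
	uint32_t reg;
};
static void bar0_write(struct dev_state *s, uint32_t val)
{
	if (!s->device_active)
		return;           /* device quiesced: ignore the write */
	s->reg = val;             /* normal register handling */
}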
void ASC_getUserIdentRQ(T_ASC_Parameters* params, UserIdentityNegotiationSubItemRQ** usrIdentRQ)
{
*usrIdentRQ = params->DULparams.reqUserIdentNeg;
}
|
Safe
|
[
"CWE-415",
"CWE-703",
"CWE-401"
] |
dcmtk
|
a9697dfeb672b0b9412c00c7d36d801e27ec85cb
|
6.823272757933464e+37
| 4 |
Fixed poss. NULL pointer dereference/double free.
Thanks to Jinsheng Ba <bajinsheng@u.nus.edu> for the report and some patches.
| 0 |
int fpm_children_init_main() /* {{{ */
{
if (fpm_global_config.emergency_restart_threshold &&
fpm_global_config.emergency_restart_interval) {
last_faults = malloc(sizeof(time_t) * fpm_global_config.emergency_restart_threshold);
if (!last_faults) {
return -1;
}
memset(last_faults, 0, sizeof(time_t) * fpm_global_config.emergency_restart_threshold);
}
if (0 > fpm_cleanup_add(FPM_CLEANUP_ALL, fpm_children_cleanup, 0)) {
return -1;
}
return 0;
}
|
Safe
|
[
"CWE-787"
] |
php-src
|
fadb1f8c1d08ae62b4f0a16917040fde57a3b93b
|
1.7521850700041236e+38
| 20 |
Fix bug #81026 (PHP-FPM oob R/W in root process leading to priv escalation)
The main change is to store scoreboard procs directly to the variable sized
array rather than indirectly through the pointer.
Signed-off-by: Stanislav Malyshev <stas@php.net>
| 0 |
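The layout change the message describes is moving the per-child entries into a flexible array member inside the scoreboard itself, rather than reaching them indirectly through a stored pointer. A sketch with illustrative names, not PHP-FPM's actual structures:
#include <stdlib.h>
struct proc { int pid; };
struct scoreboard {
	unsigned nprocs;
	struct proc procs[];   /* stored directly, no indirection */
};
static struct scoreboard *scoreboard_new(unsigned nprocs)
{
	struct scoreboard *sb =
		calloc(1, sizeof(*sb) + nprocs * sizeof(struct proc));
	if (sb)
		sb->nprocs = nprocs;
	return sb;
}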
static void si_dbus_match_rule_update(struct session_info *info)
{
DBusError error;
if (info->connection == NULL)
return;
si_dbus_match_remove(info);
/* Seat signals */
if (info->seat != NULL) {
info->match_seat_signals =
g_strdup_printf ("type='signal',interface='%s',path='%s',"
"member='ActiveSessionChanged'",
INTERFACE_CONSOLE_KIT_SEAT,
info->seat);
if (info->verbose)
syslog(LOG_DEBUG, "(console-kit) seat match: %s",
info->match_seat_signals);
dbus_error_init(&error);
dbus_bus_add_match(info->connection,
info->match_seat_signals,
&error);
if (dbus_error_is_set(&error)) {
syslog(LOG_WARNING, "Unable to add dbus rule match: %s",
error.message);
dbus_error_free(&error);
g_clear_pointer(&info->match_seat_signals, g_free);
}
}
/* Session signals */
if (info->active_session != NULL) {
info->match_session_signals =
g_strdup_printf ("type='signal',interface='%s',path='%s'",
INTERFACE_CONSOLE_KIT_SESSION,
info->active_session);
if (info->verbose)
syslog(LOG_DEBUG, "(console-kit) session match: %s",
info->match_session_signals);
dbus_error_init(&error);
dbus_bus_add_match(info->connection,
info->match_session_signals,
&error);
if (dbus_error_is_set(&error)) {
syslog(LOG_WARNING, "Unable to add dbus rule match: %s",
error.message);
dbus_error_free(&error);
g_clear_pointer(&info->match_session_signals, g_free);
}
}
}
|
Safe
|
[
"CWE-362"
] |
spice-vd_agent
|
5c50131797e985d0a5654c1fd7000ae945ed29a7
|
6.355668079631186e+37
| 54 |
Better check for sessions
Do not allow other users to hijack a session checking that
the process is launched by the owner of the session.
Signed-off-by: Frediano Ziglio <freddy77@gmail.com>
Acked-by: Uri Lublin <uril@redhat.com>
| 0 |
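The ownership check the message describes can be sketched as: accept a session only if the requesting process is owned by the session's user. On Linux the owner of /proc/<pid> reflects the process's uid; helper name and approach here are an assumption, not the vd_agent code:
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdbool.h>
static bool process_owned_by(pid_t pid, uid_t uid)
{
	char path[64];
	struct stat st;
	snprintf(path, sizeof(path), "/proc/%ld", (long)pid);
	if (stat(path, &st) != 0)
		return false;         /* process gone: reject */
	return st.st_uid == uid;      /* hijack attempt if owners differ */
}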
static int ieee80211_assign_beacon(struct ieee80211_sub_if_data *sdata,
struct cfg80211_beacon_data *params,
const struct ieee80211_csa_settings *csa)
{
struct beacon_data *new, *old;
int new_head_len, new_tail_len;
int size, err;
u32 changed = BSS_CHANGED_BEACON;
old = sdata_dereference(sdata->u.ap.beacon, sdata);
/* Need to have a beacon head if we don't have one yet */
if (!params->head && !old)
return -EINVAL;
/* new or old head? */
if (params->head)
new_head_len = params->head_len;
else
new_head_len = old->head_len;
/* new or old tail? */
if (params->tail || !old)
/* params->tail_len will be zero for !params->tail */
new_tail_len = params->tail_len;
else
new_tail_len = old->tail_len;
size = sizeof(*new) + new_head_len + new_tail_len;
new = kzalloc(size, GFP_KERNEL);
if (!new)
return -ENOMEM;
/* start filling the new info now */
/*
* pointers go into the block we allocated,
* memory is | beacon_data | head | tail |
*/
new->head = ((u8 *) new) + sizeof(*new);
new->tail = new->head + new_head_len;
new->head_len = new_head_len;
new->tail_len = new_tail_len;
if (csa) {
new->csa_current_counter = csa->count;
memcpy(new->csa_counter_offsets, csa->counter_offsets_beacon,
csa->n_counter_offsets_beacon *
sizeof(new->csa_counter_offsets[0]));
}
/* copy in head */
if (params->head)
memcpy(new->head, params->head, new_head_len);
else
memcpy(new->head, old->head, new_head_len);
/* copy in optional tail */
if (params->tail)
memcpy(new->tail, params->tail, new_tail_len);
else
if (old)
memcpy(new->tail, old->tail, new_tail_len);
err = ieee80211_set_probe_resp(sdata, params->probe_resp,
params->probe_resp_len, csa);
if (err < 0) {
kfree(new);
return err;
}
if (err == 0)
changed |= BSS_CHANGED_AP_PROBE_RESP;
if (params->ftm_responder != -1) {
sdata->vif.bss_conf.ftm_responder = params->ftm_responder;
err = ieee80211_set_ftm_responder_params(sdata,
params->lci,
params->lci_len,
params->civicloc,
params->civicloc_len);
if (err < 0) {
kfree(new);
return err;
}
changed |= BSS_CHANGED_FTM_RESPONDER;
}
rcu_assign_pointer(sdata->u.ap.beacon, new);
if (old)
kfree_rcu(old, rcu_head);
return changed;
}
|
Safe
|
[
"CWE-287"
] |
linux
|
3e493173b7841259a08c5c8e5cbe90adb349da7e
|
8.316692534828557e+37
| 98 |
mac80211: Do not send Layer 2 Update frame before authorization
The Layer 2 Update frame is used to update bridges when a station roams
to another AP even if that STA does not transmit any frames after the
reassociation. This behavior was described in IEEE Std 802.11F-2003 as
something that would happen based on MLME-ASSOCIATE.indication, i.e.,
before completing 4-way handshake. However, this IEEE trial-use
recommended practice document was published before RSN (IEEE Std
802.11i-2004) and as such, did not consider RSN use cases. Furthermore,
IEEE Std 802.11F-2003 was withdrawn in 2006 and as such, has not been
maintained and should not be used anymore.
Sending out the Layer 2 Update frame immediately after association is
fine for open networks (and also when using SAE, FT protocol, or FILS
authentication when the station is actually authenticated by the time
association completes). However, it is not appropriate for cases where
RSN is used with PSK or EAP authentication since the station is actually
fully authenticated only once the 4-way handshake completes after
authentication and attackers might be able to use the unauthenticated
triggering of Layer 2 Update frame transmission to disrupt bridge
behavior.
Fix this by postponing transmission of the Layer 2 Update frame from
station entry addition to the point when the station entry is marked
authorized. Similarly, send out the VLAN binding update only if the STA
entry has already been authorized.
Signed-off-by: Jouni Malinen <jouni@codeaurora.org>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb, int k, unsigned int size)
{
u8 *ptr = NULL;
if (k >= SKF_NET_OFF)
ptr = skb_network_header(skb) + k - SKF_NET_OFF;
else if (k >= SKF_LL_OFF)
ptr = skb_mac_header(skb) + k - SKF_LL_OFF;
if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb))
return ptr;
return NULL;
}
|
Safe
|
[
"CWE-416",
"CWE-125",
"CWE-189"
] |
linux
|
05ab8f2647e4221cbdb3856dd7d32bd5407316b3
|
1.288851392224009e+38
| 13 |
filter: prevent nla extensions to peek beyond the end of the message
The BPF_S_ANC_NLATTR and BPF_S_ANC_NLATTR_NEST extensions fail to check
for a minimal message length before testing the supplied offset to be
within the bounds of the message. This allows the subtraction of the nla
header to underflow and therefore -- as the data type is unsigned --
allowing far too big offset and length values for the search of the
netlink attribute.
The remainder calculation for the BPF_S_ANC_NLATTR_NEST extension is
also wrong. It has the minuend and subtrahend mixed up, therefore
calculates a huge length value, allowing to overrun the end of the
message while looking for the netlink attribute.
The following three BPF snippets will trigger the bugs when attached to
a UNIX datagram socket and parsing a message with length 1, 2 or 3.
,-[ PoC for missing size check in BPF_S_ANC_NLATTR ]--
| ld #0x87654321
| ldx #42
| ld #nla
| ret a
`---
,-[ PoC for the same bug in BPF_S_ANC_NLATTR_NEST ]--
| ld #0x87654321
| ldx #42
| ld #nlan
| ret a
`---
,-[ PoC for wrong remainder calculation in BPF_S_ANC_NLATTR_NEST ]--
| ; (needs a fake netlink header at offset 0)
| ld #0
| ldx #42
| ld #nlan
| ret a
`---
Fix the first issue by ensuring the message length fulfills the minimal
size constrains of a nla header. Fix the second bug by getting the math
for the remainder calculation right.
Fixes: 4738c1db15 ("[SKFILTER]: Add SKF_ADF_NLATTR instruction")
Fixes: d214c7537b ("filter: add SKF_AD_NLATTR_NEST to look for nested..")
Cc: Patrick McHardy <kaber@trash.net>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
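The first fix described above is a minimal-length check: make sure the packet is at least one nlattr header long before computing "len - offset", so the unsigned subtraction cannot underflow. A standalone sketch (not the kernel filter code):
#include <stddef.h>
#include <stdint.h>
struct nlattr_hdr { uint16_t nla_len; uint16_t nla_type; };
static int nla_search_ok(size_t pkt_len, size_t offset)
{
	if (pkt_len < sizeof(struct nlattr_hdr))
		return 0;     /* too short for any attribute: would underflow */
	if (offset > pkt_len - sizeof(struct nlattr_hdr))
		return 0;     /* offset beyond the searchable area */
	return 1;
}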
static void set_signals_client(void)
{
sigset_t sigs;
struct sigaction sa;
sigfillset(&sigs);
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART;
sa.sa_handler = SIG_IGN;
(void) sigaction(SIGPIPE, &sa, NULL);
(void) sigaction(SIGURG, &sa, NULL);
#ifdef SIGIO
(void) sigaction(SIGIO, &sa, NULL);
#endif
sa.sa_handler = SIG_DFL;
sigdelset(&sigs, SIGCHLD);
(void) sigaction(SIGCHLD, &sa, NULL);
#ifdef SIGFPE
(void) sigaction(SIGFPE, &sa, NULL);
sigdelset(&sigs, SIGFPE);
#endif
sa.sa_flags = 0;
sa.sa_handler = sigalarm;
sigdelset(&sigs, SIGALRM);
(void) sigaction(SIGALRM, &sa, NULL);
sa.sa_handler = sigterm_client;
sigdelset(&sigs, SIGTERM);
(void) sigaction(SIGTERM, &sa, NULL);
sigdelset(&sigs, SIGHUP);
(void) sigaction(SIGHUP, &sa, NULL);
sigdelset(&sigs, SIGQUIT);
(void) sigaction(SIGQUIT, &sa, NULL);
sigdelset(&sigs, SIGINT);
(void) sigaction(SIGINT, &sa, NULL);
#ifdef SIGXCPU
sigdelset(&sigs, SIGXCPU);
(void) sigaction(SIGXCPU, &sa, NULL);
#endif
(void) sigprocmask(SIG_SETMASK, &sigs, NULL);
}
|
Safe
|
[
"CWE-434"
] |
pure-ftpd
|
37ad222868e52271905b94afea4fc780d83294b4
|
1.6518562588266972e+38
| 45 |
Initialize the max upload file size when quotas are enabled
Due to an unwanted check, files causing the quota to be exceeded
were deleted after the upload, but not during the upload.
The bug was introduced in 2009 in version 1.0.23
Spotted by @DroidTest, thanks!
| 0 |
void close()
{
if (m_file) {
if (m_file != stdout)
fclose(m_file);
else
fflush(m_file);
}
m_file= NULL;
}
|
Safe
|
[
"CWE-284",
"CWE-295"
] |
mysql-server
|
3bd5589e1a5a93f9c224badf983cd65c45215390
|
7.225313609155976e+37
| 10 |
WL#6791 : Redefine client --ssl option to imply enforced encryption
# Changed the meaning of the --ssl=1 option of all client binaries
to mean force ssl, not try ssl and fail over to unencrypted
# Added a new MYSQL_OPT_SSL_ENFORCE mysql_options()
option to specify that an ssl connection is required.
# Added a new macro SSL_SET_OPTIONS() to the client
SSL handling headers that sets all the relevant SSL options at
once.
# Revamped all of the current native clients to use the new macro
# Removed some Windows line endings.
# Added proper handling of the new option into the ssl helper
headers.
# If SSL is mandatory assume that the media is secure enough
for the sha256 plugin to do unencrypted password exchange even
before establishing a connection.
# Set the default ssl cipher to DHE-RSA-AES256-SHA if none is
specified.
# updated test cases that require a non-default cipher to spawn
a mysql command line tool binary since mysqltest has no support
for specifying ciphers.
# updated the replication slave connection code to always enforce
SSL if any of the SSL config options is present.
# test cases added and updated.
# added a mysql_get_option() API to return mysql_options()
values. Used the new API inside the sha256 plugin.
# Fixed compilation warnings because of unused variables.
# Fixed test failures (mysql_ssl and bug13115401)
# Fixed whitespace issues.
# Fully implemented the mysql_get_option() function.
# Added a test case for mysql_get_option()
# fixed some trailing whitespace issues
# fixed some uint/int warnings in mysql_client_test.c
# removed shared memory option from non-windows get_options
tests
# moved MYSQL_OPT_LOCAL_INFILE to the uint options
| 0 |
gst_asf_demux_send_event_unlocked (GstASFDemux * demux, GstEvent * event)
{
gboolean ret = TRUE;
gint i;
GST_DEBUG_OBJECT (demux, "sending %s event to all source pads",
GST_EVENT_TYPE_NAME (event));
for (i = 0; i < demux->num_streams; ++i) {
gst_event_ref (event);
ret &= gst_pad_push_event (demux->stream[i].pad, event);
}
gst_event_unref (event);
return ret;
}
|
Safe
|
[
"CWE-125",
"CWE-787"
] |
gst-plugins-ugly
|
d21017b52a585f145e8d62781bcc1c5fefc7ee37
|
1.1437214008372674e+37
| 15 |
asfdemux: Check that we have enough data available before parsing bool/uint extended content descriptors
https://bugzilla.gnome.org/show_bug.cgi?id=777955
| 0 |
void MonClient::send_log(bool flush)
{
if (log_client) {
Message *lm = log_client->get_mon_log_message(flush);
if (lm)
_send_mon_message(lm);
more_log_pending = log_client->are_pending();
}
}
|
Safe
|
[
"CWE-294"
] |
ceph
|
2927fd91d41e505237cc73f9700e5c6a63e5cb4f
|
2.6242149471826527e+38
| 9 |
mon/MonClient: bring back CEPHX_V2 authorizer challenges
Commit c58c5754dfd2 ("msg/async/ProtocolV1: use AuthServer and
AuthClient") introduced a backwards compatibility issue into msgr1.
To fix it, commit 321548010578 ("mon/MonClient: skip CEPHX_V2
challenge if client doesn't support it") set out to skip authorizer
challenges for peers that don't support CEPHX_V2. However, it
made it so that authorizer challenges are skipped for all peers in
both msgr1 and msgr2 cases, effectively disabling the protection
against replay attacks that was put in place in commit f80b848d3f83
("auth/cephx: add authorizer challenge", CVE-2018-1128).
This is because con->get_features() always returns 0 at that
point. In msgr1 case, the peer shares its features along with the
authorizer, but while they are available in connect_msg.features they
aren't assigned to con until ProtocolV1::open(). In msgr2 case, the
peer doesn't share its features until much later (in CLIENT_IDENT
frame, i.e. after the authentication phase). The result is that
!CEPHX_V2 branch is taken in all cases and replay attack protection
is lost.
Only clusters with cephx_service_require_version set to 2 on the
service daemons would not be silently downgraded. But, since the
default is 1 and there are no reports of looping on BADAUTHORIZER
faults, I'm pretty sure that no one has ever done that. Note that
cephx_require_version set to 2 would have no effect even though it
is supposed to be stronger than cephx_service_require_version
because MonClient::handle_auth_request() didn't check it.
To fix:
- for msgr1, check connect_msg.features (as was done before commit
c58c5754dfd2) and challenge if CEPHX_V2 is supported. Together
with two preceding patches that resurrect proper cephx_* option
handling in msgr1, this covers both "I want old clients to work"
and "I wish to require better authentication" use cases.
- for msgr2, don't check anything and always challenge. CEPHX_V2
predates msgr2, anyone speaking msgr2 must support it.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
(cherry picked from commit 4a82c72e3bdddcb625933e83af8b50a444b961f1)
Conflicts:
src/msg/async/ProtocolV1.cc [ commit c58c5754dfd2
("msg/async/ProtocolV1: use AuthServer and AuthClient") not
in nautilus. This means that only msgr2 is affected, so drop
ProtocolV1.cc hunk. As a result, skip_authorizer_challenge is
never set, but this is fine because msgr1 still uses old ms_*
auth methods and tests CEPHX_V2 appropriately. ]
| 0 |
static int route4_set_parms(struct net *net, struct tcf_proto *tp,
unsigned long base, struct route4_filter *f,
u32 handle, struct route4_head *head,
struct nlattr **tb, struct nlattr *est, int new,
bool ovr, struct netlink_ext_ack *extack)
{
u32 id = 0, to = 0, nhandle = 0x8000;
struct route4_filter *fp;
unsigned int h1;
struct route4_bucket *b;
int err;
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, true, extack);
if (err < 0)
return err;
if (tb[TCA_ROUTE4_TO]) {
if (new && handle & 0x8000)
return -EINVAL;
to = nla_get_u32(tb[TCA_ROUTE4_TO]);
if (to > 0xFF)
return -EINVAL;
nhandle = to;
}
if (tb[TCA_ROUTE4_FROM]) {
if (tb[TCA_ROUTE4_IIF])
return -EINVAL;
id = nla_get_u32(tb[TCA_ROUTE4_FROM]);
if (id > 0xFF)
return -EINVAL;
nhandle |= id << 16;
} else if (tb[TCA_ROUTE4_IIF]) {
id = nla_get_u32(tb[TCA_ROUTE4_IIF]);
if (id > 0x7FFF)
return -EINVAL;
nhandle |= (id | 0x8000) << 16;
} else
nhandle |= 0xFFFF << 16;
if (handle && new) {
nhandle |= handle & 0x7F00;
if (nhandle != handle)
return -EINVAL;
}
h1 = to_hash(nhandle);
b = rtnl_dereference(head->table[h1]);
if (!b) {
b = kzalloc(sizeof(struct route4_bucket), GFP_KERNEL);
if (b == NULL)
return -ENOBUFS;
rcu_assign_pointer(head->table[h1], b);
} else {
unsigned int h2 = from_hash(nhandle >> 16);
for (fp = rtnl_dereference(b->ht[h2]);
fp;
fp = rtnl_dereference(fp->next))
if (fp->handle == f->handle)
return -EEXIST;
}
if (tb[TCA_ROUTE4_TO])
f->id = to;
if (tb[TCA_ROUTE4_FROM])
f->id = to | id<<16;
else if (tb[TCA_ROUTE4_IIF])
f->iif = id;
f->handle = nhandle;
f->bkt = b;
f->tp = tp;
if (tb[TCA_ROUTE4_CLASSID]) {
f->res.classid = nla_get_u32(tb[TCA_ROUTE4_CLASSID]);
tcf_bind_filter(tp, &f->res, base);
}
return 0;
}
|
Safe
|
[
"CWE-416",
"CWE-200"
] |
linux
|
ef299cc3fa1a9e1288665a9fdc8bff55629fd359
|
1.201086607488058e+38
| 83 |
net_sched: cls_route: remove the right filter from hashtable
route4_change() allocates a new filter and copies values from
the old one. After the new filter is inserted into the hash
table, the old filter should be removed and freed, as the final
step of the update.
However, the current code mistakenly removes the new one. This
looks apparently wrong to me, and it causes double "free" and
use-after-free too, as reported by syzbot.
Reported-and-tested-by: syzbot+f9b32aaacd60305d9687@syzkaller.appspotmail.com
Reported-and-tested-by: syzbot+2f8c233f131943d6056d@syzkaller.appspotmail.com
Reported-and-tested-by: syzbot+9c2df9fd5e9445b74e01@syzkaller.appspotmail.com
Fixes: 1109c00547fc ("net: sched: RCU cls_route")
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
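The update sequence the fix restores can be sketched with a plain singly linked list: publish the new filter, then unlink and free the old one. Freeing the new entry instead, as the buggy code did, is exactly the double free / use-after-free described above. Illustrative sketch, not the cls_route hashtable code:
#include <stdlib.h>
struct filter { unsigned handle; struct filter *next; };
/* Replace 'oldf' (assumed present in the list) with 'newf',
 * then free the *old* entry. */
static void replace_filter(struct filter **head,
			   struct filter *oldf, struct filter *newf)
{
	struct filter **pp;
	newf->next = oldf->next;
	for (pp = head; *pp; pp = &(*pp)->next) {
		if (*pp == oldf) {
			*pp = newf;   /* splice in the new filter */
			break;
		}
	}
	free(oldf);                   /* not newf */
}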
onig_st_lookup_strend(hash_table_type* table, const UChar* str_key,
const UChar* end_key, hash_data_type *value)
{
st_str_end_key key;
key.s = (UChar* )str_key;
key.end = (UChar* )end_key;
return onig_st_lookup(table, (st_data_t )(&key), value);
}
|
Safe
|
[
"CWE-400",
"CWE-399",
"CWE-674"
] |
oniguruma
|
4097828d7cc87589864fecf452f2cd46c5f37180
|
1.4665960291477721e+38
| 10 |
fix #147: Stack Exhaustion Problem caused by some parsing functions in regcomp.c making recursive calls to themselves.
| 0 |
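The usual remedy for the stack-exhaustion problem named above is to thread a depth counter through the recursive parser and fail once a fixed limit is hit. A hedged sketch; the limit and names are illustrative, not oniguruma's regcomp.c:
#define PARSE_DEPTH_LIMIT 4096
static int parse_exp(const char **pp, const char *end, int depth)
{
	if (depth > PARSE_DEPTH_LIMIT)
		return -1;            /* fail instead of exhausting the stack */
	if (*pp < end && **pp == '(') {
		(*pp)++;
		return parse_exp(pp, end, depth + 1);  /* nested group */
	}
	return 0;
}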
MagickExport StringInfo *FileToStringInfo(const char *filename,
const size_t extent,ExceptionInfo *exception)
{
StringInfo
*string_info;
assert(filename != (const char *) NULL);
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",filename);
assert(exception != (ExceptionInfo *) NULL);
string_info=AcquireStringInfoContainer();
string_info->path=ConstantString(filename);
string_info->datum=(unsigned char *) FileToBlob(filename,extent,
&string_info->length,exception);
if (string_info->datum == (unsigned char *) NULL)
{
string_info=DestroyStringInfo(string_info);
return((StringInfo *) NULL);
}
return(string_info);
}
|
Safe
|
[
"CWE-190"
] |
ImageMagick
|
be90a5395695f0d19479a5d46b06c678be7f7927
|
3.380326013277189e+38
| 20 |
https://github.com/ImageMagick/ImageMagick/issues/1721
| 0 |
static int packet_mc_drop(struct sock *sk, struct packet_mreq_max *mreq)
{
struct packet_mclist *ml, **mlp;
rtnl_lock();
for (mlp = &pkt_sk(sk)->mclist; (ml = *mlp) != NULL; mlp = &ml->next) {
if (ml->ifindex == mreq->mr_ifindex &&
ml->type == mreq->mr_type &&
ml->alen == mreq->mr_alen &&
memcmp(ml->addr, mreq->mr_address, ml->alen) == 0) {
if (--ml->count == 0) {
struct net_device *dev;
*mlp = ml->next;
dev = __dev_get_by_index(sock_net(sk), ml->ifindex);
if (dev)
packet_dev_mc(dev, ml, -1);
kfree(ml);
}
rtnl_unlock();
return 0;
}
}
rtnl_unlock();
return -EADDRNOTAVAIL;
}
|
Safe
|
[
"CWE-909"
] |
linux-2.6
|
67286640f638f5ad41a946b9a3dc75327950248f
|
1.5778328248551242e+38
| 26 |
net: packet: fix information leak to userland
packet_getname_spkt() doesn't initialize all members of sa_data field of
sockaddr struct if strlen(dev->name) < 13. This structure is then copied
to userland. It leads to leaking of contents of kernel stack memory.
We have to fully fill sa_data with strncpy() instead of strlcpy().
The same with packet_getname(): it doesn't initialize sll_pkttype field of
sockaddr_ll. Set it to zero.
Signed-off-by: Vasiliy Kulikov <segooon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
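The leak fix described above comes down to never copying a partially initialized sockaddr to userland: zero the whole structure first (or copy with a padding-filling strncpy). A standalone sketch of the pattern:
#include <string.h>
struct sockaddr_demo {
	unsigned short family;
	char data[14];
};
static void fill_name(struct sockaddr_demo *out, const char *ifname)
{
	memset(out, 0, sizeof(*out));           /* no uninitialized bytes escape */
	out->family = 17;                       /* AF_PACKET */
	strncpy(out->data, ifname, sizeof(out->data) - 1);
}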
void trio_set_quote TRIO_ARGS2((ref, is_quote), trio_pointer_t ref, int is_quote)
{
if (is_quote)
((trio_reference_t*)ref)->parameter->flags |= FLAGS_QUOTE;
else
((trio_reference_t*)ref)->parameter->flags &= ~FLAGS_QUOTE;
}
|
Safe
|
[
"CWE-190",
"CWE-125"
] |
FreeRDP
|
05cd9ea2290d23931f615c1b004d4b2e69074e27
|
1.7680118690150275e+38
| 7 |
Fixed TrioParse and trio_length limts.
CVE-2020-4030 thanks to @antonio-morales for finding this.
| 0 |
static int ssl_socket_poll (CONNECTION* conn, time_t wait_secs)
{
sslsockdata *data = conn->sockdata;
if (!data)
return -1;
if (SSL_has_pending (data->ssl))
return 1;
else
return raw_socket_poll (conn, wait_secs);
}
|
Safe
|
[
"CWE-74"
] |
mutt
|
c547433cdf2e79191b15c6932c57f1472bfb5ff4
|
1.887386450479336e+38
| 12 |
Fix STARTTLS response injection attack.
Thanks again to Damian Poddebniak and Fabian Ising from the Münster
University of Applied Sciences for reporting this issue. Their
summary in ticket 248 states the issue clearly:
We found another STARTTLS-related issue in Mutt. Unfortunately, it
affects SMTP, POP3 and IMAP.
When the server responds with its "let's do TLS now message", e.g. A
OK begin TLS\r\n in IMAP or +OK begin TLS\r\n in POP3, Mutt will
also read any data after the \r\n and save it into some internal
buffer for later processing. This is problematic, because a MITM
attacker can inject arbitrary responses.
There is a nice blogpost by Wietse Venema about a "command
injection" in postfix (http://www.postfix.org/CVE-2011-0411.html).
What we have here is the problem in reverse, i.e. not a command
injection, but a "response injection."
This commit fixes the issue by clearing the CONNECTION input buffer in
mutt_ssl_starttls().
To make backporting this fix easier, the new functions only clear the
top-level CONNECTION buffer; they don't handle nested buffering in
mutt_zstrm.c or mutt_sasl.c. However both of those wrap the
connection *after* STARTTLS, so this is currently okay. mutt_tunnel.c
occurs before connecting, but it does not perform any nesting.
| 0 |
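The response-injection fix reduces to one action: after the server's "begin TLS" line, drop anything still sitting in the plaintext input buffer before the TLS handshake starts, so MITM-injected bytes never survive into the encrypted session. A sketch with illustrative field names, not mutt's CONNECTION layout:
struct conn_buf {
	char buf[4096];
	int used;   /* bytes buffered */
	int pos;    /* read cursor */
};
static void conn_clear_buffer(struct conn_buf *c)
{
	c->used = 0;   /* discard pre-TLS bytes a MITM may have injected */
	c->pos = 0;
}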
writev_bytes (GOutputStream * stream, GOutputVector * vectors, gint n_vectors,
gsize * bytes_written, gboolean block, GCancellable * cancellable)
{
gsize _bytes_written = 0;
gsize written;
GError *err = NULL;
GPollableReturn res = G_POLLABLE_RETURN_OK;
while (n_vectors > 0) {
if (block) {
if (G_UNLIKELY (!g_output_stream_writev (stream, vectors, n_vectors,
&written, cancellable, &err))) {
/* This will never return G_IO_ERROR_WOULD_BLOCK */
res = G_POLLABLE_RETURN_FAILED;
goto error;
}
} else {
res =
g_pollable_output_stream_writev_nonblocking (G_POLLABLE_OUTPUT_STREAM
(stream), vectors, n_vectors, &written, cancellable, &err);
if (res != G_POLLABLE_RETURN_OK) {
g_assert (written == 0);
goto error;
}
}
_bytes_written += written;
/* skip vectors that have been written in full */
while (written > 0 && written >= vectors[0].size) {
written -= vectors[0].size;
++vectors;
--n_vectors;
}
/* skip partially written vector data */
if (written > 0) {
vectors[0].size -= written;
vectors[0].buffer = ((guint8 *) vectors[0].buffer) + written;
}
}
*bytes_written = _bytes_written;
return GST_RTSP_OK;
/* ERRORS */
error:
{
*bytes_written = _bytes_written;
if (err)
GST_WARNING ("%s", err->message);
if (res == G_POLLABLE_RETURN_WOULD_BLOCK) {
g_assert (!err);
return GST_RTSP_EINTR;
} else if (g_error_matches (err, G_IO_ERROR, G_IO_ERROR_CANCELLED)) {
g_clear_error (&err);
return GST_RTSP_EINTR;
} else if (g_error_matches (err, G_IO_ERROR, G_IO_ERROR_TIMED_OUT)) {
g_clear_error (&err);
return GST_RTSP_ETIMEOUT;
} else if (G_UNLIKELY (written == 0)) {
g_clear_error (&err);
return GST_RTSP_EEOF;
}
g_clear_error (&err);
return GST_RTSP_ESYS;
}
}
|
Safe
|
[] |
gst-plugins-base
|
f672277509705c4034bc92a141eefee4524d15aa
|
4.433858891386166e+37
| 71 |
gstrtspconnection: Security loophole making heap overflow
The former code allowed an attacker to create a heap overflow by
sending a longer than allowed session id in a response and including a
semicolon to change the maximum length. With this change, the parser
will never go beyond 512 bytes.
| 0 |
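The bound the fix enforces can be sketched as a copy that stops at the fixed 512-byte field no matter what a ";timeout=" style parameter later claims. Illustrative names, not the GstRTSPConnection parser:
#include <string.h>
#define SESSION_ID_MAX 512
static void parse_session_id(char dst[SESSION_ID_MAX], const char *hdr)
{
	size_t i = 0;
	while (hdr[i] && hdr[i] != ';' && i < SESSION_ID_MAX - 1) {
		dst[i] = hdr[i];   /* never writes past the 512-byte field */
		i++;
	}
	dst[i] = '\0';
}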
void ceph_crypto_shutdown(void) {
unregister_key_type(&key_type_ceph);
}
|
Safe
|
[
"CWE-399"
] |
linux
|
a45f795c65b479b4ba107b6ccde29b896d51ee98
|
1.7947838458410816e+38
| 3 |
libceph: introduce ceph_crypt() for in-place en/decryption
Starting with 4.9, kernel stacks may be vmalloced and therefore not
guaranteed to be physically contiguous; the new CONFIG_VMAP_STACK
option is enabled by default on x86. This makes it invalid to use
on-stack buffers with the crypto scatterlist API, as sg_set_buf()
expects a logical address and won't work with vmalloced addresses.
There isn't a different (e.g. kvec-based) crypto API we could switch
net/ceph/crypto.c to and the current scatterlist.h API isn't getting
updated to accommodate this use case. Allocating a new header and
padding for each operation is a non-starter, so do the en/decryption
in-place on a single pre-assembled (header + data + padding) heap
buffer. This is explicitly supported by the crypto API:
"... the caller may provide the same scatter/gather list for the
plaintext and cipher text. After the completion of the cipher
operation, the plaintext data is replaced with the ciphertext data
in case of an encryption and vice versa for a decryption."
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Sage Weil <sage@redhat.com>
| 0 |
static int ext4_releasepage(struct page *page, gfp_t wait)
{
journal_t *journal = EXT4_JOURNAL(page->mapping->host);
trace_ext4_releasepage(page);
/* Page has dirty journalled data -> cannot release */
if (PageChecked(page))
return 0;
if (journal)
return jbd2_journal_try_to_free_buffers(journal, page);
else
return try_to_free_buffers(page);
}
|
Safe
|
[
"CWE-703"
] |
linux
|
ce9f24cccdc019229b70a5c15e2b09ad9c0ab5d1
|
1.2542676466806851e+38
| 14 |
ext4: check journal inode extents more carefully
Currently, system zones just track ranges of block, that are "important"
fs metadata (bitmaps, group descriptors, journal blocks, etc.). This
however complicates how extent tree (or indirect blocks) can be checked
for inodes that actually track such metadata - currently the journal
inode but arguably we should be treating quota files or resize inode
similarly. We cannot run __ext4_ext_check() on such metadata inodes when
loading their extents as that would immediately trigger the validity
checks and so we just hack around that and special-case the journal
inode. This however leads to a situation that a journal inode which has
extent tree of depth at least one can have invalid extent tree that gets
unnoticed until ext4_cache_extents() crashes.
To overcome this limitation, track inode number each system zone belongs
to (0 is used for zones not belonging to any inode). We can then verify
inode number matches the expected one when verifying extent tree and
thus avoid the false errors. With this there's no need to
special-case journal inode during extent tree checking anymore so remove
it.
Fixes: 0a944e8a6c66 ("ext4: don't perform block validity checks on the journal inode")
Reported-by: Wolfgang Frisch <wolfgang.frisch@suse.com>
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20200728130437.7804-4-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
| 0 |
random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
{
ssize_t n, retval = 0, count = 0;
if (nbytes == 0)
return 0;
while (nbytes > 0) {
n = nbytes;
if (n > SEC_XFER_SIZE)
n = SEC_XFER_SIZE;
DEBUG_ENT("reading %d bits\n", n*8);
n = extract_entropy_user(&blocking_pool, buf, n);
DEBUG_ENT("read got %d bits (%d still needed)\n",
n*8, (nbytes-n)*8);
if (n == 0) {
if (file->f_flags & O_NONBLOCK) {
retval = -EAGAIN;
break;
}
DEBUG_ENT("sleeping?\n");
wait_event_interruptible(random_read_wait,
input_pool.entropy_count >=
random_read_wakeup_thresh);
DEBUG_ENT("awake\n");
if (signal_pending(current)) {
retval = -ERESTARTSYS;
break;
}
continue;
}
if (n < 0) {
retval = n;
break;
}
count += n;
buf += n;
nbytes -= n;
break; /* This break makes the device work */
/* like a named pipe */
}
/*
* If we gave the user some bytes, update the access time.
*/
if (count)
file_accessed(file);
return (count ? count : retval);
}
|
Safe
|
[
"CWE-310"
] |
linux-2.6
|
8a0a9bd4db63bc45e3017bedeafbd88d0eb84d02
|
1.9384829661116584e+38
| 60 |
random: make get_random_int() more random
It's a really simple patch that basically just open-codes the current
"secure_ip_id()" call, but when open-coding it we now use a _static_
hashing area, so that it gets updated every time.
And to make sure somebody can't just start from the same original seed of
all-zeroes, and then do the "half_md4_transform()" over and over until
they get the same sequence as the kernel has, each iteration also mixes in
the same old "current->pid + jiffies" we used - so we should now have a
regular strong pseudo-number generator, but we also have one that doesn't
have a single seed.
Note: the "pid + jiffies" is just meant to be a tiny tiny bit of noise. It
has no real meaning. It could be anything. I just picked the previous
seed, it's just that now we keep the state in between calls and that will
feed into the next result, and that should make all the difference.
I made that hash be a per-cpu data just to avoid cache-line ping-pong:
having multiple CPU's write to the same data would be fine for randomness,
and add yet another layer of chaos to it, but since get_random_int() is
supposed to be a fast interface I did it that way instead. I considered
using "__raw_get_cpu_var()" to avoid any preemption overhead while still
getting the hash be _mostly_ ping-pong free, but in the end good taste won
out.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
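The construction described above keeps hash state that feeds forward between calls, mixing in "pid + jiffies" each time, so there is no fixed seed an attacker can replay. A toy sketch of the shape of it: the kernel keeps this state per-CPU and uses half_md4_transform(); hash_mix() below is only a stand-in mixer, not a cryptographic one.
#include <stdint.h>
static uint32_t hash_state[4];   /* survives across calls: no fixed seed */
static uint32_t hash_mix(uint32_t s[4], uint32_t in)
{
	s[0] += in;
	s[1] ^= s[0] << 7;
	s[2] += s[1] ^ 0x9e3779b9u;  /* arbitrary odd constant */
	s[3] ^= s[2] >> 3;
	return s[0] ^ s[3];
}
static uint32_t get_random_int_demo(uint32_t pid, uint32_t jiffies)
{
	return hash_mix(hash_state, pid + jiffies);
}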
int vc4_queue_seqno_cb(struct drm_device *dev,
struct vc4_seqno_cb *cb, uint64_t seqno,
void (*func)(struct vc4_seqno_cb *cb))
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
int ret = 0;
unsigned long irqflags;
cb->func = func;
INIT_WORK(&cb->work, vc4_seqno_cb_work);
spin_lock_irqsave(&vc4->job_lock, irqflags);
if (seqno > vc4->finished_seqno) {
cb->seqno = seqno;
list_add_tail(&cb->work.entry, &vc4->seqno_cb_list);
} else {
schedule_work(&cb->work);
}
spin_unlock_irqrestore(&vc4->job_lock, irqflags);
return ret;
}
|
Safe
|
[
"CWE-190",
"CWE-703"
] |
linux
|
0f2ff82e11c86c05d051cae32b58226392d33bbf
|
8.443835721423943e+37
| 22 |
drm/vc4: Fix an integer overflow in temporary allocation layout.
We copy the unvalidated ioctl arguments from the user into kernel
temporary memory to run the validation from, to avoid a race where the
user updates the unvalidate contents in between validating them and
copying them into the validated BO.
However, in setting up the layout of the kernel side, we failed to
check one of the additions (the roundup() for shader_rec_offset)
against integer overflow, allowing a nearly MAX_UINT value of
bin_cl_size to cause us to under-allocate the temporary space that we
then copy_from_user into.
Reported-by: Murray McAllister <murray.mcallister@insomniasec.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: d5b1a78a772f ("drm/vc4: Add support for drawing 3D frames.")
| 0 |
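The overflow check the fix adds amounts to validating each step of the temporary-allocation layout before using it, instead of letting roundup() on a near-UINT_MAX size wrap past the allocation. A sketch with illustrative names; 'align' is assumed to be a power of two:
#include <stdint.h>
#include <stdbool.h>
static bool layout_add(uint32_t *offset, uint32_t size, uint32_t align)
{
	uint32_t start = (*offset + (align - 1)) & ~(align - 1);
	if (start < *offset)            /* roundup wrapped around */
		return false;
	if (size > UINT32_MAX - start)  /* start + size would wrap */
		return false;
	*offset = start + size;
	return true;
}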
};
static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
{
	return (struct f2fs_stat_info *)sbi->stat_info;
}
|
Safe
|
[
"CWE-476"
] |
linux
|
4969c06a0d83c9c3dc50b8efcdc8eeedfce896f6
|
1.9486221661400026e+38
| 4 |
f2fs: support swap file w/ DIO
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
| 0 |
make_join_statistics(JOIN *join, List<TABLE_LIST> &tables_list,
DYNAMIC_ARRAY *keyuse_array)
{
int error= 0;
TABLE *UNINIT_VAR(table); /* inited in all loops */
uint i,table_count,const_count,key;
table_map found_const_table_map, all_table_map;
key_map const_ref, eq_part;
bool has_expensive_keyparts;
TABLE **table_vector;
JOIN_TAB *stat,*stat_end,*s,**stat_ref, **stat_vector;
KEYUSE *keyuse,*start_keyuse;
table_map outer_join=0;
table_map no_rows_const_tables= 0;
SARGABLE_PARAM *sargables= 0;
List_iterator<TABLE_LIST> ti(tables_list);
TABLE_LIST *tables;
DBUG_ENTER("make_join_statistics");
table_count=join->table_count;
/*
best_positions is ok to allocate with alloc() as we copy things to it with
memcpy()
*/
if (!multi_alloc_root(join->thd->mem_root,
&stat, sizeof(JOIN_TAB)*(table_count),
&stat_ref, sizeof(JOIN_TAB*)* MAX_TABLES,
&stat_vector, sizeof(JOIN_TAB*)* (table_count +1),
&table_vector, sizeof(TABLE*)*(table_count*2),
&join->positions, sizeof(POSITION)*(table_count + 1),
&join->best_positions,
sizeof(POSITION)*(table_count + 1),
NullS))
DBUG_RETURN(1);
/* The following should be optimized to only clear critical things */
bzero((void*)stat, sizeof(JOIN_TAB)* table_count);
/* Initialize POSITION objects */
for (i=0 ; i <= table_count ; i++)
(void) new ((char*) (join->positions + i)) POSITION;
join->best_ref= stat_vector;
stat_end=stat+table_count;
found_const_table_map= all_table_map=0;
const_count=0;
for (s= stat, i= 0; (tables= ti++); s++, i++)
{
TABLE_LIST *embedding= tables->embedding;
stat_vector[i]=s;
s->keys.init();
s->const_keys.init();
s->checked_keys.init();
s->needed_reg.init();
table_vector[i]=s->table=table=tables->table;
s->tab_list= tables;
table->pos_in_table_list= tables;
error= tables->fetch_number_of_rows();
set_statistics_for_table(join->thd, table);
bitmap_clear_all(&table->cond_set);
#ifdef WITH_PARTITION_STORAGE_ENGINE
const bool all_partitions_pruned_away= table->all_partitions_pruned_away;
#else
const bool all_partitions_pruned_away= FALSE;
#endif
DBUG_EXECUTE_IF("bug11747970_raise_error",
{ join->thd->set_killed(KILL_QUERY_HARD); });
if (error)
{
table->file->print_error(error, MYF(0));
goto error;
}
table->quick_keys.clear_all();
table->intersect_keys.clear_all();
table->reginfo.join_tab=s;
table->reginfo.not_exists_optimize=0;
bzero((char*) table->const_key_parts, sizeof(key_part_map)*table->s->keys);
all_table_map|= table->map;
s->preread_init_done= FALSE;
s->join=join;
s->dependent= tables->dep_tables;
if (tables->schema_table)
table->file->stats.records= table->used_stat_records= 2;
table->quick_condition_rows= table->stat_records();
s->on_expr_ref= &tables->on_expr;
if (*s->on_expr_ref)
{
/* s is the only inner table of an outer join */
if (!table->is_filled_at_execution() &&
((!table->file->stats.records &&
(table->file->ha_table_flags() & HA_STATS_RECORDS_IS_EXACT)) ||
all_partitions_pruned_away) && !embedding)
{ // Empty table
s->dependent= 0; // Ignore LEFT JOIN depend.
no_rows_const_tables |= table->map;
set_position(join,const_count++,s,(KEYUSE*) 0);
continue;
}
outer_join|= table->map;
s->embedding_map= 0;
for (;embedding; embedding= embedding->embedding)
s->embedding_map|= embedding->nested_join->nj_map;
continue;
}
if (embedding)
{
/* s belongs to a nested join, maybe to several embedded joins */
s->embedding_map= 0;
bool inside_an_outer_join= FALSE;
do
{
/*
If this is a semi-join nest, skip it, and proceed upwards. Maybe
we're in some outer join nest
*/
if (embedding->sj_on_expr)
{
embedding= embedding->embedding;
continue;
}
inside_an_outer_join= TRUE;
NESTED_JOIN *nested_join= embedding->nested_join;
s->embedding_map|=nested_join->nj_map;
s->dependent|= embedding->dep_tables;
embedding= embedding->embedding;
outer_join|= nested_join->used_tables;
}
while (embedding);
if (inside_an_outer_join)
continue;
}
if (!table->is_filled_at_execution() &&
(table->s->system ||
(table->file->stats.records <= 1 &&
(table->file->ha_table_flags() & HA_STATS_RECORDS_IS_EXACT)) ||
all_partitions_pruned_away) &&
!s->dependent &&
!table->fulltext_searched && !join->no_const_tables)
{
set_position(join,const_count++,s,(KEYUSE*) 0);
no_rows_const_tables |= table->map;
}
/* SJ-Materialization handling: */
if (table->pos_in_table_list->jtbm_subselect &&
table->pos_in_table_list->jtbm_subselect->is_jtbm_const_tab)
{
set_position(join,const_count++,s,(KEYUSE*) 0);
no_rows_const_tables |= table->map;
}
}
stat_vector[i]=0;
join->outer_join=outer_join;
if (join->outer_join)
{
/*
Build transitive closure for relation 'to be dependent on'.
This will speed up the plan search for many cases with outer joins,
as well as allow us to catch illegal cross references.
Warshall's algorithm is used to build the transitive closure.
As we use bitmaps to represent the relation the complexity
of the algorithm is O((number of tables)^2).
The classic form of the Warshall's algorithm would look like:
for (i= 0; i < table_count; i++)
{
for (j= 0; j < table_count; j++)
{
for (k= 0; k < table_count; k++)
{
if (bitmap_is_set(stat[j].dependent, i) &&
bitmap_is_set(stat[i].dependent, k))
bitmap_set_bit(stat[j].dependent, k);
}
}
}
*/
for (s= stat ; s < stat_end ; s++)
{
table= s->table;
for (JOIN_TAB *t= stat ; t < stat_end ; t++)
{
if (t->dependent & table->map)
t->dependent |= table->reginfo.join_tab->dependent;
}
if (outer_join & s->table->map)
s->table->maybe_null= 1;
}
/* Catch illegal cross references for outer joins */
for (i= 0, s= stat ; i < table_count ; i++, s++)
{
if (s->dependent & s->table->map)
{
join->table_count=0; // Don't use join->table
my_message(ER_WRONG_OUTER_JOIN,
ER_THD(join->thd, ER_WRONG_OUTER_JOIN), MYF(0));
goto error;
}
s->key_dependent= s->dependent;
}
}
if (join->conds || outer_join)
{
if (update_ref_and_keys(join->thd, keyuse_array, stat, join->table_count,
join->conds, ~outer_join, join->select_lex, &sargables))
goto error;
/*
Keyparts without prefixes may be useful if this JOIN is a subquery, and
if the subquery may be executed via the IN-EXISTS strategy.
*/
bool skip_unprefixed_keyparts=
!(join->is_in_subquery() &&
((Item_in_subselect*)join->unit->item)->test_strategy(SUBS_IN_TO_EXISTS));
if (keyuse_array->elements &&
sort_and_filter_keyuse(join->thd, keyuse_array,
skip_unprefixed_keyparts))
goto error;
DBUG_EXECUTE("opt", print_keyuse_array(keyuse_array););
}
join->const_table_map= no_rows_const_tables;
join->const_tables= const_count;
eliminate_tables(join);
join->const_table_map &= ~no_rows_const_tables;
const_count= join->const_tables;
found_const_table_map= join->const_table_map;
/* Read tables with 0 or 1 rows (system tables) */
for (POSITION *p_pos=join->positions, *p_end=p_pos+const_count;
p_pos < p_end ;
p_pos++)
{
s= p_pos->table;
if (! (s->table->map & join->eliminated_tables))
{
int tmp;
s->type=JT_SYSTEM;
join->const_table_map|=s->table->map;
if ((tmp=join_read_const_table(join->thd, s, p_pos)))
{
if (tmp > 0)
goto error; // Fatal error
}
else
{
found_const_table_map|= s->table->map;
s->table->pos_in_table_list->optimized_away= TRUE;
}
}
}
/* loop until no more const tables are found */
int ref_changed;
do
{
ref_changed = 0;
more_const_tables_found:
/*
We only have to loop from stat_vector + const_count as
set_position() will move all const_tables first in stat_vector
*/
for (JOIN_TAB **pos=stat_vector+const_count ; (s= *pos) ; pos++)
{
table=s->table;
if (table->is_filled_at_execution())
continue;
/*
If equi-join condition by a key is null rejecting and after a
substitution of a const table the key value happens to be null
then we can state that there are no matches for this equi-join.
*/
if ((keyuse= s->keyuse) && *s->on_expr_ref && !s->embedding_map &&
!(table->map & join->eliminated_tables))
{
/*
When performing an outer join operation if there are no matching rows
for the single row of the outer table all the inner tables are to be
null complemented and thus considered as constant tables.
Here we apply this consideration to the case of outer join operations
with a single inner table only because the case with nested tables
would require a more thorough analysis.
TODO. Apply single row substitution to null complemented inner tables
for nested outer join operations.
*/
while (keyuse->table == table)
{
if (!keyuse->is_for_hash_join() &&
!(keyuse->val->used_tables() & ~join->const_table_map) &&
keyuse->val->is_null() && keyuse->null_rejecting)
{
s->type= JT_CONST;
mark_as_null_row(table);
found_const_table_map|= table->map;
join->const_table_map|= table->map;
set_position(join,const_count++,s,(KEYUSE*) 0);
goto more_const_tables_found;
}
keyuse++;
}
}
if (s->dependent) // If dependent on some table
{
// All dep. must be constants
if (s->dependent & ~(found_const_table_map))
continue;
if (table->file->stats.records <= 1L &&
(table->file->ha_table_flags() & HA_STATS_RECORDS_IS_EXACT) &&
!table->pos_in_table_list->embedding &&
!((outer_join & table->map) &&
(*s->on_expr_ref)->is_expensive()))
{ // system table
int tmp= 0;
s->type=JT_SYSTEM;
join->const_table_map|=table->map;
set_position(join,const_count++,s,(KEYUSE*) 0);
if ((tmp= join_read_const_table(join->thd, s, join->positions+const_count-1)))
{
if (tmp > 0)
goto error; // Fatal error
}
else
found_const_table_map|= table->map;
continue;
}
}
/* check if table can be read by key or table only uses const refs */
if ((keyuse=s->keyuse))
{
s->type= JT_REF;
while (keyuse->table == table)
{
if (keyuse->is_for_hash_join())
{
keyuse++;
continue;
}
start_keyuse=keyuse;
key=keyuse->key;
s->keys.set_bit(key); // TODO: remove this ?
const_ref.clear_all();
eq_part.clear_all();
has_expensive_keyparts= false;
do
{
if (keyuse->val->type() != Item::NULL_ITEM &&
!keyuse->optimize &&
keyuse->keypart != FT_KEYPART)
{
if (!((~found_const_table_map) & keyuse->used_tables))
{
const_ref.set_bit(keyuse->keypart);
if (keyuse->val->is_expensive())
has_expensive_keyparts= true;
}
eq_part.set_bit(keyuse->keypart);
}
keyuse++;
} while (keyuse->table == table && keyuse->key == key);
TABLE_LIST *embedding= table->pos_in_table_list->embedding;
/*
TODO (low priority): currently we ignore the const tables that
are within a semi-join nest which is within an outer join nest.
The effect of this is that we don't do const substitution for
such tables.
*/
KEY *keyinfo= table->key_info + key;
uint key_parts= table->actual_n_key_parts(keyinfo);
if (eq_part.is_prefix(key_parts) &&
!table->fulltext_searched &&
(!embedding || (embedding->sj_on_expr && !embedding->embedding)))
{
key_map base_part, base_const_ref, base_eq_part;
base_part.set_prefix(keyinfo->user_defined_key_parts);
base_const_ref= const_ref;
base_const_ref.intersect(base_part);
base_eq_part= eq_part;
base_eq_part.intersect(base_part);
if (table->actual_key_flags(keyinfo) & HA_NOSAME)
{
if (base_const_ref == base_eq_part &&
!has_expensive_keyparts &&
!((outer_join & table->map) &&
(*s->on_expr_ref)->is_expensive()))
{ // Found everything for ref.
int tmp;
ref_changed = 1;
s->type= JT_CONST;
join->const_table_map|=table->map;
set_position(join,const_count++,s,start_keyuse);
if (create_ref_for_key(join, s, start_keyuse, FALSE,
found_const_table_map))
goto error;
if ((tmp=join_read_const_table(join->thd, s,
join->positions+const_count-1)))
{
if (tmp > 0)
goto error; // Fatal error
}
else
found_const_table_map|= table->map;
break;
}
}
else if (base_const_ref == base_eq_part)
s->const_keys.set_bit(key);
}
}
}
}
} while (ref_changed);
join->sort_by_table= get_sort_by_table(join->order, join->group_list,
join->select_lex->leaf_tables,
join->const_table_map);
/*
Update info on indexes that can be used for search lookups as
reading const tables may have added new sargable predicates.
*/
if (const_count && sargables)
{
for( ; sargables->field ; sargables++)
{
Field *field= sargables->field;
JOIN_TAB *join_tab= field->table->reginfo.join_tab;
key_map possible_keys= field->key_start;
possible_keys.intersect(field->table->keys_in_use_for_query);
bool is_const= 1;
for (uint j=0; j < sargables->num_values; j++)
is_const&= sargables->arg_value[j]->const_item();
if (is_const)
join_tab[0].const_keys.merge(possible_keys);
}
}
join->impossible_where= false;
if (join->conds && const_count)
{
Item* &conds= join->conds;
COND_EQUAL *orig_cond_equal = join->cond_equal;
conds->update_used_tables();
conds= conds->remove_eq_conds(join->thd, &join->cond_value, true);
if (conds && conds->type() == Item::COND_ITEM &&
((Item_cond*) conds)->functype() == Item_func::COND_AND_FUNC)
join->cond_equal= &((Item_cond_and*) conds)->m_cond_equal;
join->select_lex->where= conds;
if (join->cond_value == Item::COND_FALSE)
{
join->impossible_where= true;
conds= new (join->thd->mem_root) Item_int(join->thd, (longlong) 0, 1);
}
join->cond_equal= NULL;
if (conds)
{
if (conds->type() == Item::COND_ITEM &&
((Item_cond*) conds)->functype() == Item_func::COND_AND_FUNC)
join->cond_equal= (&((Item_cond_and *) conds)->m_cond_equal);
else if (conds->type() == Item::FUNC_ITEM &&
((Item_func*) conds)->functype() == Item_func::MULT_EQUAL_FUNC)
{
if (!join->cond_equal)
join->cond_equal= new COND_EQUAL;
join->cond_equal->current_level.empty();
join->cond_equal->current_level.push_back((Item_equal*) conds,
join->thd->mem_root);
}
}
if (orig_cond_equal != join->cond_equal)
{
/*
If join->cond_equal has changed all references to it from COND_EQUAL
objects associated with ON expressions must be updated.
*/
for (JOIN_TAB **pos=stat_vector+const_count ; (s= *pos) ; pos++)
{
if (*s->on_expr_ref && s->cond_equal &&
s->cond_equal->upper_levels == orig_cond_equal)
s->cond_equal->upper_levels= join->cond_equal;
}
}
}
/* Calc how many (possible) matched records in each table */
for (s=stat ; s < stat_end ; s++)
{
s->startup_cost= 0;
if (s->type == JT_SYSTEM || s->type == JT_CONST)
{
/* Only one matching row */
s->found_records= s->records= 1;
s->read_time=1.0;
s->worst_seeks=1.0;
continue;
}
/* Approximate found rows and time to read them */
if (s->table->is_filled_at_execution())
{
get_delayed_table_estimates(s->table, &s->records, &s->read_time,
&s->startup_cost);
s->found_records= s->records;
table->quick_condition_rows=s->records;
}
else
{
s->scan_time();
}
/*
Set a max range of how many seeks we can expect when using keys
      This can't be too high, as otherwise we are likely to use a
table scan.
*/
s->worst_seeks= MY_MIN((double) s->found_records / 10,
(double) s->read_time*3);
if (s->worst_seeks < 2.0) // Fix for small tables
s->worst_seeks=2.0;
/*
Add to stat->const_keys those indexes for which all group fields or
all select distinct fields participate in one index.
*/
add_group_and_distinct_keys(join, s);
s->table->cond_selectivity= 1.0;
/*
Perform range analysis if there are keys it could use (1).
Don't do range analysis if we're on the inner side of an outer join (2).
Do range analysis if we're on the inner side of a semi-join (3).
Don't do range analysis for materialized subqueries (4).
Don't do range analysis for materialized derived tables (5)
*/
if ((!s->const_keys.is_clear_all() ||
!bitmap_is_clear_all(&s->table->cond_set)) && // (1)
(!s->table->pos_in_table_list->embedding || // (2)
(s->table->pos_in_table_list->embedding && // (3)
s->table->pos_in_table_list->embedding->sj_on_expr)) && // (3)
!s->table->is_filled_at_execution() && // (4)
!(s->table->pos_in_table_list->derived && // (5)
s->table->pos_in_table_list->is_materialized_derived())) // (5)
{
bool impossible_range= FALSE;
ha_rows records= HA_POS_ERROR;
SQL_SELECT *select= 0;
if (!s->const_keys.is_clear_all())
{
select= make_select(s->table, found_const_table_map,
found_const_table_map,
*s->on_expr_ref ? *s->on_expr_ref : join->conds,
(SORT_INFO*) 0,
1, &error);
if (!select)
goto error;
records= get_quick_record_count(join->thd, select, s->table,
&s->const_keys, join->row_limit);
/* Range analyzer could modify the condition. */
if (*s->on_expr_ref)
*s->on_expr_ref= select->cond;
else
{
join->conds= select->cond;
if (join->conds && join->conds->type() == Item::COND_ITEM &&
((Item_cond*) (join->conds))->functype() ==
Item_func::COND_AND_FUNC)
join->cond_equal= &((Item_cond_and*) (join->conds))->m_cond_equal;
}
s->quick=select->quick;
s->needed_reg=select->needed_reg;
select->quick=0;
impossible_range= records == 0 && s->table->reginfo.impossible_range;
}
if (!impossible_range)
{
if (join->thd->variables.optimizer_use_condition_selectivity > 1)
calculate_cond_selectivity_for_table(join->thd, s->table,
*s->on_expr_ref ?
s->on_expr_ref : &join->conds);
if (s->table->reginfo.impossible_range)
{
impossible_range= TRUE;
records= 0;
}
}
if (impossible_range)
{
/*
Impossible WHERE or ON expression
          In case of ON, we mark that we match one empty NULL row.
In case of WHERE, don't set found_const_table_map to get the
caller to abort with a zero row result.
*/
join->const_table_map|= s->table->map;
set_position(join,const_count++,s,(KEYUSE*) 0);
s->type= JT_CONST;
if (*s->on_expr_ref)
{
/* Generate empty row */
s->info= ET_IMPOSSIBLE_ON_CONDITION;
found_const_table_map|= s->table->map;
s->type= JT_CONST;
mark_as_null_row(s->table); // All fields are NULL
}
}
if (records != HA_POS_ERROR)
{
s->found_records=records;
s->read_time= s->quick ? s->quick->read_time : 0.0;
}
if (select)
delete select;
}
}
if (pull_out_semijoin_tables(join))
DBUG_RETURN(TRUE);
join->join_tab=stat;
join->top_join_tab_count= table_count;
join->map2table=stat_ref;
join->table= table_vector;
join->const_tables=const_count;
join->found_const_table_map=found_const_table_map;
if (join->const_tables != join->table_count)
optimize_keyuse(join, keyuse_array);
DBUG_ASSERT(!join->conds || !join->cond_equal ||
!join->cond_equal->current_level.elements ||
(join->conds->type() == Item::COND_ITEM &&
((Item_cond*) (join->conds))->functype() ==
Item_func::COND_AND_FUNC &&
join->cond_equal ==
&((Item_cond_and *) (join->conds))->m_cond_equal) ||
(join->conds->type() == Item::FUNC_ITEM &&
((Item_func*) (join->conds))->functype() ==
Item_func::MULT_EQUAL_FUNC &&
join->cond_equal->current_level.elements == 1 &&
join->cond_equal->current_level.head() == join->conds));
if (optimize_semijoin_nests(join, all_table_map))
DBUG_RETURN(TRUE); /* purecov: inspected */
{
double records= 1;
SELECT_LEX_UNIT *unit= join->select_lex->master_unit();
/* Find an optimal join order of the non-constant tables. */
if (join->const_tables != join->table_count)
{
if (choose_plan(join, all_table_map & ~join->const_table_map))
goto error;
#ifdef HAVE_valgrind
// JOIN::positions holds the current query plan. We've already
// made the plan choice, so we should only use JOIN::best_positions
for (uint k=join->const_tables; k < join->table_count; k++)
MEM_UNDEFINED(&join->positions[k], sizeof(join->positions[k]));
#endif
}
else
{
memcpy((uchar*) join->best_positions,(uchar*) join->positions,
sizeof(POSITION)*join->const_tables);
join->join_record_count= 1.0;
join->best_read=1.0;
}
if (!(join->select_options & SELECT_DESCRIBE) &&
unit->derived && unit->derived->is_materialized_derived())
{
/*
Calculate estimated number of rows for materialized derived
table/view.
*/
for (i= 0; i < join->table_count ; i++)
if (double rr= join->best_positions[i].records_read)
records= COST_MULT(records, rr);
ha_rows rows= records > (double) HA_ROWS_MAX ? HA_ROWS_MAX : (ha_rows) records;
set_if_smaller(rows, unit->select_limit_cnt);
join->select_lex->increase_derived_records(rows);
}
}
if (join->choose_subquery_plan(all_table_map & ~join->const_table_map))
goto error;
DEBUG_SYNC(join->thd, "inside_make_join_statistics");
/* Generate an execution plan from the found optimal join order. */
DBUG_RETURN(join->thd->check_killed() || join->get_best_combination());
error:
/*
Need to clean up join_tab from TABLEs in case of error.
They won't get cleaned up by JOIN::cleanup() because JOIN::join_tab
may not be assigned yet by this function (which is building join_tab).
Dangling TABLE::reginfo.join_tab may cause part_of_refkey to choke.
*/
{
TABLE_LIST *tmp_table;
List_iterator<TABLE_LIST> ti2(tables_list);
while ((tmp_table= ti2++))
tmp_table->table->reginfo.join_tab= NULL;
}
DBUG_RETURN (1);
}
|
Safe
|
[
"CWE-89"
] |
server
|
5ba77222e9fe7af8ff403816b5338b18b342053c
|
1.2793553528565937e+38
| 731 |
MDEV-21028 Server crashes in Query_arena::set_query_arena upon SELECT from view
If the view has algorithm=temptable, it is not updatable,
so DEFAULT() for its fields is meaningless,
and thus it's NULL or 0/'' for NOT NULL columns.
| 0 |
static void fix_rmode_seg(int seg, struct kvm_save_segment *save)
{
struct kvm_vmx_segment_field *sf = &kvm_vmx_segment_fields[seg];
save->selector = vmcs_read16(sf->selector);
save->base = vmcs_readl(sf->base);
save->limit = vmcs_read32(sf->limit);
save->ar = vmcs_read32(sf->ar_bytes);
vmcs_write16(sf->selector, save->base >> 4);
vmcs_write32(sf->base, save->base & 0xfffff);
vmcs_write32(sf->limit, 0xffff);
vmcs_write32(sf->ar_bytes, 0xf3);
}
|
Safe
|
[
"CWE-20"
] |
linux-2.6
|
16175a796d061833aacfbd9672235f2d2725df65
|
3.3022889835344894e+38
| 13 |
KVM: VMX: Don't allow uninhibited access to EFER on i386
vmx_set_msr() does not allow i386 guests to touch EFER, but they can still
do so through the default: label in the switch. If they set EFER_LME, they
can oops the host.
Fix by having EFER access through the normal channel (which will check for
EFER_LME) even on i386.
Reported-and-tested-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Cc: stable@kernel.org
Signed-off-by: Avi Kivity <avi@redhat.com>
| 0 |
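A minimal sketch of the guard described in the commit message above: on i386, route EFER writes through the common, checked MSR path rather than the unchecked default: case. The helper name follows the KVM code of that era; the surrounding switch is assumed context, not the literal patch.

    case MSR_EFER:
            /* Even on i386, take the common path, which rejects EFER_LME
             * and so prevents the guest from oopsing the host. */
            ret = kvm_set_msr_common(vcpu, msr_index, data);
            break;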
static void iwl_fw_dump_rxf(struct iwl_fw_runtime *fwrt,
struct iwl_fw_error_dump_data **dump_data)
{
struct iwl_fwrt_shared_mem_cfg *cfg = &fwrt->smem_cfg;
unsigned long flags;
IWL_DEBUG_INFO(fwrt, "WRT RX FIFO dump\n");
if (!iwl_trans_grab_nic_access(fwrt->trans, &flags))
return;
if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_RXF)) {
/* Pull RXF1 */
iwl_fwrt_dump_rxf(fwrt, dump_data,
cfg->lmac[0].rxfifo1_size, 0, 0);
/* Pull RXF2 */
iwl_fwrt_dump_rxf(fwrt, dump_data, cfg->rxfifo2_size,
RXF_DIFF_FROM_PREV +
fwrt->trans->trans_cfg->umac_prph_offset, 1);
/* Pull LMAC2 RXF1 */
if (fwrt->smem_cfg.num_lmacs > 1)
iwl_fwrt_dump_rxf(fwrt, dump_data,
cfg->lmac[1].rxfifo1_size,
LMAC2_PRPH_OFFSET, 2);
}
iwl_trans_release_nic_access(fwrt->trans, &flags);
}
|
Safe
|
[
"CWE-400",
"CWE-401"
] |
linux
|
b4b814fec1a5a849383f7b3886b654a13abbda7d
|
2.8690534452298634e+38
| 28 |
iwlwifi: dbg_ini: fix memory leak in alloc_sgtable
In alloc_sgtable, if alloc_page fails, the allocated table should be
released.
Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
| 0 |
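A hedged sketch of the leak fix the message describes: when alloc_page() fails midway through building the sg table, release what was already chained in before returning. The free helper name is illustrative; the real patch walks and frees the partially built table.

    new_page = alloc_page(GFP_KERNEL);
    if (!new_page) {
            free_sgtable(table);    /* illustrative: drop pages already added */
            return NULL;
    }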
xmlXPathDebugDumpLocationSet(FILE *output, xmlLocationSetPtr cur, int depth) {
int i;
char shift[100];
for (i = 0;((i < depth) && (i < 25));i++)
shift[2 * i] = shift[2 * i + 1] = ' ';
shift[2 * i] = shift[2 * i + 1] = 0;
if (cur == NULL) {
fprintf(output, "%s", shift);
fprintf(output, "LocationSet is NULL !\n");
return;
}
for (i = 0;i < cur->locNr;i++) {
fprintf(output, "%s", shift);
fprintf(output, "%d : ", i + 1);
xmlXPathDebugDumpObject(output, cur->locTab[i], depth + 1);
}
}
|
Safe
|
[
"CWE-119"
] |
libxml2
|
91d19754d46acd4a639a8b9e31f50f31c78f8c9c
|
2.058843367942448e+38
| 21 |
Fix the semantic of XPath axis for namespace/attribute context nodes
The processing of namespace and attribute nodes was not compliant
with the XPath-1.0 specification
| 0 |
may_adjust_incsearch_highlighting(
int firstc,
long count,
incsearch_state_T *is_state,
int c)
{
int skiplen, patlen;
pos_T t;
char_u *pat;
int search_flags = SEARCH_NOOF;
int i;
int save;
int search_delim;
// Parsing range may already set the last search pattern.
// NOTE: must call restore_last_search_pattern() before returning!
save_last_search_pattern();
if (!do_incsearch_highlighting(firstc, &search_delim, is_state,
&skiplen, &patlen))
{
restore_last_search_pattern();
return OK;
}
if (patlen == 0 && ccline.cmdbuff[skiplen] == NUL)
{
restore_last_search_pattern();
return FAIL;
}
if (search_delim == ccline.cmdbuff[skiplen])
{
pat = last_search_pattern();
if (pat == NULL)
{
restore_last_search_pattern();
return FAIL;
}
skiplen = 0;
patlen = (int)STRLEN(pat);
}
else
pat = ccline.cmdbuff + skiplen;
cursor_off();
out_flush();
if (c == Ctrl_G)
{
t = is_state->match_end;
if (LT_POS(is_state->match_start, is_state->match_end))
// Start searching at the end of the match not at the beginning of
// the next column.
(void)decl(&t);
search_flags += SEARCH_COL;
}
else
t = is_state->match_start;
if (!p_hls)
search_flags += SEARCH_KEEP;
++emsg_off;
save = pat[patlen];
pat[patlen] = NUL;
i = searchit(curwin, curbuf, &t, NULL,
c == Ctrl_G ? FORWARD : BACKWARD,
pat, count, search_flags, RE_SEARCH, NULL);
--emsg_off;
pat[patlen] = save;
if (i)
{
is_state->search_start = is_state->match_start;
is_state->match_end = t;
is_state->match_start = t;
if (c == Ctrl_T && firstc != '?')
{
// Move just before the current match, so that when nv_search
// finishes the cursor will be put back on the match.
is_state->search_start = t;
(void)decl(&is_state->search_start);
}
else if (c == Ctrl_G && firstc == '?')
{
// Move just after the current match, so that when nv_search
// finishes the cursor will be put back on the match.
is_state->search_start = t;
(void)incl(&is_state->search_start);
}
if (LT_POS(t, is_state->search_start) && c == Ctrl_G)
{
// wrap around
is_state->search_start = t;
if (firstc == '?')
(void)incl(&is_state->search_start);
else
(void)decl(&is_state->search_start);
}
set_search_match(&is_state->match_end);
curwin->w_cursor = is_state->match_start;
changed_cline_bef_curs();
update_topline();
validate_cursor();
highlight_match = TRUE;
save_viewstate(&is_state->old_viewstate);
update_screen(NOT_VALID);
highlight_match = FALSE;
redrawcmdline();
curwin->w_cursor = is_state->match_end;
}
else
vim_beep(BO_ERROR);
restore_last_search_pattern();
return FAIL;
}
|
Safe
|
[
"CWE-122",
"CWE-787"
] |
vim
|
85b6747abc15a7a81086db31289cf1b8b17e6cb1
|
8.983146273033442e+37
| 113 |
patch 8.2.4214: illegal memory access with large 'tabstop' in Ex mode
Problem: Illegal memory access with large 'tabstop' in Ex mode.
Solution: Allocate enough memory.
| 0 |
static int tls12_do_shared_sigalgs(TLS_SIGALGS *shsig,
const unsigned char *pref, size_t preflen,
const unsigned char *allow, size_t allowlen)
{
const unsigned char *ptmp, *atmp;
size_t i, j, nmatch = 0;
for (i = 0, ptmp = pref; i < preflen; i+=2, ptmp+=2)
{
/* Skip disabled hashes or signature algorithms */
if (tls12_get_hash(ptmp[0]) == NULL)
continue;
if (tls12_get_pkey_idx(ptmp[1]) == -1)
continue;
for (j = 0, atmp = allow; j < allowlen; j+=2, atmp+=2)
{
if (ptmp[0] == atmp[0] && ptmp[1] == atmp[1])
{
nmatch++;
if (shsig)
{
shsig->rhash = ptmp[0];
shsig->rsign = ptmp[1];
tls1_lookup_sigalg(&shsig->hash_nid,
&shsig->sign_nid,
&shsig->signandhash_nid,
ptmp);
shsig++;
}
break;
}
}
}
return nmatch;
}
|
Safe
|
[] |
openssl
|
c70a1fee71119a9005b1f304a3bf47694b4a53ac
|
2.725152389210321e+38
| 34 |
Reorganise supported signature algorithm extension processing.
Only store encoded versions of peer and configured signature algorithms.
Determine shared signature algorithms and cache the result along with NID
equivalents of each algorithm.
(backport from HEAD)
| 0 |
//! Resize image to half-size along XY axes, using an optimized filter \newinstance.
CImg<T> get_resize_halfXY() const {
if (is_empty()) return *this;
static const Tfloat kernel[9] = { 0.07842776544f, 0.1231940459f, 0.07842776544f,
0.1231940459f, 0.1935127547f, 0.1231940459f,
0.07842776544f, 0.1231940459f, 0.07842776544f };
CImg<T> I(9), res(_width/2,_height/2,_depth,_spectrum);
T *ptrd = res._data;
cimg_forZC(*this,z,c) cimg_for3x3(*this,x,y,z,c,I,T)
if (x%2 && y%2) *(ptrd++) = (T)
(I[0]*kernel[0] + I[1]*kernel[1] + I[2]*kernel[2] +
I[3]*kernel[3] + I[4]*kernel[4] + I[5]*kernel[5] +
I[6]*kernel[6] + I[7]*kernel[7] + I[8]*kernel[8]);
return res;
|
Safe
|
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
|
2.9071156515181274e+38
| 14 |
Fix other issues in 'CImg<T>::load_bmp()'.
| 0 |
void EncryptedPreMasterSecret::read(SSL& ssl, input_buffer& input)
{
if (input.get_error()) {
ssl.SetError(bad_input);
return;
}
const CertManager& cert = ssl.getCrypto().get_certManager();
RSA rsa(cert.get_privateKey(), cert.get_privateKeyLength(), false);
uint16 cipherLen = rsa.get_cipherLength();
if (ssl.isTLS()) {
byte len[2];
len[0] = input[AUTO];
len[1] = input[AUTO];
ato16(len, cipherLen);
}
alloc(cipherLen);
input.read(secret_, length_);
if (input.get_error()) {
ssl.SetError(bad_input);
return;
}
opaque preMasterSecret[SECRET_LEN];
memset(preMasterSecret, 0, sizeof(preMasterSecret));
rsa.decrypt(preMasterSecret, secret_, length_,
ssl.getCrypto().get_random());
ProtocolVersion pv = ssl.getSecurity().get_connection().chVersion_;
if (pv.major_ != preMasterSecret[0] || pv.minor_ != preMasterSecret[1])
ssl.SetError(pms_version_error); // continue deriving for timing attack
ssl.set_preMaster(preMasterSecret, SECRET_LEN);
ssl.makeMasterSecret();
}
|
Safe
|
[] |
mysql-server
|
b9768521bdeb1a8069c7b871f4536792b65fd79b
|
2.735725404915903e+38
| 35 |
Updated yassl to yassl-2.3.8
(cherry picked from commit 7f9941eab55ed672bfcccd382dafbdbcfdc75aaa)
| 0 |
PJ_DEF(pj_status_t) pjsip_transport_register( pjsip_tpmgr *mgr,
pjsip_transport *tp )
{
int key_len;
pj_uint32_t hval;
transport *tp_ref = NULL;
transport *tp_add = NULL;
/* Init. */
tp->tpmgr = mgr;
pj_bzero(&tp->idle_timer, sizeof(tp->idle_timer));
tp->idle_timer.user_data = tp;
tp->idle_timer.cb = &transport_idle_callback;
/*
* Register to hash table (see Trac ticket #42).
*/
key_len = sizeof(tp->key.type) + tp->addr_len;
pj_lock_acquire(mgr->lock);
hval = 0;
tp_ref = (transport *)pj_hash_get(mgr->table, &tp->key, key_len, &hval);
/* Get an empty entry from the freelist. */
if (pj_list_empty(&mgr->tp_entry_freelist)) {
unsigned i = 0;
TRACE_((THIS_FILE, "Transport list is full, allocate new entry"));
/* Allocate new entry for the freelist. */
for (; i < PJSIP_TRANSPORT_ENTRY_ALLOC_CNT; ++i) {
tp_add = PJ_POOL_ZALLOC_T(mgr->pool, transport);
if (!tp_add)
return PJ_ENOMEM;
pj_list_init(tp_add);
pj_list_push_back(&mgr->tp_entry_freelist, tp_add);
}
}
tp_add = mgr->tp_entry_freelist.next;
tp_add->tp = tp;
pj_list_erase(tp_add);
if (tp_ref) {
/* There's already a transport list from the hash table. Add the
* new transport to the list.
*/
pj_list_push_back(tp_ref, tp_add);
TRACE_((THIS_FILE, "Remote address already registered, "
"appended the transport to the list"));
} else {
/* Transport list not found, add it to the hash table. */
pj_hash_set_np(mgr->table, &tp->key, key_len, hval, tp_add->tp_buf,
tp_add);
TRACE_((THIS_FILE, "Remote address not registered, "
"added the transport to the hash"));
}
/* Add ref transport group lock, if any */
if (tp->grp_lock)
pj_grp_lock_add_ref(tp->grp_lock);
pj_lock_release(mgr->lock);
TRACE_((THIS_FILE, "Transport %s registered: type=%s, remote=%s:%d",
tp->obj_name,
pjsip_transport_get_type_name(tp->key.type),
pj_sockaddr_has_addr(&tp->key.rem_addr)?
addr_string(&tp->key.rem_addr):"",
pj_sockaddr_has_addr(&tp->key.rem_addr)?
pj_sockaddr_get_port(&tp->key.rem_addr):0));
return PJ_SUCCESS;
}
|
Safe
|
[
"CWE-297",
"CWE-295"
] |
pjproject
|
67e46c1ac45ad784db5b9080f5ed8b133c122872
|
2.369668514576192e+37
| 72 |
Merge pull request from GHSA-8hcp-hm38-mfph
* Check hostname during TLS transport selection
* revision based on feedback
* remove the code in create_request that has been moved
| 0 |
size_t rand_pool_bytes_remaining(RAND_POOL *pool)
{
return pool->max_len - pool->len;
}
|
Safe
|
[
"CWE-330"
] |
openssl
|
1b0fe00e2704b5e20334a16d3c9099d1ba2ef1be
|
1.3815706942621539e+38
| 4 |
drbg: ensure fork-safety without using a pthread_atfork handler
When the new OpenSSL CSPRNG was introduced in version 1.1.1,
it was announced in the release notes that it would be fork-safe,
which the old CSPRNG hadn't been.
The fork-safety was implemented using a fork count, which was
incremented by a pthread_atfork handler. Initially, this handler
was enabled by default. Unfortunately, the default behaviour
had to be changed for other reasons in commit b5319bdbd095, so
the new OpenSSL CSPRNG failed to keep its promise.
This commit restores the fork-safety using a different approach.
It replaces the fork count by a fork id, which coincides with
the process id on UNIX-like operating systems and is zero on other
operating systems. It is used to detect when an automatic reseed
after a fork is necessary.
To prevent a future regression, it also adds a test to verify that
the child reseeds after fork.
CVE-2019-1549
Reviewed-by: Paul Dale <paul.dale@oracle.com>
Reviewed-by: Matt Caswell <matt@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/9802)
| 0 |
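A minimal user-space sketch of the fork-id idea in the message above: record a process id when the DRBG is instantiated and force a reseed whenever it no longer matches. Names are illustrative, not the actual OpenSSL internals; on non-UNIX platforms the id would be a constant zero.

    #include <unistd.h>

    static pid_t drbg_fork_id;              /* recorded at instantiation */

    static int drbg_needs_reseed(void)
    {
            pid_t cur = getpid();
            if (cur != drbg_fork_id) {      /* we are in a forked child */
                    drbg_fork_id = cur;
                    return 1;               /* reseed before generating */
            }
            return 0;
    }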
static int nested_vmx_check_tpr_shadow_controls(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
if (!nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW))
return 0;
if (!page_address_valid(vcpu, vmcs12->virtual_apic_page_addr))
return -EINVAL;
return 0;
}
|
Safe
|
[
"CWE-284"
] |
linux
|
727ba748e110b4de50d142edca9d6a9b7e6111d8
|
2.3708227773799316e+38
| 11 |
kvm: nVMX: Enforce cpl=0 for VMX instructions
VMX instructions executed inside a L1 VM will always trigger a VM exit
even when executed with cpl 3. This means we must perform the
privilege check in software.
Fixes: 70f3aac964ae("kvm: nVMX: Remove superfluous VMX instruction fault checks")
Cc: stable@vger.kernel.org
Signed-off-by: Felix Wilhelm <fwilhelm@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
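A sketch of the software privilege check the message calls for, using KVM's standard helpers (the exact placement inside each VMX instruction handler is assumed):

    if (vmx_get_cpl(vcpu)) {                /* VMX instructions require cpl 0 */
            kvm_inject_gp(vcpu, 0);         /* raise #GP(0) in the guest */
            return 1;
    }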
const char* blosc_cbuffer_complib(const void* cbuffer) {
uint8_t* _src = (uint8_t*)(cbuffer); /* current pos for source buffer */
int clibcode;
const char* complib;
/* Read the compressor format/library info */
clibcode = (_src[2] & 0xe0) >> 5;
complib = clibcode_to_clibname(clibcode);
return complib;
}
|
Safe
|
[
"CWE-787"
] |
c-blosc2
|
c4c6470e88210afc95262c8b9fcc27e30ca043ee
|
8.811799519250497e+35
| 10 |
Fixed asan heap buffer overflow when not enough space to write compressed block size.
| 0 |
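blosc_cbuffer_complib() above reads _src[2] without knowing how long the buffer is; a hedged sketch of the kind of guard the overflow fix implies, assuming a length-aware variant (the original API takes no size argument):

    if (cbuffer == NULL || cbuffer_len < BLOSC_MIN_HEADER_LENGTH)
            return NULL;    /* too short to contain the header byte we read */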
void bfq_weights_tree_remove(struct bfq_data *bfqd,
struct bfq_queue *bfqq)
{
struct bfq_entity *entity = bfqq->entity.parent;
for_each_entity(entity) {
struct bfq_sched_data *sd = entity->my_sched_data;
if (sd->next_in_service || sd->in_service_entity) {
/*
* entity is still active, because either
* next_in_service or in_service_entity is not
* NULL (see the comments on the definition of
* next_in_service for details on why
* in_service_entity must be checked too).
*
* As a consequence, its parent entities are
* active as well, and thus this loop must
* stop here.
*/
break;
}
/*
* The decrement of num_groups_with_pending_reqs is
* not performed immediately upon the deactivation of
* entity, but it is delayed to when it also happens
* that the first leaf descendant bfqq of entity gets
* all its pending requests completed. The following
* instructions perform this delayed decrement, if
* needed. See the comments on
* num_groups_with_pending_reqs for details.
*/
if (entity->in_groups_with_pending_reqs) {
entity->in_groups_with_pending_reqs = false;
bfqd->num_groups_with_pending_reqs--;
}
}
/*
* Next function is invoked last, because it causes bfqq to be
* freed if the following holds: bfqq is not in service and
* has no dispatched request. DO NOT use bfqq after the next
* function invocation.
*/
__bfq_weights_tree_remove(bfqd, bfqq,
&bfqd->queue_weights_tree);
|
Safe
|
[
"CWE-416"
] |
linux
|
2f95fa5c955d0a9987ffdc3a095e2f4e62c5f2a9
|
3.0979685313291495e+38
| 48 |
block, bfq: fix use-after-free in bfq_idle_slice_timer_body
In bfq_idle_slice_timer func, bfqq = bfqd->in_service_queue is
not in the bfqd->lock critical section. The bfqq, which is not
equal to NULL in bfq_idle_slice_timer, may be freed after being passed
to bfq_idle_slice_timer_body. So we will access the freed memory.
In addition, considering that bfqq may be in a race, we should
first check whether bfqq is in service before doing something
on it in the bfq_idle_slice_timer_body func. If the bfqq in the race is
not in service, it means the bfqq has been expired through
__bfq_bfqq_expire func, and wait_request flags has been cleared in
__bfq_bfqd_reset_in_service func. So we do not need to re-clear the
wait_request of bfqq which is not in service.
KASAN log is given as follows:
[13058.354613] ==================================================================
[13058.354640] BUG: KASAN: use-after-free in bfq_idle_slice_timer+0xac/0x290
[13058.354644] Read of size 8 at addr ffffa02cf3e63f78 by task fork13/19767
[13058.354646]
[13058.354655] CPU: 96 PID: 19767 Comm: fork13
[13058.354661] Call trace:
[13058.354667] dump_backtrace+0x0/0x310
[13058.354672] show_stack+0x28/0x38
[13058.354681] dump_stack+0xd8/0x108
[13058.354687] print_address_description+0x68/0x2d0
[13058.354690] kasan_report+0x124/0x2e0
[13058.354697] __asan_load8+0x88/0xb0
[13058.354702] bfq_idle_slice_timer+0xac/0x290
[13058.354707] __hrtimer_run_queues+0x298/0x8b8
[13058.354710] hrtimer_interrupt+0x1b8/0x678
[13058.354716] arch_timer_handler_phys+0x4c/0x78
[13058.354722] handle_percpu_devid_irq+0xf0/0x558
[13058.354731] generic_handle_irq+0x50/0x70
[13058.354735] __handle_domain_irq+0x94/0x110
[13058.354739] gic_handle_irq+0x8c/0x1b0
[13058.354742] el1_irq+0xb8/0x140
[13058.354748] do_wp_page+0x260/0xe28
[13058.354752] __handle_mm_fault+0x8ec/0x9b0
[13058.354756] handle_mm_fault+0x280/0x460
[13058.354762] do_page_fault+0x3ec/0x890
[13058.354765] do_mem_abort+0xc0/0x1b0
[13058.354768] el0_da+0x24/0x28
[13058.354770]
[13058.354773] Allocated by task 19731:
[13058.354780] kasan_kmalloc+0xe0/0x190
[13058.354784] kasan_slab_alloc+0x14/0x20
[13058.354788] kmem_cache_alloc_node+0x130/0x440
[13058.354793] bfq_get_queue+0x138/0x858
[13058.354797] bfq_get_bfqq_handle_split+0xd4/0x328
[13058.354801] bfq_init_rq+0x1f4/0x1180
[13058.354806] bfq_insert_requests+0x264/0x1c98
[13058.354811] blk_mq_sched_insert_requests+0x1c4/0x488
[13058.354818] blk_mq_flush_plug_list+0x2d4/0x6e0
[13058.354826] blk_flush_plug_list+0x230/0x548
[13058.354830] blk_finish_plug+0x60/0x80
[13058.354838] read_pages+0xec/0x2c0
[13058.354842] __do_page_cache_readahead+0x374/0x438
[13058.354846] ondemand_readahead+0x24c/0x6b0
[13058.354851] page_cache_sync_readahead+0x17c/0x2f8
[13058.354858] generic_file_buffered_read+0x588/0xc58
[13058.354862] generic_file_read_iter+0x1b4/0x278
[13058.354965] ext4_file_read_iter+0xa8/0x1d8 [ext4]
[13058.354972] __vfs_read+0x238/0x320
[13058.354976] vfs_read+0xbc/0x1c0
[13058.354980] ksys_read+0xdc/0x1b8
[13058.354984] __arm64_sys_read+0x50/0x60
[13058.354990] el0_svc_common+0xb4/0x1d8
[13058.354994] el0_svc_handler+0x50/0xa8
[13058.354998] el0_svc+0x8/0xc
[13058.354999]
[13058.355001] Freed by task 19731:
[13058.355007] __kasan_slab_free+0x120/0x228
[13058.355010] kasan_slab_free+0x10/0x18
[13058.355014] kmem_cache_free+0x288/0x3f0
[13058.355018] bfq_put_queue+0x134/0x208
[13058.355022] bfq_exit_icq_bfqq+0x164/0x348
[13058.355026] bfq_exit_icq+0x28/0x40
[13058.355030] ioc_exit_icq+0xa0/0x150
[13058.355035] put_io_context_active+0x250/0x438
[13058.355038] exit_io_context+0xd0/0x138
[13058.355045] do_exit+0x734/0xc58
[13058.355050] do_group_exit+0x78/0x220
[13058.355054] __wake_up_parent+0x0/0x50
[13058.355058] el0_svc_common+0xb4/0x1d8
[13058.355062] el0_svc_handler+0x50/0xa8
[13058.355066] el0_svc+0x8/0xc
[13058.355067]
[13058.355071] The buggy address belongs to the object at ffffa02cf3e63e70#012 which belongs to the cache bfq_queue of size 464
[13058.355075] The buggy address is located 264 bytes inside of#012 464-byte region [ffffa02cf3e63e70, ffffa02cf3e64040)
[13058.355077] The buggy address belongs to the page:
[13058.355083] page:ffff7e80b3cf9800 count:1 mapcount:0 mapping:ffff802db5c90780 index:0xffffa02cf3e606f0 compound_mapcount: 0
[13058.366175] flags: 0x2ffffe0000008100(slab|head)
[13058.370781] raw: 2ffffe0000008100 ffff7e80b53b1408 ffffa02d730c1c90 ffff802db5c90780
[13058.370787] raw: ffffa02cf3e606f0 0000000000370023 00000001ffffffff 0000000000000000
[13058.370789] page dumped because: kasan: bad access detected
[13058.370791]
[13058.370792] Memory state around the buggy address:
[13058.370797] ffffa02cf3e63e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fb fb
[13058.370801] ffffa02cf3e63e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[13058.370805] >ffffa02cf3e63f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[13058.370808] ^
[13058.370811] ffffa02cf3e63f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[13058.370815] ffffa02cf3e64000: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[13058.370817] ==================================================================
[13058.370820] Disabling lock debugging due to kernel taint
Here, we directly pass the bfqd to bfq_idle_slice_timer_body func.
--
V2->V3: rewrite the comment as suggested by Paolo Valente
V1->V2: add one comment, and add Fixes and Reported-by tag.
Fixes: aee69d78d ("block, bfq: introduce the BFQ-v0 I/O scheduler as an extra scheduler")
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reported-by: Wang Wang <wangwang2@huawei.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Feilong Lin <linfeilong@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
| 0 |
decode_preR13 (Bit_Chain *restrict dat, Dwg_Data *restrict dwg)
{
BITCODE_RL entities_start, entities_end, blocks_start, blocks_end;
BITCODE_RL rl1, rl2;
BITCODE_RS rs2;
Dwg_Object *obj = NULL;
int tbl_id;
int error = 0;
{
int i;
struct Dwg_Header *_obj = &dwg->header;
Bit_Chain *hdl_dat = dat;
dat->byte = 0x06;
// clang-format off
#include "header.spec"
// clang-format on
}
LOG_TRACE ("@0x%lx\n", dat->byte); // 0x14
// tables really
dwg->header.num_sections = 12;
dwg->header.section = (Dwg_Section *)calloc (
1, sizeof (Dwg_Section) * dwg->header.num_sections);
if (!dwg->header.section)
{
LOG_ERROR ("Out of memory");
return DWG_ERR_OUTOFMEM;
}
entities_start = bit_read_RL (dat);
entities_end = bit_read_RL (dat);
LOG_TRACE ("entities 0x%x - 0x%x\n", entities_start, entities_end);
blocks_start = bit_read_RL (dat);
rl1 = bit_read_RL (dat); // 0x40
blocks_end = bit_read_RL (dat);
rl2 = bit_read_RL (dat); // 0x80
LOG_TRACE ("blocks 0x%x (0x%x) - 0x%x (0x%x)\n", blocks_start, rl1,
blocks_end, rl2);
tbl_id = 0;
dwg->header.section[0].number = 0;
dwg->header.section[0].type = (Dwg_Section_Type)SECTION_HEADER_R11;
strcpy (dwg->header.section[0].name, "HEADER");
decode_preR13_section_ptr ("BLOCK", SECTION_BLOCK, dat, dwg);
decode_preR13_section_ptr ("LAYER", SECTION_LAYER, dat, dwg);
decode_preR13_section_ptr ("STYLE", SECTION_STYLE, dat, dwg);
// skip one
decode_preR13_section_ptr ("LTYPE", SECTION_LTYPE, dat, dwg);
decode_preR13_section_ptr ("VIEW", SECTION_VIEW, dat, dwg);
if (dwg->header.section[SECTION_BLOCK].size > dat->size)
{
LOG_ERROR ("BLOCK.size overflow")
return DWG_ERR_INVALIDDWG;
}
LOG_TRACE ("@0x%lx\n", dat->byte); // 0x5e
{
Dwg_Header_Variables *_obj = &dwg->header_vars;
Bit_Chain *hdl_dat = dat;
// clang-format off
#include "header_variables_r11.spec"
// clang-format on
}
LOG_TRACE ("@0x%lx\n", dat->byte); // 0x23a
dat->byte = 0x3ef;
LOG_TRACE ("@0x%lx\n", dat->byte);
decode_preR13_section_ptr ("UCS", SECTION_UCS, dat, dwg);
// skip: 0x500 - dat->bytes
dat->byte = 0x500;
LOG_TRACE ("@0x%lx\n", dat->byte); // 0x23a
decode_preR13_section_ptr ("VPORT", SECTION_VPORT, dat, dwg);
rl1 = bit_read_RL (dat);
rl2 = bit_read_RL (dat);
LOG_TRACE ("?2 long: 0x%x 0x%x\n", rl1, rl2);
decode_preR13_section_ptr ("APPID", SECTION_APPID, dat, dwg);
rl1 = bit_read_RL (dat);
rs2 = bit_read_RS (dat);
LOG_TRACE ("?long+short: 0x%x 0x%x\n", rl1, (unsigned)rs2);
decode_preR13_section_ptr ("DIMSTYLE", SECTION_DIMSTYLE, dat, dwg);
// skip: 0x69f - dat->bytes
dat->byte = 0x69f;
decode_preR13_section_ptr ("VPORT_ENTITY", SECTION_VPORT_ENTITY, dat, dwg);
dat->byte += 38;
// entities
error |= decode_preR13_entities (entities_start, entities_end, 0, dat, dwg);
if (error >= DWG_ERR_CRITICAL)
return error;
dat->byte += 19; /* crc + sentinel? */
error |= decode_preR13_section (SECTION_BLOCK, dat, dwg);
error |= decode_preR13_section (SECTION_LAYER, dat, dwg);
error |= decode_preR13_section (SECTION_STYLE, dat, dwg);
error |= decode_preR13_section (SECTION_LTYPE, dat, dwg);
error |= decode_preR13_section (SECTION_VIEW, dat, dwg);
error |= decode_preR13_section (SECTION_UCS, dat, dwg);
error |= decode_preR13_section (SECTION_VPORT, dat, dwg);
error |= decode_preR13_section (SECTION_APPID, dat, dwg);
error |= decode_preR13_section (SECTION_DIMSTYLE, dat, dwg);
error |= decode_preR13_section (SECTION_VPORT_ENTITY, dat, dwg);
if (error >= DWG_ERR_CRITICAL)
return error;
// blocks
error |= decode_preR13_entities (blocks_start, blocks_end, blocks_start - 0x40000000,
dat, dwg);
if (error >= DWG_ERR_CRITICAL)
return error;
LOG_TRACE ("@0x%lx\n", dat->byte);
// 36 byte: 9x long
rl1 = bit_read_RL (dat);
rl2 = bit_read_RL (dat);
LOG_TRACE ("?2long: 0x%x 0x%x %f\n", rl1, rl2,
(double)dat->chain[dat->byte - 8]);
rl1 = bit_read_RL (dat);
rl2 = bit_read_RL (dat);
LOG_TRACE ("?2long: 0x%x 0x%x %f\n", rl1, rl2,
(double)dat->chain[dat->byte - 8]);
rl1 = bit_read_RL (dat);
rl2 = bit_read_RL (dat);
LOG_TRACE ("?2long: 0x%x 0x%x %f\n", rl1, rl2,
(double)dat->chain[dat->byte - 8]);
rl1 = bit_read_RL (dat);
rl2 = bit_read_RL (dat);
LOG_TRACE ("?2long: 0x%x 0x%x %f\n", rl1, rl2,
(double)dat->chain[dat->byte - 8]);
rl1 = bit_read_RL (dat);
LOG_TRACE ("?1long: 0x%x\n", rl1);
LOG_TRACE ("@0x%lx: 4 block ptrs chk\n", dat->byte);
if ((rl1 = bit_read_RL (dat)) != entities_start)
{
LOG_WARN ("entities_start %x/%x", rl1, entities_start);
}
if ((rl1 = bit_read_RL (dat)) != entities_end)
{
LOG_WARN ("entities_end %x/%x", rl1, entities_end);
}
if ((rl1 = bit_read_RL (dat)) != blocks_start)
{
LOG_WARN ("blocks_start %x/%x", rl1, blocks_start);
}
if ((rl1 = bit_read_RL (dat)) != blocks_end)
{
LOG_WARN ("blocks_end %x/%x", rl1, blocks_end);
}
// 12 byte
LOG_TRACE ("@0x%lx\n", dat->byte);
rl1 = bit_read_RL (dat);
rl2 = bit_read_RL (dat);
LOG_TRACE ("?2long: 0x%x 0x%x\n", rl1, rl2);
rl1 = bit_read_RL (dat);
LOG_TRACE ("?1long: 0x%x\n", rl1);
dat->byte = blocks_end + 36 + 4 * 4 + 12;
LOG_TRACE ("@0x%lx\n", dat->byte);
decode_preR13_section_chk (SECTION_BLOCK, dat, dwg);
decode_preR13_section_chk (SECTION_LAYER, dat, dwg);
decode_preR13_section_chk (SECTION_STYLE, dat, dwg);
decode_preR13_section_chk (SECTION_LTYPE, dat, dwg);
decode_preR13_section_chk (SECTION_VIEW, dat, dwg);
decode_preR13_section_chk (SECTION_UCS, dat, dwg);
decode_preR13_section_chk (SECTION_VPORT, dat, dwg);
decode_preR13_section_chk (SECTION_APPID, dat, dwg);
decode_preR13_section_chk (SECTION_DIMSTYLE, dat, dwg);
decode_preR13_section_chk (SECTION_VPORT_ENTITY, dat, dwg);
rl1 = bit_read_RL (dat);
LOG_TRACE ("long 0x%x\n", rl1); // address
return 0;
}
|
Safe
|
[
"CWE-703",
"CWE-835"
] |
libredwg
|
c6f6668b82bfe595899cc820279ac37bb9ef16f5
|
5.803392690845433e+37
| 171 |
cleanup tio.unknown
not needed anymore, we only have UNKNOWN_OBJ or UNKNOWN_ENT with full common
entity_data.
Fixes GH #178 heap_overflow2
| 0 |
static inline u64 perf_cgroup_event_time(struct perf_event *event)
{
struct perf_cgroup_info *t;
t = per_cpu_ptr(event->cgrp->info, event->cpu);
return t->time;
}
|
Safe
|
[
"CWE-703",
"CWE-189"
] |
linux
|
8176cced706b5e5d15887584150764894e94e02f
|
1.6177424102783752e+38
| 7 |
perf: Treat attr.config as u64 in perf_swevent_init()
Trinity discovered that we fail to check all 64 bits of
attr.config passed by user space, resulting to out-of-bounds
access of the perf_swevent_enabled array in
sw_perf_event_destroy().
Introduced in commit b0a873ebb ("perf: Register PMU
implementations").
Signed-off-by: Tommi Rantala <tt.rantala@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: davej@redhat.com
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Link: http://lkml.kernel.org/r/1365882554-30259-1-git-send-email-tt.rantala@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
| 0 |
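A sketch of the one-line fix described in the message: hold attr.config in a u64 local so the high bits participate in the range check instead of being truncated (surrounding context approximate):

    u64 event_id = event->attr.config;      /* previously truncated to int */

    if (event_id >= PERF_COUNT_SW_MAX)      /* now rejects all out-of-range ids */
            return -ENOENT;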
addBackwardRuleWithMultipleCells(widechar *cells, int count,
TranslationTableOffset *newRuleOffset, TranslationTableRule *newRule,
TranslationTableHeader *table) {
/* direction = 1, newRule->dotslen > 1 */
TranslationTableRule *currentRule = NULL;
TranslationTableOffset *currentOffsetPtr = &table->backRules[_lou_stringHash(cells)];
if (newRule->opcode == CTO_SwapCc) return;
while (*currentOffsetPtr) {
int currentLength;
int newLength;
currentRule = (TranslationTableRule *)&table->ruleArea[*currentOffsetPtr];
currentLength = currentRule->dotslen + currentRule->charslen;
newLength = count + newRule->charslen;
if (newLength > currentLength) break;
if (currentLength == newLength)
if ((currentRule->opcode == CTO_Always) && (newRule->opcode != CTO_Always))
break;
currentOffsetPtr = ¤tRule->dotsnext;
}
newRule->dotsnext = *currentOffsetPtr;
*currentOffsetPtr = *newRuleOffset;
}
|
Safe
|
[
"CWE-787"
] |
liblouis
|
fb2bfce4ed49ac4656a8f7e5b5526e4838da1dde
|
2.7595973656453135e+38
| 22 |
Fix yet another buffer overflow in the braille table parser
Reported by Henri Salo
Fixes #592
| 0 |
WandExport DrawingWand *CloneDrawingWand(const DrawingWand *wand)
{
DrawingWand
*clone_wand;
register ssize_t
i;
assert(wand != (DrawingWand *) NULL);
assert(wand->signature == MagickWandSignature);
if (wand->debug != MagickFalse)
(void) LogMagickEvent(WandEvent,GetMagickModule(),"%s",wand->name);
clone_wand=(DrawingWand *) AcquireMagickMemory(sizeof(*clone_wand));
if (clone_wand == (DrawingWand *) NULL)
ThrowWandFatalException(ResourceLimitFatalError,
"MemoryAllocationFailed",GetExceptionMessage(errno));
(void) ResetMagickMemory(clone_wand,0,sizeof(*clone_wand));
clone_wand->id=AcquireWandId();
(void) FormatLocaleString(clone_wand->name,MagickPathExtent,
"DrawingWand-%.20g",(double) clone_wand->id);
clone_wand->exception=AcquireExceptionInfo();
InheritException(clone_wand->exception,wand->exception);
clone_wand->mvg=AcquireString(wand->mvg);
clone_wand->mvg_length=strlen(clone_wand->mvg);
clone_wand->mvg_alloc=wand->mvg_length+1;
clone_wand->mvg_width=wand->mvg_width;
clone_wand->pattern_id=AcquireString(wand->pattern_id);
clone_wand->pattern_offset=wand->pattern_offset;
clone_wand->pattern_bounds=wand->pattern_bounds;
clone_wand->index=wand->index;
clone_wand->graphic_context=(DrawInfo **) AcquireQuantumMemory((size_t)
wand->index+1UL,sizeof(*wand->graphic_context));
if (clone_wand->graphic_context == (DrawInfo **) NULL)
ThrowWandFatalException(ResourceLimitFatalError,"MemoryAllocationFailed",
GetExceptionMessage(errno));
for (i=0; i <= (ssize_t) wand->index; i++)
clone_wand->graphic_context[i]=CloneDrawInfo((ImageInfo *) NULL,
wand->graphic_context[i]);
clone_wand->filter_off=wand->filter_off;
clone_wand->indent_depth=wand->indent_depth;
clone_wand->path_operation=wand->path_operation;
clone_wand->path_mode=wand->path_mode;
clone_wand->image=wand->image;
if (wand->image != (Image *) NULL)
clone_wand->image=CloneImage(wand->image,0,0,MagickTrue,
clone_wand->exception);
clone_wand->destroy=MagickTrue;
clone_wand->debug=IsEventLogging();
if (clone_wand->debug != MagickFalse)
(void) LogMagickEvent(WandEvent,GetMagickModule(),"%s",clone_wand->name);
clone_wand->signature=MagickWandSignature;
return(clone_wand);
}
|
Safe
|
[
"CWE-476"
] |
ImageMagick
|
6ad5fc3c9b652eec27fc0b1a0817159f8547d5d9
|
1.8649988734531673e+38
| 53 |
https://github.com/ImageMagick/ImageMagick/issues/716
| 0 |
static int airo_get_scan(struct net_device *dev,
struct iw_request_info *info,
struct iw_point *dwrq,
char *extra)
{
struct airo_info *ai = dev->ml_priv;
BSSListElement *net;
int err = 0;
char *current_ev = extra;
/* If a scan is in-progress, return -EAGAIN */
if (ai->scan_timeout > 0)
return -EAGAIN;
if (down_interruptible(&ai->sem))
return -EAGAIN;
list_for_each_entry (net, &ai->network_list, list) {
/* Translate to WE format this entry */
current_ev = airo_translate_scan(dev, info, current_ev,
extra + dwrq->length,
&net->bss);
/* Check if there is space for one more entry */
if((extra + dwrq->length - current_ev) <= IW_EV_ADDR_LEN) {
/* Ask user space to try again with a bigger buffer */
err = -E2BIG;
goto out;
}
}
/* Length of data */
dwrq->length = (current_ev - extra);
dwrq->flags = 0; /* todo */
out:
up(&ai->sem);
return err;
}
|
Safe
|
[
"CWE-703",
"CWE-264"
] |
linux
|
550fd08c2cebad61c548def135f67aba284c6162
|
1.8728027928615287e+38
| 39 |
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption of
course, and need to be fixed up. This patch identifies those drivers, and marks
them as not being able to support the safe transmission of skbs by clearing the
IFF_TX_SKB_SHARING flag in priv_flags
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Karsten Keil <isdn@linux-pingi.de>
CC: "David S. Miller" <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Patrick McHardy <kaber@trash.net>
CC: Krzysztof Halasa <khc@pm.waw.pl>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: Marcel Holtmann <marcel@holtmann.org>
CC: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
void CairoOutputDev::drawImageMask(GfxState *state, Object *ref, Stream *str,
int width, int height, GBool invert,
GBool interpolate, GBool inlineImg) {
/* FIXME: Doesn't the image mask support any colorspace? */
cairo_set_source (cairo, fill_pattern);
/* work around a cairo bug when scaling 1x1 surfaces */
if (width == 1 && height == 1) {
ImageStream *imgStr;
Guchar pix;
int invert_bit;
imgStr = new ImageStream(str, width, 1, 1);
imgStr->reset();
imgStr->getPixel(&pix);
imgStr->close();
delete imgStr;
invert_bit = invert ? 1 : 0;
if (pix ^ invert_bit)
return;
cairo_save (cairo);
cairo_rectangle (cairo, 0., 0., width, height);
cairo_fill (cairo);
cairo_restore (cairo);
if (cairo_shape) {
cairo_save (cairo_shape);
cairo_rectangle (cairo_shape, 0., 0., width, height);
cairo_fill (cairo_shape);
cairo_restore (cairo_shape);
}
return;
}
if (state->getFillColorSpace()->getMode() == csPattern)
cairo_push_group_with_content (cairo, CAIRO_CONTENT_ALPHA);
/* shape is 1.0 for painted areas, 0.0 for unpainted ones */
cairo_matrix_t matrix;
cairo_get_matrix (cairo, &matrix);
//XXX: it is possible that we should only do sub pixel positioning if
  // we are rendering fonts
if (!printing && prescaleImages && matrix.xy == 0.0 && matrix.yx == 0.0) {
drawImageMaskPrescaled(state, ref, str, width, height, invert, interpolate, inlineImg);
} else {
drawImageMaskRegular(state, ref, str, width, height, invert, interpolate, inlineImg);
}
if (state->getFillColorSpace()->getMode() == csPattern) {
if (mask)
cairo_pattern_destroy (mask);
mask = cairo_pop_group (cairo);
}
}
|
Safe
|
[] |
poppler
|
abf167af8b15e5f3b510275ce619e6fdb42edd40
|
2.852780067622364e+38
| 57 |
Implement tiling/patterns in SplashOutputDev
Fixes bug 13518
| 0 |
void test_nghttp2_session_open_stream(void) {
nghttp2_session *session;
nghttp2_session_callbacks callbacks;
nghttp2_stream *stream;
nghttp2_priority_spec pri_spec;
memset(&callbacks, 0, sizeof(nghttp2_session_callbacks));
nghttp2_session_server_new(&session, &callbacks, NULL);
nghttp2_priority_spec_init(&pri_spec, 0, 245, 0);
stream = nghttp2_session_open_stream(session, 1, NGHTTP2_STREAM_FLAG_NONE,
&pri_spec, NGHTTP2_STREAM_OPENED, NULL);
CU_ASSERT(1 == session->num_incoming_streams);
CU_ASSERT(0 == session->num_outgoing_streams);
CU_ASSERT(NGHTTP2_STREAM_OPENED == stream->state);
CU_ASSERT(245 == stream->weight);
CU_ASSERT(&session->root == stream->dep_prev);
CU_ASSERT(NGHTTP2_SHUT_NONE == stream->shut_flags);
stream = nghttp2_session_open_stream(session, 2, NGHTTP2_STREAM_FLAG_NONE,
&pri_spec_default,
NGHTTP2_STREAM_OPENING, NULL);
CU_ASSERT(1 == session->num_incoming_streams);
CU_ASSERT(1 == session->num_outgoing_streams);
CU_ASSERT(&session->root == stream->dep_prev);
CU_ASSERT(NGHTTP2_DEFAULT_WEIGHT == stream->weight);
CU_ASSERT(NGHTTP2_SHUT_NONE == stream->shut_flags);
stream = nghttp2_session_open_stream(session, 4, NGHTTP2_STREAM_FLAG_NONE,
&pri_spec_default,
NGHTTP2_STREAM_RESERVED, NULL);
CU_ASSERT(1 == session->num_incoming_streams);
CU_ASSERT(1 == session->num_outgoing_streams);
CU_ASSERT(&session->root == stream->dep_prev);
CU_ASSERT(NGHTTP2_DEFAULT_WEIGHT == stream->weight);
CU_ASSERT(NGHTTP2_SHUT_RD == stream->shut_flags);
nghttp2_priority_spec_init(&pri_spec, 1, 17, 1);
stream = nghttp2_session_open_stream(session, 3, NGHTTP2_STREAM_FLAG_NONE,
&pri_spec, NGHTTP2_STREAM_OPENED, NULL);
CU_ASSERT(17 == stream->weight);
CU_ASSERT(1 == stream->dep_prev->stream_id);
/* Dependency to idle stream */
nghttp2_priority_spec_init(&pri_spec, 1000000007, 240, 1);
stream = nghttp2_session_open_stream(session, 5, NGHTTP2_STREAM_FLAG_NONE,
&pri_spec, NGHTTP2_STREAM_OPENED, NULL);
CU_ASSERT(240 == stream->weight);
CU_ASSERT(1000000007 == stream->dep_prev->stream_id);
stream = nghttp2_session_get_stream_raw(session, 1000000007);
CU_ASSERT(NGHTTP2_DEFAULT_WEIGHT == stream->weight);
CU_ASSERT(&session->root == stream->dep_prev);
/* Dependency to closed stream which is not in dependency tree */
session->last_recv_stream_id = 7;
nghttp2_priority_spec_init(&pri_spec, 7, 10, 0);
stream = nghttp2_session_open_stream(session, 9, NGHTTP2_FLAG_NONE, &pri_spec,
NGHTTP2_STREAM_OPENED, NULL);
CU_ASSERT(NGHTTP2_DEFAULT_WEIGHT == stream->weight);
CU_ASSERT(&session->root == stream->dep_prev);
nghttp2_session_del(session);
nghttp2_session_client_new(&session, &callbacks, NULL);
stream = nghttp2_session_open_stream(session, 4, NGHTTP2_STREAM_FLAG_NONE,
&pri_spec_default,
NGHTTP2_STREAM_RESERVED, NULL);
CU_ASSERT(0 == session->num_incoming_streams);
CU_ASSERT(0 == session->num_outgoing_streams);
CU_ASSERT(&session->root == stream->dep_prev);
CU_ASSERT(NGHTTP2_DEFAULT_WEIGHT == stream->weight);
CU_ASSERT(NGHTTP2_SHUT_WR == stream->shut_flags);
nghttp2_session_del(session);
}
|
Safe
|
[] |
nghttp2
|
0a6ce87c22c69438ecbffe52a2859c3a32f1620f
|
6.226976429621787e+37
| 83 |
Add nghttp2_option_set_max_outbound_ack
| 0 |
static char *argv_group_get_help(RCmd *cmd, RCmdDesc *cd, bool use_color) {
RStrBuf *sb = r_strbuf_new (NULL);
fill_usage_strbuf (sb, cd, use_color);
void **it_cd;
size_t max_len = 0;
r_cmd_desc_children_foreach (cd, it_cd) {
RCmdDesc *child = *(RCmdDesc **)it_cd;
max_len = update_max_len (child, max_len);
}
r_cmd_desc_children_foreach (cd, it_cd) {
RCmdDesc *child = *(RCmdDesc **)it_cd;
print_child_help (sb, child, max_len, use_color);
}
return r_strbuf_drain (sb);
}
|
Safe
|
[
"CWE-125",
"CWE-787"
] |
radare2
|
0052500c1ed5bf8263b26b9fd7773dbdc6f170c4
|
2.751317886783818e+38
| 18 |
Fix heap OOB read in macho.iterate_chained_fixups ##crash
* Reported by peacock-doris via huntr.dev
* Reproducer 'tests_65305'
mrmacete:
* Return early if segs_count is 0
* Initialize segs_count also for reconstructed fixups
Co-authored-by: pancake <pancake@nopcode.org>
Co-authored-by: Francesco Tamagni <mrmacete@protonmail.ch>
| 0 |
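A hedged sketch of the early return the notes above describe (identifiers illustrative):

    if (!segs_count) {
            return;         /* no chained-fixup segments: avoids the OOB read */
    }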
static av_cold int hevc_init_context(AVCodecContext *avctx)
{
HEVCContext *s = avctx->priv_data;
int i;
s->avctx = avctx;
s->HEVClc = av_mallocz(sizeof(HEVCLocalContext));
if (!s->HEVClc)
goto fail;
s->HEVClcList[0] = s->HEVClc;
s->sList[0] = s;
s->cabac_state = av_malloc(HEVC_CONTEXTS);
if (!s->cabac_state)
goto fail;
s->output_frame = av_frame_alloc();
if (!s->output_frame)
goto fail;
for (i = 0; i < FF_ARRAY_ELEMS(s->DPB); i++) {
s->DPB[i].frame = av_frame_alloc();
if (!s->DPB[i].frame)
goto fail;
s->DPB[i].tf.f = s->DPB[i].frame;
}
s->max_ra = INT_MAX;
s->md5_ctx = av_md5_alloc();
if (!s->md5_ctx)
goto fail;
ff_bswapdsp_init(&s->bdsp);
s->context_initialized = 1;
s->eos = 0;
ff_hevc_reset_sei(&s->sei);
return 0;
fail:
hevc_decode_free(avctx);
return AVERROR(ENOMEM);
}
|
Safe
|
[
"CWE-476"
] |
FFmpeg
|
54655623a82632e7624714d7b2a3e039dc5faa7e
|
2.6484754887729636e+38
| 47 |
avcodec/hevcdec: Avoid only partly skiping duplicate first slices
Fixes: NULL pointer dereference and out of array access
Fixes: 13871/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-5746167087890432
Fixes: 13845/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-5650370728034304
This also fixes the return code for explode mode
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
| 0 |
static void hda_codec_device_class_init(ObjectClass *klass, void *data)
{
DeviceClass *k = DEVICE_CLASS(klass);
k->realize = hda_codec_dev_realize;
k->unrealize = hda_codec_dev_unrealize;
set_bit(DEVICE_CATEGORY_SOUND, k->categories);
k->bus_type = TYPE_HDA_BUS;
device_class_set_props(k, hda_props);
|
Safe
|
[
"CWE-787"
] |
qemu
|
79fa99831debc9782087e834382c577215f2f511
|
1.219825382083477e+38
| 9 |
hw/audio/intel-hda: Restrict DMA engine to memories (not MMIO devices)
Issue #542 reports a reentrancy problem when the DMA engine accesses
the HDA controller I/O registers. Fix by restricting the DMA engine
to memory regions (forbidding MMIO devices such as the HDA controller).
Reported-by: OSS-Fuzz (Issue 28435)
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/542
CVE: CVE-2021-3611
Message-Id: <20211218160912.1591633-3-philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
| 0 |
uint64_t vhost_vsock_common_get_features(VirtIODevice *vdev, uint64_t features,
Error **errp)
{
VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
if (vvc->seqpacket != ON_OFF_AUTO_OFF) {
virtio_add_feature(&features, VIRTIO_VSOCK_F_SEQPACKET);
}
features = vhost_get_features(&vvc->vhost_dev, feature_bits, features);
if (vvc->seqpacket == ON_OFF_AUTO_ON &&
!virtio_has_feature(features, VIRTIO_VSOCK_F_SEQPACKET)) {
error_setg(errp, "vhost-vsock backend doesn't support seqpacket");
}
return features;
}
|
Safe
|
[
"CWE-772"
] |
qemu
|
8d1b247f3748ac4078524130c6d7ae42b6140aaf
|
1.7317173907016447e+38
| 18 |
vhost-vsock: detach the virqueue element in case of error
In vhost_vsock_common_send_transport_reset(), if an element popped from
the virtqueue is invalid, we should call virtqueue_detach_element() to
detach it from the virtqueue before freeing its memory.
Fixes: fc0b9b0e1c ("vhost-vsock: add virtio sockets device")
Fixes: CVE-2022-26354
Cc: qemu-stable@nongnu.org
Reported-by: VictorV <vv474172261@gmail.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20220228095058.27899-1-sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| 0 |
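A sketch of the error path the message describes, using QEMU's virtqueue API; the validation condition and surrounding context are assumed:

    /* The popped element failed validation: detach it from the
     * virtqueue before freeing, so its mappings are released. */
    virtqueue_detach_element(vq, elem, 0);
    g_free(elem);
    return;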
static int for_each_bisect_ref(const char *submodule, each_ref_fn fn, void *cb_data, const char *term) {
struct strbuf bisect_refs = STRBUF_INIT;
int status;
strbuf_addf(&bisect_refs, "refs/bisect/%s", term);
status = for_each_ref_in_submodule(submodule, bisect_refs.buf, fn, cb_data);
strbuf_release(&bisect_refs);
return status;
}
|
Safe
|
[] |
git
|
a937b37e766479c8e780b17cce9c4b252fd97e40
|
2.8006953145228406e+38
| 8 |
revision: quit pruning diff more quickly when possible
When the revision traversal machinery is given a pathspec,
we must compute the parent-diff for each commit to determine
which ones are TREESAME. We set the QUICK diff flag to avoid
looking at more entries than we need; we really just care
whether there are any changes at all.
But there is one case where we want to know a bit more: if
--remove-empty is set, we care about finding cases where the
change consists only of added entries (in which case we may
prune the parent in try_to_simplify_commit()). To cover that
case, our file_add_remove() callback does not quit the diff
upon seeing an added entry; it keeps looking for other types
of entries.
But this means when --remove-empty is not set (and it is not
by default), we compute more of the diff than is necessary.
You can see this in a pathological case where a commit adds
a very large number of entries, and we limit based on a
broad pathspec. E.g.:
perl -e '
chomp(my $blob = `git hash-object -w --stdin </dev/null`);
for my $a (1..1000) {
for my $b (1..1000) {
print "100644 $blob\t$a/$b\n";
}
}
' | git update-index --index-info
git commit -qm add
git rev-list HEAD -- .
This case takes about 100ms now, but after this patch only
needs 6ms. That's not a huge improvement, but it's easy to
get and it protects us against even more pathological cases
(e.g., going from 1 million to 10 million files would take
ten times as long with the current code, but not increase at
all after this patch).
This is reported to minorly speed-up pathspec limiting in
real world repositories (like the 100-million-file Windows
repository), but probably won't make a noticeable difference
outside of pathological setups.
This patch actually covers the case without --remove-empty,
and the case where we see only deletions. See the in-code
comment for details.
Note that we have to add a new member to the diff_options
struct so that our callback can see the value of
revs->remove_empty_trees. This callback parameter could be
passed to the "add_remove" and "change" callbacks, but
there's not much point. They already receive the
diff_options struct, and doing it this way avoids having to
update the function signature of the other callbacks
(arguably the format_callback and output_prefix functions
could benefit from the same simplification).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
| 0 |
struct md_thread *md_register_thread(void (*run) (struct md_thread *),
struct mddev *mddev, const char *name)
{
struct md_thread *thread;
thread = kzalloc(sizeof(struct md_thread), GFP_KERNEL);
if (!thread)
return NULL;
init_waitqueue_head(&thread->wqueue);
thread->run = run;
thread->mddev = mddev;
thread->timeout = MAX_SCHEDULE_TIMEOUT;
thread->tsk = kthread_run(md_thread, thread,
"%s_%s",
mdname(thread->mddev),
name);
if (IS_ERR(thread->tsk)) {
kfree(thread);
return NULL;
}
return thread;
}
|
Safe
|
[
"CWE-200"
] |
linux
|
b6878d9e03043695dbf3fa1caa6dfc09db225b16
|
2.233782022489129e+38
| 24 |
md: use kzalloc() when bitmap is disabled
In drivers/md/md.c get_bitmap_file() uses kmalloc() for creating a
mdu_bitmap_file_t called "file".
5769 file = kmalloc(sizeof(*file), GFP_NOIO);
5770 if (!file)
5771 return -ENOMEM;
This structure is copied to user space at the end of the function.
5786 if (err == 0 &&
5787 copy_to_user(arg, file, sizeof(*file)))
5788 err = -EFAULT
But if bitmap is disabled only the first byte of "file" is initialized
with zero, so it's possible to read some bytes (up to 4095) of kernel
space memory from user space. This is an information leak.
5775 /* bitmap disabled, zero the first byte and copy out */
5776 if (!mddev->bitmap_info.file)
5777 file->pathname[0] = '\0';
Signed-off-by: Benjamin Randazzo <benjamin@randazzo.fr>
Signed-off-by: NeilBrown <neilb@suse.com>
| 0 |
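The fix itself, in sketch form: switch the allocation to kzalloc() so every byte later copied to user space starts out zeroed.

    file = kzalloc(sizeof(*file), GFP_NOIO);        /* was kmalloc() */
    if (!file)
            return -ENOMEM;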
dissect_aggregation_extension(tvbuff_t *tvb, packet_info *pinfo _U_, proto_tree *tree, int offset, int data_len)
{
proto_tree *ftree;
ptvcursor_t *csr;
ftree = proto_tree_add_subtree(tree, tvb, offset, data_len, ett_aggregation_extension, NULL, "Aggregation Extension");
add_ppi_field_header(tvb, ftree, &offset);
data_len -= 4; /* Subtract field header length */
if (data_len != PPI_AGGREGATION_EXTENSION_LEN) {
proto_tree_add_expert_format(ftree, pinfo, &ei_ppi_invalid_length, tvb, offset, data_len, "Invalid length: %u", data_len);
THROW(ReportedBoundsError);
}
csr = ptvcursor_new(ftree, tvb, offset);
ptvcursor_add(csr, hf_aggregation_extension_interface_id, 4, ENC_LITTLE_ENDIAN); /* Last */
ptvcursor_free(csr);
}
|
Safe
|
[
"CWE-20"
] |
wireshark
|
2c13e97d656c1c0ac4d76eb9d307664aae0e0cf7
|
3.089153579839885e+38
| 19 |
The WTAP_ENCAP_ETHERNET dissector needs to be passed a struct eth_phdr.
We now require that. Make it so.
Bug: 12440
Change-Id: Iffee520976b013800699bde3c6092a3e86be0d76
Reviewed-on: https://code.wireshark.org/review/15424
Reviewed-by: Guy Harris <guy@alum.mit.edu>
| 0 |
utf16be_get_case_fold_codes_by_str(OnigCaseFoldType flag,
const OnigUChar* p, const OnigUChar* end, OnigCaseFoldCodeItem items[])
{
return onigenc_unicode_get_case_fold_codes_by_str(ONIG_ENCODING_UTF16_BE,
flag, p, end, items);
}
|
Safe
|
[
"CWE-125"
] |
php-src
|
9d6c59eeea88a3e9d7039cb4fed5126ef704593a
|
3.0154037379574885e+38
| 6 |
Fix bug #77418 - Heap overflow in utf32be_mbc_to_code
| 0 |
static void ov511_i2c_w(struct sd *sd, u8 reg, u8 value)
{
struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
int rc, retries;
gspca_dbg(gspca_dev, D_USBO, "ov511_i2c_w %02x %02x\n", reg, value);
/* Three byte write cycle */
for (retries = 6; ; ) {
/* Select camera register */
reg_w(sd, R51x_I2C_SADDR_3, reg);
/* Write "value" to I2C data port of OV511 */
reg_w(sd, R51x_I2C_DATA, value);
/* Initiate 3-byte write cycle */
reg_w(sd, R511_I2C_CTL, 0x01);
do {
rc = reg_r(sd, R511_I2C_CTL);
} while (rc > 0 && ((rc & 1) == 0)); /* Retry until idle */
if (rc < 0)
return;
if ((rc & 2) == 0) /* Ack? */
break;
if (--retries < 0) {
gspca_dbg(gspca_dev, D_USBO, "i2c write retries exhausted\n");
return;
}
}
}
|
Safe
|
[
"CWE-476"
] |
linux
|
998912346c0da53a6dbb71fab3a138586b596b30
|
3.152455188093568e+38
| 33 |
media: ov519: add missing endpoint sanity checks
Make sure to check that we have at least one endpoint before accessing
the endpoint array to avoid dereferencing a NULL-pointer on stream
start.
Note that these sanity checks are not redundant as the driver is mixing
looking up altsettings by index and by number, which need not coincide.
Fixes: 1876bb923c98 ("V4L/DVB (12079): gspca_ov519: add support for the ov511 bridge")
Fixes: b282d87332f5 ("V4L/DVB (12080): gspca_ov519: Fix ov518+ with OV7620AE (Trust spacecam 320)")
Cc: stable <stable@vger.kernel.org> # 2.6.31
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
| 0 |
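A sketch of the endpoint sanity check the message describes, following the usual USB interface layout (exact placement in the driver is assumed):

    struct usb_host_interface *alt = intf->cur_altsetting;

    if (alt->desc.bNumEndpoints < 1)        /* no endpoints: cannot stream */
            return -ENODEV;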
void protocol_filter_save(void) {
// save protocol filter configuration in PROTOCOL_CFG
FILE *fp = fopen(RUN_PROTOCOL_CFG, "wxe");
if (!fp)
errExit("fopen");
fprintf(fp, "%s\n", cfg.protocol);
SET_PERMS_STREAM(fp, 0, 0, 0600);
fclose(fp);
}
|
Safe
|
[
"CWE-269",
"CWE-94"
] |
firejail
|
27cde3d7d1e4e16d4190932347c7151dc2a84c50
|
2.6087431380992092e+38
| 9 |
fixing CVE-2022-31214
| 0 |
ssize_t sys_pread(int fd, void *buf, size_t count, off_t off)
{
ssize_t ret;
do {
ret = pread(fd, buf, count, off);
} while (ret == -1 && errno == EINTR);
return ret;
}
|
Safe
|
[
"CWE-20"
] |
samba
|
d77a74237e660dd2ce9f1e14b02635f8a2569653
|
3.09899180374604e+38
| 9 |
s3: nmbd: Fix bug 10633 - nmbd denial of service
The Linux kernel has a bug in that it can give spurious
wakeups on a non-blocking UDP socket for a non-deliverable packet.
When nmbd was changed to use non-blocking sockets it
became vulnerable to a spurious wakeup from poll/epoll.
Fix sys_recvfile() to return on EWOULDBLOCK/EAGAIN.
CVE-2014-0244
https://bugzilla.samba.org/show_bug.cgi?id=10633
Signed-off-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Andreas Schneider <asn@samba.org>
| 0 |
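The sys_pread() wrapper above retries only on EINTR; the fix described in the message makes the UDP receive path return early on EWOULDBLOCK/EAGAIN instead of spinning on spurious wakeups. A self-contained sketch of that pattern (illustrative only, not Samba's actual sys_recvfile()):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Drain what is available, but bail out on EWOULDBLOCK/EAGAIN so a
 * spurious poll() wakeup cannot spin forever on a non-blocking socket. */
ssize_t recv_available(int fd, char *buf, size_t count)
{
	size_t total = 0;

	while (total < count) {
		ssize_t n = recv(fd, buf + total, count - total, 0);
		if (n == -1 && errno == EINTR)
			continue;	/* retry real interrupts */
		if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
			break;		/* no data yet: return instead of spinning */
		if (n <= 0)
			break;		/* hard error or peer closed: stop */
		total += n;
	}
	return total;
}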
_zip_dirent_finalize(struct zip_dirent *zde)
{
if (zde->filename_len > 0) {
free(zde->filename);
}
zde->filename = NULL;
if (zde->extrafield_len > 0) {
free(zde->extrafield);
}
zde->extrafield = NULL;
if (zde->comment_len > 0) {
free(zde->comment);
}
zde->comment = NULL;
}
|
Safe
|
[
"CWE-189"
] |
php-src
|
ef8fc4b53d92fbfcd8ef1abbd6f2f5fe2c4a11e5
|
2.206731780108335e+38
| 15 |
Fix bug #69253 - ZIP Integer Overflow leads to writing past heap boundary
| 0 |
int __perf_event_disable(void *info)
{
struct perf_event *event = info;
struct perf_event_context *ctx = event->ctx;
struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
/*
* If this is a per-task event, need to check whether this
* event's task is the current task on this cpu.
*
* Can trigger due to concurrent perf_event_context_sched_out()
* flipping contexts around.
*/
if (ctx->task && cpuctx->task_ctx != ctx)
return -EINVAL;
raw_spin_lock(&ctx->lock);
/*
* If the event is on, turn it off.
* If it is in error state, leave it in error state.
*/
if (event->state >= PERF_EVENT_STATE_INACTIVE) {
update_context_time(ctx);
update_cgrp_time_from_event(event);
update_group_times(event);
if (event == event->group_leader)
group_sched_out(event, cpuctx, ctx);
else
event_sched_out(event, cpuctx, ctx);
event->state = PERF_EVENT_STATE_OFF;
}
raw_spin_unlock(&ctx->lock);
return 0;
}
|
Safe
|
[
"CWE-703",
"CWE-189"
] |
linux
|
8176cced706b5e5d15887584150764894e94e02f
|
2.207923090288398e+38
| 37 |
perf: Treat attr.config as u64 in perf_swevent_init()
Trinity discovered that we fail to check all 64 bits of
attr.config passed by user space, resulting in out-of-bounds
access of the perf_swevent_enabled array in
sw_perf_event_destroy().
Introduced in commit b0a873ebb ("perf: Register PMU
implementations").
Signed-off-by: Tommi Rantala <tt.rantala@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: davej@redhat.com
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Link: http://lkml.kernel.org/r/1365882554-30259-1-git-send-email-tt.rantala@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
| 0 |
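The flaw described in the message is a classic width-truncation bug: attr.config is a u64, but validating a truncated copy lets values above 2^32 alias small array indices. A self-contained sketch (PERF_COUNT_SW_MAX here is a stand-in for the kernel enum):

#include <stdint.h>
#include <stdio.h>

#define PERF_COUNT_SW_MAX 11   /* stand-in for the real kernel enum value */

static int swevent_id_ok_buggy(uint64_t config)
{
    int event_id = (int)config;            /* truncates the upper 32 bits */
    return event_id >= 0 && event_id < PERF_COUNT_SW_MAX;
}

static int swevent_id_ok_fixed(uint64_t config)
{
    return config < PERF_COUNT_SW_MAX;     /* compare all 64 bits */
}

int main(void)
{
    uint64_t evil = (1ULL << 32) | 3;      /* truncates to 3 in the buggy check */
    printf("buggy accepts: %d, fixed accepts: %d\n",
           swevent_id_ok_buggy(evil), swevent_id_ok_fixed(evil));
    return 0;
}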
int xt_compat_check_entry_offsets(const void *base,
unsigned int target_offset,
unsigned int next_offset)
{
const struct compat_xt_entry_target *t;
const char *e = base;
if (target_offset + sizeof(*t) > next_offset)
return -EINVAL;
t = (void *)(e + target_offset);
if (t->u.target_size < sizeof(*t))
return -EINVAL;
if (target_offset + t->u.target_size > next_offset)
return -EINVAL;
if (strcmp(t->u.user.name, XT_STANDARD_TARGET) == 0 &&
target_offset + sizeof(struct compat_xt_standard_target) != next_offset)
return -EINVAL;
return 0;
}
|
Vulnerable
|
[
"CWE-284",
"CWE-264"
] |
linux
|
ce683e5f9d045e5d67d1312a42b359cb2ab2a13c
|
2.6311211808952817e+38
| 23 |
netfilter: x_tables: check for bogus target offset
We're currently asserting that targetoff + targetsize <= nextoff.
Extend it to also check that targetoff is >= sizeof(xt_entry).
Since this is generic code, add an argument pointing to the start of the
match/target; we can then derive the base structure size from the delta.
We also need the e->elems pointer in a followup change to validate matches.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| 1 |
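Following the commit message, the hardened version gains a pointer to the start of the match/target area and rejects a target_offset that falls inside the base structure. A hedged sketch of the added lower-bound check (signature per the commit description; surrounding details assumed):

int xt_compat_check_entry_offsets(const void *base, const void *elems,
				  unsigned int target_offset,
				  unsigned int next_offset)
{
	long size_of_base_struct = (const char *)elems - (const char *)base;

	/* a target_offset inside the base structure means a bogus entry */
	if (target_offset < size_of_base_struct)
		return -EINVAL;

	/* ... the existing upper-bound and target-size checks follow ... */
	return 0;
}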
get_function_body(
exarg_T *eap,
garray_T *newlines,
char_u *line_arg_in,
char_u **line_to_free)
{
linenr_T sourcing_lnum_top = SOURCING_LNUM;
linenr_T sourcing_lnum_off;
int saved_wait_return = need_wait_return;
char_u *line_arg = line_arg_in;
int vim9_function = eap->cmdidx == CMD_def
|| eap->cmdidx == CMD_block;
#define MAX_FUNC_NESTING 50
char nesting_def[MAX_FUNC_NESTING];
char nesting_inline[MAX_FUNC_NESTING];
int nesting = 0;
getline_opt_T getline_options;
int indent = 2;
char_u *skip_until = NULL;
int ret = FAIL;
int is_heredoc = FALSE;
int heredoc_concat_len = 0;
garray_T heredoc_ga;
char_u *heredoc_trimmed = NULL;
ga_init2(&heredoc_ga, 1, 500);
// Detect having skipped over comment lines to find the return
// type. Add NULL lines to keep the line count correct.
sourcing_lnum_off = get_sourced_lnum(eap->getline, eap->cookie);
if (SOURCING_LNUM < sourcing_lnum_off)
{
sourcing_lnum_off -= SOURCING_LNUM;
if (ga_grow(newlines, sourcing_lnum_off) == FAIL)
goto theend;
while (sourcing_lnum_off-- > 0)
((char_u **)(newlines->ga_data))[newlines->ga_len++] = NULL;
}
nesting_def[0] = vim9_function;
nesting_inline[0] = eap->cmdidx == CMD_block;
getline_options = vim9_function
? GETLINE_CONCAT_CONTBAR : GETLINE_CONCAT_CONT;
for (;;)
{
char_u *theline;
char_u *p;
char_u *arg;
if (KeyTyped)
{
msg_scroll = TRUE;
saved_wait_return = FALSE;
}
need_wait_return = FALSE;
if (line_arg != NULL)
{
// Use eap->arg, split up in parts by line breaks.
theline = line_arg;
p = vim_strchr(theline, '\n');
if (p == NULL)
line_arg += STRLEN(line_arg);
else
{
*p = NUL;
line_arg = p + 1;
}
}
else
{
if (eap->getline == NULL)
theline = getcmdline(':', 0L, indent, getline_options);
else
theline = eap->getline(':', eap->cookie, indent,
getline_options);
if (*eap->cmdlinep == *line_to_free)
*eap->cmdlinep = theline;
vim_free(*line_to_free);
*line_to_free = theline;
}
if (KeyTyped)
lines_left = Rows - 1;
if (theline == NULL)
{
// Use the start of the function for the line number.
SOURCING_LNUM = sourcing_lnum_top;
if (skip_until != NULL)
semsg(_(e_missing_heredoc_end_marker_str), skip_until);
else if (nesting_inline[nesting])
emsg(_(e_missing_end_block));
else if (eap->cmdidx == CMD_def)
emsg(_(e_missing_enddef));
else
emsg(_("E126: Missing :endfunction"));
goto theend;
}
// Detect line continuation: SOURCING_LNUM increased more than one.
sourcing_lnum_off = get_sourced_lnum(eap->getline, eap->cookie);
if (SOURCING_LNUM < sourcing_lnum_off)
sourcing_lnum_off -= SOURCING_LNUM;
else
sourcing_lnum_off = 0;
if (skip_until != NULL)
{
// Don't check for ":endfunc"/":enddef" between
// * ":append" and "."
// * ":python <<EOF" and "EOF"
// * ":let {var-name} =<< [trim] {marker}" and "{marker}"
if (heredoc_trimmed == NULL
|| (is_heredoc && skipwhite(theline) == theline)
|| STRNCMP(theline, heredoc_trimmed,
STRLEN(heredoc_trimmed)) == 0)
{
if (heredoc_trimmed == NULL)
p = theline;
else if (is_heredoc)
p = skipwhite(theline) == theline
? theline : theline + STRLEN(heredoc_trimmed);
else
p = theline + STRLEN(heredoc_trimmed);
if (STRCMP(p, skip_until) == 0)
{
VIM_CLEAR(skip_until);
VIM_CLEAR(heredoc_trimmed);
getline_options = vim9_function
? GETLINE_CONCAT_CONTBAR : GETLINE_CONCAT_CONT;
is_heredoc = FALSE;
if (heredoc_concat_len > 0)
{
// Replace the starting line with all the concatenated
// lines.
ga_concat(&heredoc_ga, theline);
vim_free(((char_u **)(newlines->ga_data))[
heredoc_concat_len - 1]);
((char_u **)(newlines->ga_data))[
heredoc_concat_len - 1] = heredoc_ga.ga_data;
ga_init(&heredoc_ga);
heredoc_concat_len = 0;
theline += STRLEN(theline); // skip the "EOF"
}
}
}
}
else
{
int c;
char_u *end;
// skip ':' and blanks
for (p = theline; VIM_ISWHITE(*p) || *p == ':'; ++p)
;
// Check for "endfunction", "enddef" or "}".
// When a ":" follows it must be a dict key; "enddef: value,"
if (nesting_inline[nesting]
? *p == '}'
: (checkforcmd(&p, nesting_def[nesting]
? "enddef" : "endfunction", 4)
&& *p != ':'))
{
if (nesting-- == 0)
{
char_u *nextcmd = NULL;
if (*p == '|' || *p == '}')
nextcmd = p + 1;
else if (line_arg != NULL && *skipwhite(line_arg) != NUL)
nextcmd = line_arg;
else if (*p != NUL && *p != (vim9_function ? '#' : '"')
&& (vim9_function || p_verbose > 0))
{
SOURCING_LNUM = sourcing_lnum_top
+ newlines->ga_len + 1;
if (eap->cmdidx == CMD_def)
semsg(_(e_text_found_after_enddef_str), p);
else
give_warning2((char_u *)
_("W22: Text found after :endfunction: %s"),
p, TRUE);
}
if (nextcmd != NULL && *skipwhite(nextcmd) != NUL)
{
// Another command follows. If the line came from "eap"
// we can simply point into it, otherwise we need to
// change "eap->cmdlinep".
eap->nextcmd = nextcmd;
if (*line_to_free != NULL
&& *eap->cmdlinep != *line_to_free)
{
vim_free(*eap->cmdlinep);
*eap->cmdlinep = *line_to_free;
*line_to_free = NULL;
}
}
break;
}
}
// Check for mismatched "endfunc" or "enddef".
// We don't check for "def" inside "func" thus we also can't check
// for "enddef".
// We continue to find the end of the function, although we might
// not find it.
else if (nesting_def[nesting])
{
if (checkforcmd(&p, "endfunction", 4) && *p != ':')
emsg(_(e_mismatched_endfunction));
}
else if (eap->cmdidx == CMD_def && checkforcmd(&p, "enddef", 4))
emsg(_(e_mismatched_enddef));
// Increase indent inside "if", "while", "for" and "try", decrease
// at "end".
if (indent > 2 && (*p == '}' || STRNCMP(p, "end", 3) == 0))
indent -= 2;
else if (STRNCMP(p, "if", 2) == 0
|| STRNCMP(p, "wh", 2) == 0
|| STRNCMP(p, "for", 3) == 0
|| STRNCMP(p, "try", 3) == 0)
indent += 2;
// Check for defining a function inside this function.
// Only recognize "def" inside "def", not inside "function",
// For backwards compatibility, see Test_function_python().
c = *p;
if (is_function_cmd(&p)
|| (eap->cmdidx == CMD_def && checkforcmd(&p, "def", 3)))
{
if (*p == '!')
p = skipwhite(p + 1);
p += eval_fname_script(p);
vim_free(trans_function_name(&p, NULL, TRUE, 0, NULL,
NULL, NULL));
if (*skipwhite(p) == '(')
{
if (nesting == MAX_FUNC_NESTING - 1)
emsg(_(e_function_nesting_too_deep));
else
{
++nesting;
nesting_def[nesting] = (c == 'd');
nesting_inline[nesting] = FALSE;
indent += 2;
}
}
}
if (nesting_def[nesting] ? *p != '#' : *p != '"')
{
// Not a comment line: check for nested inline function.
end = p + STRLEN(p) - 1;
while (end > p && VIM_ISWHITE(*end))
--end;
if (end > p + 1 && *end == '{' && VIM_ISWHITE(end[-1]))
{
int is_block;
// check for trailing "=> {": start of an inline function
--end;
while (end > p && VIM_ISWHITE(*end))
--end;
is_block = end > p + 2 && end[-1] == '=' && end[0] == '>';
if (!is_block)
{
char_u *s = p;
// check for line starting with "au" for :autocmd or
// "com" for :command, these can use a {} block
is_block = checkforcmd_noparen(&s, "autocmd", 2)
|| checkforcmd_noparen(&s, "command", 3);
}
if (is_block)
{
if (nesting == MAX_FUNC_NESTING - 1)
emsg(_(e_function_nesting_too_deep));
else
{
++nesting;
nesting_def[nesting] = TRUE;
nesting_inline[nesting] = TRUE;
indent += 2;
}
}
}
}
// Check for ":append", ":change", ":insert". Not for :def.
p = skip_range(p, FALSE, NULL);
if (!vim9_function
&& ((p[0] == 'a' && (!ASCII_ISALPHA(p[1]) || p[1] == 'p'))
|| (p[0] == 'c'
&& (!ASCII_ISALPHA(p[1]) || (p[1] == 'h'
&& (!ASCII_ISALPHA(p[2]) || (p[2] == 'a'
&& (STRNCMP(&p[3], "nge", 3) != 0
|| !ASCII_ISALPHA(p[6])))))))
|| (p[0] == 'i'
&& (!ASCII_ISALPHA(p[1]) || (p[1] == 'n'
&& (!ASCII_ISALPHA(p[2])
|| (p[2] == 's'
&& (!ASCII_ISALPHA(p[3])
|| p[3] == 'e'))))))))
skip_until = vim_strsave((char_u *)".");
// Check for ":python <<EOF", ":tcl <<EOF", etc.
arg = skipwhite(skiptowhite(p));
if (arg[0] == '<' && arg[1] =='<'
&& ((p[0] == 'p' && p[1] == 'y'
&& (!ASCII_ISALNUM(p[2]) || p[2] == 't'
|| ((p[2] == '3' || p[2] == 'x')
&& !ASCII_ISALPHA(p[3]))))
|| (p[0] == 'p' && p[1] == 'e'
&& (!ASCII_ISALPHA(p[2]) || p[2] == 'r'))
|| (p[0] == 't' && p[1] == 'c'
&& (!ASCII_ISALPHA(p[2]) || p[2] == 'l'))
|| (p[0] == 'l' && p[1] == 'u' && p[2] == 'a'
&& !ASCII_ISALPHA(p[3]))
|| (p[0] == 'r' && p[1] == 'u' && p[2] == 'b'
&& (!ASCII_ISALPHA(p[3]) || p[3] == 'y'))
|| (p[0] == 'm' && p[1] == 'z'
&& (!ASCII_ISALPHA(p[2]) || p[2] == 's'))
))
{
// ":python <<" continues until a dot, like ":append"
p = skipwhite(arg + 2);
if (STRNCMP(p, "trim", 4) == 0)
{
// Ignore leading white space.
p = skipwhite(p + 4);
heredoc_trimmed = vim_strnsave(theline,
skipwhite(theline) - theline);
}
if (*p == NUL)
skip_until = vim_strsave((char_u *)".");
else
skip_until = vim_strnsave(p, skiptowhite(p) - p);
getline_options = GETLINE_NONE;
is_heredoc = TRUE;
if (eap->cmdidx == CMD_def)
heredoc_concat_len = newlines->ga_len + 1;
}
// Check for ":cmd v =<< [trim] EOF"
// and ":cmd [a, b] =<< [trim] EOF"
// and "lines =<< [trim] EOF" for Vim9
// Where "cmd" can be "let", "var", "final" or "const".
arg = skipwhite(skiptowhite(p));
if (*arg == '[')
arg = vim_strchr(arg, ']');
if (arg != NULL)
{
int found = (eap->cmdidx == CMD_def && arg[0] == '='
&& arg[1] == '<' && arg[2] =='<');
if (!found)
// skip over the argument after "cmd"
arg = skipwhite(skiptowhite(arg));
if (found || (arg[0] == '=' && arg[1] == '<' && arg[2] =='<'
&& (checkforcmd(&p, "let", 2)
|| checkforcmd(&p, "var", 3)
|| checkforcmd(&p, "final", 5)
|| checkforcmd(&p, "const", 5))))
{
p = skipwhite(arg + 3);
if (STRNCMP(p, "trim", 4) == 0)
{
// Ignore leading white space.
p = skipwhite(p + 4);
heredoc_trimmed = vim_strnsave(theline,
skipwhite(theline) - theline);
}
skip_until = vim_strnsave(p, skiptowhite(p) - p);
getline_options = GETLINE_NONE;
is_heredoc = TRUE;
}
}
}
// Add the line to the function.
if (ga_grow(newlines, 1 + sourcing_lnum_off) == FAIL)
goto theend;
if (heredoc_concat_len > 0)
{
	// For a :def function "python << EOF" concatenates all the lines,
// to be used for the instruction later.
ga_concat(&heredoc_ga, theline);
ga_concat(&heredoc_ga, (char_u *)"\n");
p = vim_strsave((char_u *)"");
}
else
{
// Copy the line to newly allocated memory. get_one_sourceline()
// allocates 250 bytes per line, this saves 80% on average. The
// cost is an extra alloc/free.
p = vim_strsave(theline);
}
if (p == NULL)
goto theend;
((char_u **)(newlines->ga_data))[newlines->ga_len++] = p;
// Add NULL lines for continuation lines, so that the line count is
// equal to the index in the growarray.
while (sourcing_lnum_off-- > 0)
((char_u **)(newlines->ga_data))[newlines->ga_len++] = NULL;
// Check for end of eap->arg.
if (line_arg != NULL && *line_arg == NUL)
line_arg = NULL;
}
// Return OK when no error was detected.
if (!did_emsg)
ret = OK;
theend:
vim_free(skip_until);
vim_free(heredoc_trimmed);
vim_free(heredoc_ga.ga_data);
need_wait_return |= saved_wait_return;
return ret;
}
|
Safe
|
[
"CWE-416"
] |
vim
|
9c23f9bb5fe435b28245ba8ac65aa0ca6b902c04
|
3.1298260357300694e+38
| 426 |
patch 8.2.3902: Vim9: double free with nested :def function
Problem: Vim9: double free with nested :def function.
Solution: Pass "line_to_free" from compile_def_function() and make sure
cmdlinep is valid.
| 0 |
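The double free fixed here comes from two owners of one allocation: eap->cmdlinep and line_to_free can alias the same line, so freeing through both frees it twice. A tiny self-contained illustration of the ownership-handoff pattern the fix uses (names are illustrative, not Vim's API):

#include <stdlib.h>

/* Hand a new line to the caller while keeping both pointers consistent,
 * so exactly one owner ever frees each allocation. */
static void replace_line(char **cmdlinep, char **line_to_free, char *theline)
{
    if (*cmdlinep == *line_to_free)
        *cmdlinep = theline;      /* keep the alias pointing at live memory */
    free(*line_to_free);          /* frees the old line exactly once */
    *line_to_free = theline;      /* single owner of the new line */
}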
void parseInclusion(ParseContext* ctx,
BSONElement elem,
ProjectionPathASTNode* parent,
boost::optional<FieldPath> fullPathToParent) {
// There are special rules about _id being included. _id may be included in both inclusion and
// exclusion projections.
const bool isTopLevelIdProjection = elem.fieldNameStringData() == "_id" && parent->isRoot();
const bool hasPositional = hasPositionalOperator(elem.fieldNameStringData());
if (!hasPositional) {
FieldPath path(elem.fieldNameStringData());
addNodeAtPath(parent, path, std::make_unique<BooleanConstantASTNode>(true));
if (isTopLevelIdProjection) {
ctx->idIncludedEntirely = true;
}
} else {
verifyComputedFieldsAllowed(ctx->policies);
uassert(31276,
"Cannot specify more than one positional projection per query.",
!ctx->hasPositional);
uassert(31256, "Cannot specify positional operator and $elemMatch.", !ctx->hasElemMatch);
uassert(51050, "Projections with a positional operator require a matcher", ctx->query);
// Get everything up to the first positional operator.
// Should at least be ".$"
StringData elemFieldName = elem.fieldNameStringData();
invariant(elemFieldName.size() > 2);
StringData pathWithoutPositionalOperator =
elemFieldName.substr(0, elemFieldName.size() - 2);
FieldPath path(pathWithoutPositionalOperator);
auto matcher = CopyableMatchExpression{ctx->queryObj,
ctx->expCtx,
std::make_unique<ExtensionsCallbackNoop>(),
MatchExpressionParser::kBanAllSpecialFeatures,
true /* optimize expression */};
invariant(ctx->query);
addNodeAtPath(parent,
path,
std::make_unique<ProjectionPositionalASTNode>(
std::make_unique<MatchExpressionASTNode>(matcher)));
ctx->hasPositional = true;
}
if (!isTopLevelIdProjection) {
uassert(31253,
str::stream() << "Cannot do inclusion on field " << elem.fieldNameStringData()
<< " in exclusion projection",
!ctx->type || *ctx->type == ProjectType::kInclusion);
ctx->type = ProjectType::kInclusion;
}
}
|
Vulnerable
|
[
"CWE-732"
] |
mongo
|
cd583b6c4d8aa2364f255992708b9bb54e110cf4
|
9.683439742456502e+37
| 59 |
SERVER-53929 Add stricter parser checks around positional projection
| 1 |
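The parser above hard-asserts (invariant) that a positional field name is longer than ".$"; the commit title suggests replacing crash-on-bad-input with stricter, recoverable checks. A generic hedged sketch of that principle in C (the error code is illustrative; MongoDB's actual change is not shown here):

#include <string.h>

#define ERR_BAD_PROJECTION (-31394)   /* illustrative error code */

/* Validate user-controlled input with a graceful error, never an assert. */
static int check_positional_field(const char *name)
{
    size_t len = strlen(name);

    /* must have at least one character before the trailing ".$" */
    if (len <= 2 || strcmp(name + len - 2, ".$") != 0)
        return ERR_BAD_PROJECTION;
    return 0;
}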
static void core_analysis_color_curr_node(RzCore *core, RzAnalysisBlock *bbi) {
bool color_current = rz_config_get_i(core->config, "graph.gv.current");
char *pal_curr = palColorFor("graph.current");
bool current = rz_analysis_block_contains(bbi, core->offset);
if (current && color_current) {
rz_cons_printf("\t\"0x%08" PFMT64x "\" ", bbi->addr);
rz_cons_printf("\t[fillcolor=%s style=filled shape=box];\n", pal_curr);
}
free(pal_curr);
}
|
Safe
|
[
"CWE-703"
] |
rizin
|
6ce71d8aa3dafe3cdb52d5d72ae8f4b95916f939
|
2.0517224639132005e+38
| 11 |
Initialize retctx,ctx before freeing the inner elements
In rz_core_analysis_type_match retctx structure was initialized on the
stack only after a "goto out_function", where a field of that structure
was freed. When the goto path is taken, the field is not properly
initialized, which can cause a crash of Rizin or have other effects.
Fixes: CVE-2021-4022
| 0 |
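The rizin fix is the classic init-before-goto rule: any object freed on an error path must be initialized before the first jump that can reach that path. A minimal self-contained sketch:

#include <stdlib.h>

struct ctx { char *buf; };

int run(int fail_early)
{
    struct ctx c = { 0 };      /* initialize before any goto can fire */
    int ret = -1;

    if (fail_early)
        goto out;              /* c.buf is NULL here, so free() is safe */

    c.buf = malloc(64);
    if (!c.buf)
        goto out;

    ret = 0;
out:
    free(c.buf);               /* free(NULL) is a no-op */
    return ret;
}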
ExecAssignProjectionInfo(PlanState *planstate,
TupleDesc inputDesc)
{
planstate->ps_ProjInfo =
ExecBuildProjectionInfo(planstate->targetlist,
planstate->ps_ExprContext,
planstate->ps_ResultTupleSlot,
inputDesc);
}
|
Safe
|
[
"CWE-209"
] |
postgres
|
804b6b6db4dcfc590a468e7be390738f9f7755fb
|
6.54395232195693e+37
| 9 |
Fix column-privilege leak in error-message paths
While building error messages to return to the user,
BuildIndexValueDescription, ExecBuildSlotValueDescription and
ri_ReportViolation would happily include the entire key or entire row in
the result returned to the user, even if the user didn't have access to
view all of the columns being included.
Instead, include only those columns which the user is providing or which
the user has select rights on. If the user does not have any rights
to view the table or any of the columns involved then no detail is
provided and a NULL value is returned from BuildIndexValueDescription
and ExecBuildSlotValueDescription. Note that, for key cases, the user
must have access to all of the columns for the key to be shown; a
partial key will not be returned.
Further, in master only, do not return any data for cases where row
security is enabled on the relation and row security should be applied
for the user. This required a bit of refactoring and moving of things
around related to RLS- note the addition of utils/misc/rls.c.
Back-patch all the way, as column-level privileges are now in all
supported versions.
This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
| 0 |
void Item_hex_constant::hex_string_init(const char *str, uint str_length)
{
max_length=(str_length+1)/2;
char *ptr=(char*) sql_alloc(max_length+1);
if (!ptr)
{
str_value.set("", 0, &my_charset_bin);
return;
}
str_value.set(ptr,max_length,&my_charset_bin);
char *end=ptr+max_length;
if (max_length*2 != str_length)
*ptr++=char_val(*str++); // Not even, assume 0 prefix
while (ptr != end)
{
*ptr++= (char) (char_val(str[0])*16+char_val(str[1]));
str+=2;
}
*ptr=0; // Keep purify happy
collation.set(&my_charset_bin, DERIVATION_COERCIBLE);
fixed= 1;
unsigned_flag= 1;
}
|
Safe
|
[] |
server
|
b000e169562697aa072600695d4f0c0412f94f4f
|
3.0355043454537924e+38
| 23 |
Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST, COL), NAME_CONST('NAME', NULL))
based on:
commit f7316aa0c9a
Author: Ajo Robert <ajo.robert@oracle.com>
Date: Thu Aug 24 17:03:21 2017 +0530
Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST,
COL), NAME_CONST('NAME', NULL))
Backport of Bug#19143243 fix.
NAME_CONST item can return NULL_ITEM type in case of incorrect arguments.
NULL_ITEM has special processing in Item_func_in function.
In Item_func_in::fix_length_and_dec an array of possible comparators is
created. Since NAME_CONST function has NULL_ITEM type, corresponding
array element is empty. Then NAME_CONST is wrapped to ITEM_CACHE.
ITEM_CACHE cannot return the proper type (NULL_ITEM) in Item_func_in::val_int(),
so the NULL_ITEM ends up being compared with an empty comparator.
The fix is to disable the caching of Item_name_const item.
| 0 |
Image::UniquePtr newJp2Instance(BasicIo::UniquePtr io, bool create)
{
Image::UniquePtr image(new Jp2Image(std::move(io), create));
if (!image->good())
{
image.reset();
}
return image;
}
|
Safe
|
[
"CWE-703",
"CWE-787"
] |
exiv2
|
f9308839198aca5e68a65194f151a1de92398f54
|
2.4585120398583393e+38
| 9 |
Better bounds checking in Jp2Image::encodeJp2Header()
| 0 |
ClientHttpRequest::noteAdaptationAnswer(const Adaptation::Answer &answer)
{
assert(cbdataReferenceValid(this)); // indicates bug
clearAdaptation(virginHeadSource);
assert(!adaptedBodySource);
switch (answer.kind) {
case Adaptation::Answer::akForward:
handleAdaptedHeader(const_cast<HttpMsg*>(answer.message.getRaw()));
break;
case Adaptation::Answer::akBlock:
handleAdaptationBlock(answer);
break;
case Adaptation::Answer::akError:
handleAdaptationFailure(ERR_DETAIL_CLT_REQMOD_ABORT, !answer.final);
break;
}
}
|
Safe
|
[
"CWE-116"
] |
squid
|
e7cf864f938f24eea8af0692c04d16790983c823
|
1.0659400630423344e+38
| 20 |
Handle more Range requests (#790)
Also removed some effectively unused code.
| 0 |
static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
{
unsigned long prev_energy = ULONG_MAX, best_energy = ULONG_MAX;
struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
int cpu, best_energy_cpu = prev_cpu;
struct perf_domain *head, *pd;
unsigned long cpu_cap, util;
struct sched_domain *sd;
rcu_read_lock();
pd = rcu_dereference(rd->pd);
if (!pd || READ_ONCE(rd->overutilized))
goto fail;
head = pd;
/*
* Energy-aware wake-up happens on the lowest sched_domain starting
* from sd_asym_cpucapacity spanning over this_cpu and prev_cpu.
*/
sd = rcu_dereference(*this_cpu_ptr(&sd_asym_cpucapacity));
while (sd && !cpumask_test_cpu(prev_cpu, sched_domain_span(sd)))
sd = sd->parent;
if (!sd)
goto fail;
sync_entity_load_avg(&p->se);
if (!task_util_est(p))
goto unlock;
for (; pd; pd = pd->next) {
unsigned long cur_energy, spare_cap, max_spare_cap = 0;
int max_spare_cap_cpu = -1;
for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
continue;
/* Skip CPUs that will be overutilized. */
util = cpu_util_next(cpu, p, cpu);
cpu_cap = capacity_of(cpu);
if (cpu_cap * 1024 < util * capacity_margin)
continue;
/* Always use prev_cpu as a candidate. */
if (cpu == prev_cpu) {
prev_energy = compute_energy(p, prev_cpu, head);
best_energy = min(best_energy, prev_energy);
continue;
}
/*
* Find the CPU with the maximum spare capacity in
* the performance domain
*/
spare_cap = cpu_cap - util;
if (spare_cap > max_spare_cap) {
max_spare_cap = spare_cap;
max_spare_cap_cpu = cpu;
}
}
/* Evaluate the energy impact of using this CPU. */
if (max_spare_cap_cpu >= 0) {
cur_energy = compute_energy(p, max_spare_cap_cpu, head);
if (cur_energy < best_energy) {
best_energy = cur_energy;
best_energy_cpu = max_spare_cap_cpu;
}
}
}
unlock:
rcu_read_unlock();
/*
* Pick the best CPU if prev_cpu cannot be used, or if it saves at
* least 6% of the energy used by prev_cpu.
*/
if (prev_energy == ULONG_MAX)
return best_energy_cpu;
if ((prev_energy - best_energy) > (prev_energy >> 4))
return best_energy_cpu;
return prev_cpu;
fail:
rcu_read_unlock();
return -1;
}
|
Safe
|
[
"CWE-400",
"CWE-703",
"CWE-835"
] |
linux
|
c40f7d74c741a907cfaeb73a7697081881c497d0
|
3.9543679388954226e+37
| 90 |
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the
scheduler under high loads, starting at around the v4.18 time frame,
and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list
manipulation.
Do a (manual) revert of:
a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
It turns out that the list_del_leaf_cfs_rq() introduced by this commit
is a surprising property that was not considered in followup commits
such as:
9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list")
As Vincent Guittot explains:
"I think that there is a bigger problem with commit a9e7f6544b9c and
cfs_rq throttling:
Let's take the example of the following topology TG2 --> TG1 --> root:
1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1
cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in
one path because it has never been used and can't be throttled so
tmp_alone_branch will point to leaf_cfs_rq_list at the end.
2) Then TG1 is throttled
3) and we add TG3 as a new child of TG1.
4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1
cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list.
With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list.
So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1
cfs_rq is removed from the list.
Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list
but tmp_alone_branch still points to TG3 cfs_rq because its throttled
parent can't be enqueued when the lock is released.
tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should.
So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch
points on another TG cfs_rq, the next TG cfs_rq that will be added,
will be linked outside rq->leaf_cfs_rq_list - which is bad.
In addition, we can break the ordering of the cfs_rq in
rq->leaf_cfs_rq_list but this ordering is used to update and
propagate the update from leaf down to root."
Instead of trying to work through all these cases and trying to reproduce
the very high loads that produced the lockup to begin with, simplify
the code temporarily by reverting a9e7f6544b9c - which change was clearly
not thought through completely.
This (hopefully) gives us a kernel that doesn't lock up so people
can continue to enjoy their holidays without worrying about regressions. ;-)
[ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ]
Analyzed-by: Xie XiuQi <xiexiuqi@huawei.com>
Analyzed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reported-by: Zhipeng Xie <xiezhipeng1@huawei.com>
Reported-by: Sargun Dhillon <sargun@sargun.me>
Reported-by: Xie XiuQi <xiexiuqi@huawei.com>
Tested-by: Zhipeng Xie <xiezhipeng1@huawei.com>
Tested-by: Sargun Dhillon <sargun@sargun.me>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: <stable@vger.kernel.org> # v4.13+
Cc: Bin Li <huawei.libin@huawei.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
Link: http://lkml.kernel.org/r/1545879866-27809-1-git-send-email-xiexiuqi@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
| 0 |
EXPORTED int mboxlist_unsetquota(const char *root)
{
struct quota q;
int r=0;
if (!root[0] || root[0] == '.' || strchr(root, '/')
|| strchr(root, '*') || strchr(root, '%') || strchr(root, '?')) {
return IMAP_MAILBOX_BADNAME;
}
quota_init(&q, root);
r = quota_read(&q, NULL, 0);
/* already unset */
if (r == IMAP_QUOTAROOT_NONEXISTENT) {
r = 0;
goto done;
}
if (r) goto done;
r = quota_changelock();
/*
* Have to remove it from all affected mailboxes
*/
mboxlist_mboxtree(root, mboxlist_rmquota, (void *)root, /*flags*/0);
r = quota_deleteroot(root);
quota_changelockrelease();
if (!r) sync_log_quota(root);
done:
quota_free(&q);
return r;
}
|
Safe
|
[
"CWE-20"
] |
cyrus-imapd
|
6bd33275368edfa71ae117de895488584678ac79
|
2.651640203961328e+38
| 35 |
mboxlist: fix uninitialised memory use where pattern is "Other Users"
| 0 |
e_ews_connection_create_folder (EEwsConnection *cnc,
gint pri,
const gchar *parent_folder_id,
gboolean is_distinguished_id,
const gchar *folder_name,
EEwsFolderType folder_type,
GCancellable *cancellable,
GAsyncReadyCallback callback,
gpointer user_data)
{
ESoapMessage *msg;
GSimpleAsyncResult *simple;
EwsAsyncData *async_data;
const gchar *folder_element;
g_return_if_fail (cnc != NULL);
msg = e_ews_message_new_with_header (
cnc->priv->settings,
cnc->priv->uri,
cnc->priv->impersonate_user,
"CreateFolder",
NULL,
NULL,
cnc->priv->version,
E_EWS_EXCHANGE_2007_SP1,
FALSE,
TRUE);
e_soap_message_start_element (msg, "ParentFolderId", "messages", NULL);
/* If NULL passed for parent_folder_id, use "msgfolderroot" */
if (is_distinguished_id || !parent_folder_id) {
e_soap_message_start_element (msg, "DistinguishedFolderId", NULL, NULL);
e_soap_message_add_attribute (
msg, "Id", parent_folder_id ? parent_folder_id : "msgfolderroot", NULL, NULL);
if (is_distinguished_id && cnc->priv->email) {
e_soap_message_start_element (msg, "Mailbox", NULL, NULL);
e_ews_message_write_string_parameter(
msg, "EmailAddress", NULL, cnc->priv->email);
e_soap_message_end_element (msg);
}
e_soap_message_end_element (msg);
} else {
e_ews_message_write_string_parameter_with_attribute (msg, "FolderId", NULL, NULL, "Id", parent_folder_id);
}
e_soap_message_end_element (msg);
switch (folder_type) {
case E_EWS_FOLDER_TYPE_MAILBOX:
folder_element = "Folder";
break;
case E_EWS_FOLDER_TYPE_CALENDAR:
folder_element = "CalendarFolder";
break;
case E_EWS_FOLDER_TYPE_CONTACTS:
folder_element = "ContactsFolder";
break;
case E_EWS_FOLDER_TYPE_SEARCH:
folder_element = "SearchFolder";
break;
case E_EWS_FOLDER_TYPE_TASKS:
folder_element = "TasksFolder";
break;
default:
g_warn_if_reached ();
folder_element = "Folder";
break;
}
e_soap_message_start_element (msg, "Folders", "messages", NULL);
e_soap_message_start_element (msg, folder_element, NULL, NULL);
e_ews_message_write_string_parameter (msg, "DisplayName", NULL, folder_name);
e_soap_message_end_element (msg);
e_soap_message_end_element (msg);
e_ews_message_write_footer (msg);
simple = g_simple_async_result_new (
G_OBJECT (cnc), callback, user_data,
e_ews_connection_create_folder);
async_data = g_new0 (EwsAsyncData, 1);
async_data->folder_type = folder_type;
g_simple_async_result_set_op_res_gpointer (
simple, async_data, (GDestroyNotify) async_data_free);
e_ews_connection_queue_request (
cnc, msg, create_folder_response_cb,
pri, cancellable, simple);
g_object_unref (simple);
}
|
Safe
|
[
"CWE-295"
] |
evolution-ews
|
915226eca9454b8b3e5adb6f2fff9698451778de
|
1.71913015429551e+38
| 96 |
I#27 - SSL Certificates are not validated
This depends on https://gitlab.gnome.org/GNOME/evolution-data-server/commit/6672b8236139bd6ef41ecb915f4c72e2a052dba5 too.
Closes https://gitlab.gnome.org/GNOME/evolution-ews/issues/27
| 0 |
int luaRedisSetReplCommand(lua_State *lua) {
int argc = lua_gettop(lua);
int flags;
if (server.lua_replicate_commands == 0) {
lua_pushstring(lua, "You can set the replication behavior only after turning on single commands replication with redis.replicate_commands().");
return lua_error(lua);
} else if (argc != 1) {
lua_pushstring(lua, "redis.set_repl() requires two arguments.");
return lua_error(lua);
}
flags = lua_tonumber(lua,-1);
if ((flags & ~(PROPAGATE_AOF|PROPAGATE_REPL)) != 0) {
lua_pushstring(lua, "Invalid replication flags. Use REPL_AOF, REPL_REPLICA, REPL_ALL or REPL_NONE.");
return lua_error(lua);
}
server.lua_repl = flags;
return 0;
}
|
Safe
|
[
"CWE-703",
"CWE-125"
] |
redis
|
6ac3c0b7abd35f37201ed2d6298ecef4ea1ae1dd
|
2.947647071798331e+38
| 20 |
Fix protocol parsing on 'ldbReplParseCommand' (CVE-2021-32672)
The protocol parsing in 'ldbReplParseCommand' (LUA debugging)
assumed protocol correctness. This means that if the following
is given:
*1
$100
test
The parser will try to read additional 94 unallocated bytes after
the client buffer.
This commit fixes this issue by validating that there are actually enough
bytes to read. It also limits the amount of data that can be sent by
the debugger client to 1M so the client will not be able to explode
the memory.
| 0 |
llsec_key_get(struct mac802154_llsec_key *key)
{
kref_get(&key->ref);
return key;
}
|
Safe
|
[
"CWE-416"
] |
linux
|
1165affd484889d4986cf3b724318935a0b120d8
|
3.084243202671471e+38
| 5 |
net: mac802154: Fix general protection fault
syzbot found a general protection fault in crypto_destroy_tfm() [1].
It was caused by a wrong cleanup loop in llsec_key_alloc():
if one of the tfm array members is in the IS_ERR() range, it will
cause a general protection fault in the cleanup function [1].
Call Trace:
crypto_free_aead include/crypto/aead.h:191 [inline] [1]
llsec_key_alloc net/mac802154/llsec.c:156 [inline]
mac802154_llsec_key_add+0x9e0/0xcc0 net/mac802154/llsec.c:249
ieee802154_add_llsec_key+0x56/0x80 net/mac802154/cfg.c:338
rdev_add_llsec_key net/ieee802154/rdev-ops.h:260 [inline]
nl802154_add_llsec_key+0x3d3/0x560 net/ieee802154/nl802154.c:1584
genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:739
genl_family_rcv_msg net/netlink/genetlink.c:783 [inline]
genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:800
netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502
genl_rcv+0x24/0x40 net/netlink/genetlink.c:811
netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline]
netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1338
netlink_sendmsg+0x856/0xd90 net/netlink/af_netlink.c:1927
sock_sendmsg_nosec net/socket.c:654 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:674
____sys_sendmsg+0x6e8/0x810 net/socket.c:2350
___sys_sendmsg+0xf3/0x170 net/socket.c:2404
__sys_sendmsg+0xe5/0x1b0 net/socket.c:2433
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xae
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Reported-by: syzbot+9ec037722d2603a9f52e@syzkaller.appspotmail.com
Acked-by: Alexander Aring <aahringo@redhat.com>
Link: https://lore.kernel.org/r/20210304152125.1052825-1-paskripkin@gmail.com
Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
| 0 |
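The fix the message describes replaces a cleanup loop that blindly freed every tfm slot with one that skips entries holding IS_ERR() values. A hedged kernel-style sketch of that error path (label name and surrounding context assumed):

err_tfm:
	for (i = 0; i < ARRAY_SIZE(key->tfm); i++)
		if (!IS_ERR_OR_NULL(key->tfm[i]))
			crypto_free_aead(key->tfm[i]);	/* skip IS_ERR() slots */
	return NULL;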