func (string) | target (string) | cwe (list) | project (string) | commit_id (string) | hash (string) | size (int64) | message (string) | vul (int64)
---|---|---|---|---|---|---|---|---
static inline int security_inode_alloc(struct inode *inode)
{
return 0;
}
|
Safe
|
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
|
5.898259668173664e+37
| 4 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
| 0 |
//! Linearly normalize pixel values \newinstance.
CImg<Tfloat> get_normalize(const T& min_value, const T& max_value) const {
return CImg<Tfloat>(*this,false).normalize((Tfloat)min_value,(Tfloat)max_value);
|
Safe
|
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
|
1.1093553261923413e+37
| 3 |
Fix other issues in 'CImg<T>::load_bmp()'.
| 0 |
TEST_F(RouterTest, ResponseCodeDetailsSetByUpstream) {
NiceMock<Http::MockRequestEncoder> encoder1;
Http::ResponseDecoder* response_decoder = nullptr;
EXPECT_CALL(cm_.thread_local_cluster_.conn_pool_, newStream(_, _))
.WillOnce(Invoke(
[&](Http::ResponseDecoder& decoder,
Http::ConnectionPool::Callbacks& callbacks) -> Http::ConnectionPool::Cancellable* {
response_decoder = &decoder;
callbacks.onPoolReady(encoder1, cm_.thread_local_cluster_.conn_pool_.host_,
upstream_stream_info_, Http::Protocol::Http10);
return nullptr;
}));
expectResponseTimerCreate();
Http::TestRequestHeaderMapImpl headers;
HttpTestUtility::addDefaultHeaders(headers);
router_.decodeHeaders(headers, true);
Http::ResponseHeaderMapPtr response_headers(
new Http::TestResponseHeaderMapImpl{{":status", "200"}});
response_decoder->decodeHeaders(std::move(response_headers), true);
EXPECT_TRUE(verifyHostUpstreamStats(1, 0));
}
|
Safe
|
[
"CWE-703"
] |
envoy
|
18871dbfb168d3512a10c78dd267ff7c03f564c6
|
5.316902312600517e+37
| 23 |
[1.18] CVE-2022-21655
Crash with direct_response
Signed-off-by: Otto van der Schaaf <ovanders@redhat.com>
| 0 |
void CLASS parse_ciff(int offset, int length, int depth)
{
int tboff, nrecs, c, type, len, save, wbi = -1;
ushort key[] = {0x410, 0x45f3};
fseek(ifp, offset + length - 4, SEEK_SET);
tboff = get4() + offset;
fseek(ifp, tboff, SEEK_SET);
nrecs = get2();
if ((nrecs | depth) > 127)
return;
while (nrecs--)
{
type = get2();
len = get4();
save = ftell(ifp) + 4;
fseek(ifp, offset + get4(), SEEK_SET);
if ((((type >> 8) + 8) | 8) == 0x38)
{
parse_ciff(ftell(ifp), len, depth + 1); /* Parse a sub-table */
}
#ifdef LIBRAW_LIBRARY_BUILD
if (type == 0x3004)
parse_ciff(ftell(ifp), len, depth + 1);
#endif
if (type == 0x0810)
fread(artist, 64, 1, ifp);
if (type == 0x080a)
{
fread(make, 64, 1, ifp);
fseek(ifp, strbuflen(make) - 63, SEEK_CUR);
fread(model, 64, 1, ifp);
}
if (type == 0x1810)
{
width = get4();
height = get4();
pixel_aspect = int_to_float(get4());
flip = get4();
}
if (type == 0x1835) /* Get the decoder table */
tiff_compress = get4();
if (type == 0x2007)
{
thumb_offset = ftell(ifp);
thumb_length = len;
}
if (type == 0x1818)
{
shutter = powf64(2.0f, -int_to_float((get4(), get4())));
aperture = powf64(2.0f, int_to_float(get4()) / 2);
#ifdef LIBRAW_LIBRARY_BUILD
imgdata.lens.makernotes.CurAp = aperture;
#endif
}
if (type == 0x102a)
{
// iso_speed = pow (2.0, (get4(),get2())/32.0 - 4) * 50;
iso_speed = powf64(2.0f, ((get2(), get2()) + get2()) / 32.0f - 5.0f) * 100.0f;
#ifdef LIBRAW_LIBRARY_BUILD
aperture = _CanonConvertAperture((get2(), get2()));
imgdata.lens.makernotes.CurAp = aperture;
#else
aperture = powf64(2.0, (get2(), (short)get2()) / 64.0);
#endif
shutter = powf64(2.0, -((short)get2()) / 32.0);
wbi = (get2(), get2());
if (wbi > 17)
wbi = 0;
fseek(ifp, 32, SEEK_CUR);
if (shutter > 1e6)
shutter = get2() / 10.0;
}
if (type == 0x102c)
{
if (get2() > 512)
{ /* Pro90, G1 */
fseek(ifp, 118, SEEK_CUR);
FORC4 cam_mul[c ^ 2] = get2();
}
else
{ /* G2, S30, S40 */
fseek(ifp, 98, SEEK_CUR);
FORC4 cam_mul[c ^ (c >> 1) ^ 1] = get2();
}
}
#ifdef LIBRAW_LIBRARY_BUILD
if (type == 0x10a9)
{
INT64 o = ftell(ifp);
fseek(ifp, (0x1 << 1), SEEK_CUR);
FORC4 imgdata.color.WB_Coeffs[LIBRAW_WBI_Auto][c ^ (c >> 1)] = get2();
Canon_WBpresets(0, 0);
fseek(ifp, o, SEEK_SET);
}
if (type == 0x102d)
{
INT64 o = ftell(ifp);
Canon_CameraSettings();
fseek(ifp, o, SEEK_SET);
}
if (type == 0x580b)
{
if (strcmp(model, "Canon EOS D30"))
sprintf(imgdata.shootinginfo.BodySerial, "%d", len);
else
sprintf(imgdata.shootinginfo.BodySerial, "%0x-%05d", len >> 16, len & 0xffff);
}
#endif
if (type == 0x0032)
{
if (len == 768)
{ /* EOS D30 */
fseek(ifp, 72, SEEK_CUR);
FORC4 cam_mul[c ^ (c >> 1)] = 1024.0 / get2();
if (!wbi)
cam_mul[0] = -1; /* use my auto white balance */
}
else if (!cam_mul[0])
{
if (get2() == key[0]) /* Pro1, G6, S60, S70 */
c = (strstr(model, "Pro1") ? "012346000000000000" : "01345:000000006008")[LIM(0, wbi, 17)] - '0' + 2;
else
{ /* G3, G5, S45, S50 */
c = "023457000000006000"[LIM(0, wbi, 17)] - '0';
key[0] = key[1] = 0;
}
fseek(ifp, 78 + c * 8, SEEK_CUR);
FORC4 cam_mul[c ^ (c >> 1) ^ 1] = get2() ^ key[c & 1];
if (!wbi)
cam_mul[0] = -1;
}
}
if (type == 0x10a9)
{ /* D60, 10D, 300D, and clones */
if (len > 66)
wbi = "0134567028"[LIM(0, wbi, 9)] - '0';
fseek(ifp, 2 + wbi * 8, SEEK_CUR);
FORC4 cam_mul[c ^ (c >> 1)] = get2();
}
if (type == 0x1030 && wbi >= 0 && (0x18040 >> wbi & 1))
ciff_block_1030(); /* all that don't have 0x10a9 */
if (type == 0x1031)
{
raw_width = (get2(), get2());
raw_height = get2();
}
if (type == 0x501c)
{
iso_speed = len & 0xffff;
}
if (type == 0x5029)
{
#ifdef LIBRAW_LIBRARY_BUILD
imgdata.lens.makernotes.CurFocal = len >> 16;
imgdata.lens.makernotes.FocalType = len & 0xffff;
if (imgdata.lens.makernotes.FocalType == 2)
{
imgdata.lens.makernotes.CanonFocalUnits = 32;
if (imgdata.lens.makernotes.CanonFocalUnits > 1)
imgdata.lens.makernotes.CurFocal /= (float)imgdata.lens.makernotes.CanonFocalUnits;
}
focal_len = imgdata.lens.makernotes.CurFocal;
#else
focal_len = len >> 16;
if ((len & 0xffff) == 2)
focal_len /= 32;
#endif
}
if (type == 0x5813)
flash_used = int_to_float(len);
if (type == 0x5814)
canon_ev = int_to_float(len);
if (type == 0x5817)
shot_order = len;
if (type == 0x5834)
{
unique_id = len;
#ifdef LIBRAW_LIBRARY_BUILD
unique_id = setCanonBodyFeatures(unique_id);
#endif
}
if (type == 0x580e)
timestamp = len;
if (type == 0x180e)
timestamp = get4();
#ifdef LOCALTIME
if ((type | 0x4000) == 0x580e)
timestamp = mktime(gmtime(&timestamp));
#endif
fseek(ifp, save, SEEK_SET);
}
}
|
Safe
|
[
"CWE-119",
"CWE-125"
] |
LibRaw
|
f1394822a0152ceed77815eafa5cac4e8baab10a
|
2.233812975586607e+38
| 193 |
SECUNIA advisory 76000 #1 (wrong fuji width set via tiff tag
| 0 |
sdap_ad_tokengroups_get_posix_members(TALLOC_CTX *mem_ctx,
struct sdap_ad_tokengroups_initgr_posix_state *state,
size_t num_sids,
char **sids,
size_t *_num_missing,
char ***_missing,
size_t *_num_valid,
char ***_valid_groups)
{
TALLOC_CTX *tmp_ctx = NULL;
struct sss_domain_info *domain = NULL;
struct ldb_message *msg = NULL;
const char *attrs[] = {SYSDB_NAME, NULL};
const char *name = NULL;
char *sid = NULL;
char **valid_groups = NULL;
size_t num_valid_groups;
char **missing_sids = NULL;
size_t num_missing_sids;
size_t i;
errno_t ret;
tmp_ctx = talloc_new(NULL);
if (tmp_ctx == NULL) {
DEBUG(SSSDBG_CRIT_FAILURE, "talloc_new() failed\n");
ret = ENOMEM;
goto done;
}
num_valid_groups = 0;
valid_groups = talloc_zero_array(tmp_ctx, char*, num_sids + 1);
if (valid_groups == NULL) {
ret = ENOMEM;
goto done;
}
num_missing_sids = 0;
missing_sids = talloc_zero_array(tmp_ctx, char*, num_sids + 1);
if (missing_sids == NULL) {
ret = ENOMEM;
goto done;
}
/* For each SID check if it is already present in the cache. If yes, we
* will get name of the group and update the membership. Otherwise we need
* to remember the SID and download missing groups one by one. */
for (i = 0; i < num_sids; i++) {
sid = sids[i];
DEBUG(SSSDBG_TRACE_LIBS, "Processing membership SID [%s]\n", sid);
domain = sss_get_domain_by_sid_ldap_fallback(state->domain, sid);
if (domain == NULL) {
DEBUG(SSSDBG_MINOR_FAILURE, "Domain not found for SID %s\n", sid);
continue;
}
ret = sysdb_search_group_by_sid_str(tmp_ctx, domain->sysdb, domain,
sid, attrs, &msg);
if (ret == EOK) {
/* we will update membership of this group */
name = ldb_msg_find_attr_as_string(msg, SYSDB_NAME, NULL);
if (name == NULL) {
DEBUG(SSSDBG_MINOR_FAILURE,
"Could not retrieve group name from sysdb\n");
ret = EINVAL;
goto done;
}
valid_groups[num_valid_groups] = sysdb_group_strdn(valid_groups,
domain->name,
name);
if (valid_groups[num_valid_groups] == NULL) {
ret = ENOMEM;
goto done;
}
num_valid_groups++;
} else if (ret == ENOENT) {
if (_missing != NULL) {
/* we need to download this group */
missing_sids[num_missing_sids] = talloc_steal(missing_sids,
sid);
num_missing_sids++;
DEBUG(SSSDBG_TRACE_FUNC, "Missing SID %s will be downloaded\n",
sid);
}
/* else: We have downloaded missing groups but some of them may
* remained missing because they are outside of search base. We
* will just ignore them and continue with the next group. */
} else {
DEBUG(SSSDBG_MINOR_FAILURE, "Could not look up SID %s in sysdb: "
"[%s]\n", sid, strerror(ret));
goto done;
}
}
valid_groups[num_valid_groups] = NULL;
missing_sids[num_missing_sids] = NULL;
/* return list of missing groups */
if (_missing != NULL) {
*_missing = talloc_steal(mem_ctx, missing_sids);
*_num_missing = num_missing_sids;
}
/* return list of missing groups */
if (_valid_groups != NULL) {
*_valid_groups = talloc_steal(mem_ctx, valid_groups);
*_num_valid = num_valid_groups;
}
ret = EOK;
done:
talloc_free(tmp_ctx);
return ret;
}
|
Safe
|
[
"CWE-264"
] |
sssd
|
191d7f7ce3de10d9e19eaa0a6ab3319bcd4ca95d
|
3.18511719583691e+38
| 118 |
AD: process non-posix nested groups using tokenGroups
When initgr is performed for AD supporting tokenGroups, do not skip
non-posix groups.
Resolves:
https://fedorahosted.org/sssd/ticket/2343
Reviewed-by: Michal Židek <mzidek@redhat.com>
(cherry picked from commit 4932db6258ccfb612a3a28eb6a618c2f042b9d58)
| 0 |
CImg<T> *data() {
return _data;
}
|
Safe
|
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
|
3.339041368032532e+38
| 3 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file does not exceed file size.
| 0 |
static struct sock *x25_alloc_socket(struct net *net, int kern)
{
struct x25_sock *x25;
struct sock *sk = sk_alloc(net, AF_X25, GFP_ATOMIC, &x25_proto, kern);
if (!sk)
goto out;
sock_init_data(NULL, sk);
x25 = x25_sk(sk);
skb_queue_head_init(&x25->ack_queue);
skb_queue_head_init(&x25->fragment_queue);
skb_queue_head_init(&x25->interrupt_in_queue);
skb_queue_head_init(&x25->interrupt_out_queue);
out:
return sk;
}
|
Safe
|
[] |
net
|
7781607938c8371d4c2b243527430241c62e39c2
|
1.4924839228746965e+38
| 18 |
net/x25: Fix null-ptr-deref caused by x25_disconnect
When the link layer is terminating, x25->neighbour will be set to NULL
in x25_disconnect(). As a result, it could cause null-ptr-deref bugs in
x25_sendmsg(),x25_recvmsg() and x25_connect(). One of the bugs is
shown below.
(Thread 1) | (Thread 2)
x25_link_terminated() | x25_recvmsg()
x25_kill_by_neigh() | ...
x25_disconnect() | lock_sock(sk)
... | ...
x25->neighbour = NULL //(1) |
... | x25->neighbour->extended //(2)
The code sets NULL to x25->neighbour in position (1) and dereferences
x25->neighbour in position (2), which could cause null-ptr-deref bug.
This patch adds lock_sock() in x25_kill_by_neigh() in order to synchronize
with x25_sendmsg(), x25_recvmsg() and x25_connect(). What`s more, the
sock held by lock_sock() is not NULL, because it is extracted from x25_list
and uses x25_list_lock to synchronize.
Fixes: 4becb7ee5b3d ("net/x25: Fix x25_neigh refcnt leak when x25 disconnect")
Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Reviewed-by: Lin Ma <linma@zju.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
static int snd_mem_proc_read(struct seq_file *seq, void *offset)
{
long pages = snd_allocated_pages >> (PAGE_SHIFT-12);
struct snd_mem_list *mem;
int devno;
static char *types[] = { "UNKNOWN", "CONT", "DEV", "DEV-SG", "SBUS" };
mutex_lock(&list_mutex);
seq_printf(seq, "pages : %li bytes (%li pages per %likB)\n",
pages * PAGE_SIZE, pages, PAGE_SIZE / 1024);
devno = 0;
list_for_each_entry(mem, &mem_list_head, list) {
devno++;
seq_printf(seq, "buffer %d : ID %08x : type %s\n",
devno, mem->id, types[mem->buffer.dev.type]);
seq_printf(seq, " addr = 0x%lx, size = %d bytes\n",
(unsigned long)mem->buffer.addr,
(int)mem->buffer.bytes);
}
mutex_unlock(&list_mutex);
return 0;
}
|
Safe
|
[] |
linux-2.6
|
ccec6e2c4a74adf76ed4e2478091a311b1806212
|
2.8760758032226622e+38
| 22 |
Convert snd-page-alloc proc file to use seq_file
Use seq_file for the proc file read/write of snd-page-alloc module.
This automatically fixes bugs in the old proc code.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
static int jp2_putuint16(jas_stream_t *out, uint_fast16_t val)
{
if (jas_stream_putc(out, (val >> 8) & 0xff) == EOF ||
jas_stream_putc(out, val & 0xff) == EOF) {
return -1;
}
return 0;
}
|
Safe
|
[
"CWE-189"
] |
jasper
|
3c55b399c36ef46befcb21e4ebc4799367f89684
|
2.031754187758568e+38
| 8 |
At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems.
| 0 |
static int binder_set_stop_on_user_error(const char *val,
const struct kernel_param *kp)
{
int ret;
ret = param_set_int(val, kp);
if (binder_stop_on_user_error < 2)
wake_up(&binder_user_error_wait);
return ret;
}
|
Safe
|
[
"CWE-416"
] |
linux
|
7bada55ab50697861eee6bb7d60b41e68a961a9c
|
1.0461768416654483e+38
| 10 |
binder: fix race that allows malicious free of live buffer
Malicious code can attempt to free buffers using the BC_FREE_BUFFER
ioctl to binder. There are protections against a user freeing a buffer
while in use by the kernel, however there was a window where
BC_FREE_BUFFER could be used to free a recently allocated buffer that
was not completely initialized. This resulted in a use-after-free
detected by KASAN with a malicious test program.
This window is closed by setting the buffer's allow_user_free attribute
to 0 when the buffer is allocated or when the user has previously freed
it instead of waiting for the caller to set it. The problem was that
when the struct buffer was recycled, allow_user_free was stale and set
to 1 allowing a free to go through.
Signed-off-by: Todd Kjos <tkjos@google.com>
Acked-by: Arve Hjønnevåg <arve@android.com>
Cc: stable <stable@vger.kernel.org> # 4.14
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
static int doDescribeResource(struct nc_state_t *nc, ncMetadata * pMeta, char *resourceType, ncResource ** outRes)
{
ncResource *res = NULL;
ncInstance *inst = NULL;
// stats to re-calculate now
long long mem_free = 0;
long long disk_free = 0;
int cores_free = 0;
// intermediate sums
long long sum_mem = 0; // for known domains: sum of requested memory
long long sum_disk = 0; // for known domains: sum of requested disk sizes
int sum_cores = 0; // for known domains: sum of requested cores
*outRes = NULL;
sem_p(inst_copy_sem);
while ((inst = get_instance(&global_instances_copy)) != NULL) {
if (inst->state == TEARDOWN)
continue; // they don't take up resources
sum_mem += inst->params.mem;
sum_disk += (inst->params.disk);
sum_cores += inst->params.cores;
}
sem_v(inst_copy_sem);
disk_free = nc->disk_max - sum_disk;
if (disk_free < 0)
disk_free = 0; // should not happen
cores_free = nc->cores_max - sum_cores; //! @todo should we -1 for dom0?
if (cores_free < 0)
cores_free = 0; // due to timesharing
mem_free = nc->mem_max - sum_mem;
if (mem_free < 0)
mem_free = 0; // should not happen
// check for potential overflow - should not happen
if (nc->mem_max > INT_MAX || mem_free > INT_MAX || nc->disk_max > INT_MAX || disk_free > INT_MAX) {
LOGERROR("stats integer overflow error (bump up the units?)\n");
LOGERROR(" memory: max=%-10lld free=%-10lld\n", nc->mem_max, mem_free);
LOGERROR(" disk: max=%-10lld free=%-10lld\n", nc->disk_max, disk_free);
LOGERROR(" cores: max=%-10lld free=%-10d\n", nc->cores_max, cores_free);
LOGERROR(" INT_MAX=%-10d\n", INT_MAX);
return EUCA_OVERFLOW_ERROR;
}
res = allocate_resource(nc->is_enabled ? "enabled" : "disabled",
nc->migration_capable, nc->iqn, nc->mem_max, mem_free, nc->disk_max, disk_free, nc->cores_max, cores_free, "none");
if (res == NULL) {
LOGERROR("out of memory\n");
return EUCA_MEMORY_ERROR;
}
*outRes = res;
LOGDEBUG("Core status: in-use %d physical %lld over-committed %s\n", sum_cores, nc->phy_max_cores, (((sum_cores - cores_free) > nc->phy_max_cores) ? "yes" : "no"));
LOGDEBUG("Memory status: in-use %lld physical %lld over-committed %s\n", sum_mem, nc->phy_max_mem, (((sum_mem - mem_free) > nc->phy_max_mem) ? "yes" : "no"));
LOGDEBUG("returning status=%s cores=%d/%d mem=%d/%d disk=%d/%d iqn=%s\n",
res->nodeStatus, res->numberOfCoresAvailable, res->numberOfCoresMax, res->memorySizeAvailable, res->memorySizeMax, res->diskSizeAvailable, res->diskSizeMax, res->iqn);
return EUCA_OK;
}
|
Safe
|
[] |
eucalyptus
|
c252889a46f41b4c396b89e005ec89836f2524be
|
7.846730889348525e+37
| 62 |
Input validation, shellout hardening on back-end
- validating bucketName and bucketPath in BundleInstance
- validating device name in Attach and DetachVolume
- removed some uses of system() and popen()
Fixes EUCA-7572, EUCA-7520
| 0 |
static NTSTATUS dcesrv_lsa_CREDRRENAME(struct dcesrv_call_state *dce_call, TALLOC_CTX *mem_ctx,
struct lsa_CREDRRENAME *r)
{
DCESRV_FAULT(DCERPC_FAULT_OP_RNG_ERROR);
}
|
Safe
|
[
"CWE-200"
] |
samba
|
0a3aa5f908e351201dc9c4d4807b09ed9eedff77
|
3.0053494356681567e+38
| 5 |
CVE-2022-32746 ldb: Make use of functions for appending to an ldb_message
This aims to minimise usage of the error-prone pattern of searching for
a just-added message element in order to make modifications to it (and
potentially finding the wrong element).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
| 0 |
ZEND_VM_HANDLER(113, ZEND_INIT_STATIC_METHOD_CALL, UNUSED|CLASS_FETCH|CONST|VAR, CONST|TMPVAR|UNUSED|CONSTRUCTOR|CV, NUM|CACHE_SLOT)
{
USE_OPLINE
zval *function_name;
zend_class_entry *ce;
uint32_t call_info;
zend_function *fbc;
zend_execute_data *call;
SAVE_OPLINE();
if (OP1_TYPE == IS_CONST) {
/* no function found. try a static method in class */
ce = CACHED_PTR(opline->result.num);
if (UNEXPECTED(ce == NULL)) {
ce = zend_fetch_class_by_name(Z_STR_P(RT_CONSTANT(opline, opline->op1)), Z_STR_P(RT_CONSTANT(opline, opline->op1) + 1), ZEND_FETCH_CLASS_DEFAULT | ZEND_FETCH_CLASS_EXCEPTION);
if (UNEXPECTED(ce == NULL)) {
ZEND_ASSERT(EG(exception));
FREE_UNFETCHED_OP2();
HANDLE_EXCEPTION();
}
if (OP2_TYPE != IS_CONST) {
CACHE_PTR(opline->result.num, ce);
}
}
} else if (OP1_TYPE == IS_UNUSED) {
ce = zend_fetch_class(NULL, opline->op1.num);
if (UNEXPECTED(ce == NULL)) {
ZEND_ASSERT(EG(exception));
FREE_UNFETCHED_OP2();
HANDLE_EXCEPTION();
}
} else {
ce = Z_CE_P(EX_VAR(opline->op1.var));
}
if (OP1_TYPE == IS_CONST &&
OP2_TYPE == IS_CONST &&
EXPECTED((fbc = CACHED_PTR(opline->result.num + sizeof(void*))) != NULL)) {
/* nothing to do */
} else if (OP1_TYPE != IS_CONST &&
OP2_TYPE == IS_CONST &&
EXPECTED(CACHED_PTR(opline->result.num) == ce)) {
fbc = CACHED_PTR(opline->result.num + sizeof(void*));
} else if (OP2_TYPE != IS_UNUSED) {
zend_free_op free_op2;
function_name = GET_OP2_ZVAL_PTR_UNDEF(BP_VAR_R);
if (OP2_TYPE != IS_CONST) {
if (UNEXPECTED(Z_TYPE_P(function_name) != IS_STRING)) {
do {
if (OP2_TYPE & (IS_VAR|IS_CV) && Z_ISREF_P(function_name)) {
function_name = Z_REFVAL_P(function_name);
if (EXPECTED(Z_TYPE_P(function_name) == IS_STRING)) {
break;
}
} else if (OP2_TYPE == IS_CV && UNEXPECTED(Z_TYPE_P(function_name) == IS_UNDEF)) {
ZVAL_UNDEFINED_OP2();
if (UNEXPECTED(EG(exception) != NULL)) {
HANDLE_EXCEPTION();
}
}
zend_throw_error(NULL, "Function name must be a string");
FREE_OP2();
HANDLE_EXCEPTION();
} while (0);
}
}
if (ce->get_static_method) {
fbc = ce->get_static_method(ce, Z_STR_P(function_name));
} else {
fbc = zend_std_get_static_method(ce, Z_STR_P(function_name), ((OP2_TYPE == IS_CONST) ? (RT_CONSTANT(opline, opline->op2) + 1) : NULL));
}
if (UNEXPECTED(fbc == NULL)) {
if (EXPECTED(!EG(exception))) {
zend_undefined_method(ce, Z_STR_P(function_name));
}
FREE_OP2();
HANDLE_EXCEPTION();
}
if (OP2_TYPE == IS_CONST &&
EXPECTED(fbc->type <= ZEND_USER_FUNCTION) &&
EXPECTED(!(fbc->common.fn_flags & (ZEND_ACC_CALL_VIA_TRAMPOLINE|ZEND_ACC_NEVER_CACHE)))) {
CACHE_POLYMORPHIC_PTR(opline->result.num, ce, fbc);
}
if (EXPECTED(fbc->type == ZEND_USER_FUNCTION) && UNEXPECTED(!RUN_TIME_CACHE(&fbc->op_array))) {
init_func_run_time_cache(&fbc->op_array);
}
if (OP2_TYPE != IS_CONST) {
FREE_OP2();
}
} else {
if (UNEXPECTED(ce->constructor == NULL)) {
zend_throw_error(NULL, "Cannot call constructor");
HANDLE_EXCEPTION();
}
if (Z_TYPE(EX(This)) == IS_OBJECT && Z_OBJ(EX(This))->ce != ce->constructor->common.scope && (ce->constructor->common.fn_flags & ZEND_ACC_PRIVATE)) {
zend_throw_error(NULL, "Cannot call private %s::__construct()", ZSTR_VAL(ce->name));
HANDLE_EXCEPTION();
}
fbc = ce->constructor;
if (EXPECTED(fbc->type == ZEND_USER_FUNCTION) && UNEXPECTED(!RUN_TIME_CACHE(&fbc->op_array))) {
init_func_run_time_cache(&fbc->op_array);
}
}
if (!(fbc->common.fn_flags & ZEND_ACC_STATIC)) {
if (Z_TYPE(EX(This)) == IS_OBJECT && instanceof_function(Z_OBJCE(EX(This)), ce)) {
ce = (zend_class_entry*)Z_OBJ(EX(This));
call_info = ZEND_CALL_NESTED_FUNCTION | ZEND_CALL_HAS_THIS;
} else {
zend_non_static_method_call(fbc);
if (UNEXPECTED(EG(exception) != NULL)) {
HANDLE_EXCEPTION();
}
ZEND_VM_C_GOTO(check_parent_and_self);
}
} else {
ZEND_VM_C_LABEL(check_parent_and_self):
/* previous opcode is ZEND_FETCH_CLASS */
if (OP1_TYPE == IS_UNUSED
&& ((opline->op1.num & ZEND_FETCH_CLASS_MASK) == ZEND_FETCH_CLASS_PARENT ||
(opline->op1.num & ZEND_FETCH_CLASS_MASK) == ZEND_FETCH_CLASS_SELF)) {
if (Z_TYPE(EX(This)) == IS_OBJECT) {
ce = Z_OBJCE(EX(This));
} else {
ce = Z_CE(EX(This));
}
}
call_info = ZEND_CALL_NESTED_FUNCTION;
}
call = zend_vm_stack_push_call_frame(call_info,
fbc, opline->extended_value, ce);
call->prev_execute_data = EX(call);
EX(call) = call;
ZEND_VM_NEXT_OPCODE();
}
|
Safe
|
[
"CWE-787"
] |
php-src
|
f1ce8d5f5839cb2069ea37ff424fb96b8cd6932d
|
7.389888525533534e+37
| 140 |
Fix #73122: Integer Overflow when concatenating strings
We must avoid integer overflows in memory allocations, so we introduce
an additional check in the VM, and bail out in the rare case of an
overflow. Since the recent fix for bug #74960 still doesn't catch all
possible overflows, we fix that right away.
| 0 |
int dbg_io_get_char(void)
{
int ret = dbg_io_ops->read_char();
if (ret == NO_POLL_CHAR)
return -1;
if (!dbg_kdb_mode)
return ret;
if (ret == 127)
return 8;
return ret;
}
|
Safe
|
[
"CWE-787"
] |
linux
|
eadb2f47a3ced5c64b23b90fd2a3463f63726066
|
1.5492504360610307e+38
| 11 |
lockdown: also lock down previous kgdb use
KGDB and KDB allow read and write access to kernel memory, and thus
should be restricted during lockdown. An attacker with access to a
serial port (for example, via a hypervisor console, which some cloud
vendors provide over the network) could trigger the debugger so it is
important that the debugger respect the lockdown mode when/if it is
triggered.
Fix this by integrating lockdown into kdb's existing permissions
mechanism. Unfortunately kgdb does not have any permissions mechanism
(although it certainly could be added later) so, for now, kgdb is simply
and brutally disabled by immediately exiting the gdb stub without taking
any action.
For lockdowns established early in the boot (e.g. the normal case) then
this should be fine but on systems where kgdb has set breakpoints before
the lockdown is enacted than "bad things" will happen.
CVE: CVE-2022-21499
Co-developed-by: Stephen Brennan <stephen.s.brennan@oracle.com>
Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
void PackLinuxElf32mipsel::defineSymbols(Filter const *ft)
{
PackLinuxElf32::defineSymbols(ft);
unsigned const hlen = sz_elf_hdrs + sizeof(l_info) + sizeof(p_info);
// We want to know if compressed data, plus stub, plus a couple pages,
// will fit below the uncompressed program in memory. But we don't
// know the final total compressed size yet, so use the uncompressed
// size (total over all PT_LOAD32) as an upper bound.
unsigned len = 0;
unsigned lo_va_user = ~0u; // infinity
for (int j= e_phnum; --j>=0; ) {
if (PT_LOAD32 == get_te32(&phdri[j].p_type)) {
len += (unsigned)get_te32(&phdri[j].p_filesz);
unsigned const va = get_te32(&phdri[j].p_vaddr);
if (va < lo_va_user) {
lo_va_user = va;
}
}
}
lsize = /*getLoaderSize()*/ 64 * 1024; // XXX: upper bound; avoid circularity
unsigned lo_va_stub = get_te32(&elfout.phdr[0].p_vaddr);
unsigned adrc;
unsigned adrm;
unsigned adru;
unsigned adrx;
unsigned lenm;
unsigned lenu;
len += (7&-lsize) + lsize;
is_big = (lo_va_user < (lo_va_stub + len + 2*page_size));
if (is_big) {
set_te32( &elfout.ehdr.e_entry,
get_te32(&elfout.ehdr.e_entry) + lo_va_user - lo_va_stub);
set_te32(&elfout.phdr[0].p_vaddr, lo_va_user);
set_te32(&elfout.phdr[0].p_paddr, lo_va_user);
lo_va_stub = lo_va_user;
adrc = lo_va_stub;
adrm = getbrk(phdri, e_phnum);
adru = page_mask & (~page_mask + adrm); // round up to page boundary
adrx = adru + hlen;
lenm = page_size + len;
lenu = page_size + len;
}
else {
adrm = lo_va_stub + len;
adrc = adrm;
adru = lo_va_stub;
adrx = lo_va_stub + hlen;
lenm = 2*page_size;
lenu = 2*page_size + len;
}
adrm = page_mask & (~page_mask + adrm); // round up to page boundary
adrc = page_mask & (~page_mask + adrc); // round up to page boundary
linker->defineSymbol("ADRX", adrx); // compressed input for eXpansion
linker->defineSymbol("ADRC", adrc); // addr for copy
linker->defineSymbol("LENU", lenu); // len for unmap
linker->defineSymbol("ADRU", adru); // addr for unmap
linker->defineSymbol("LENM", lenm); // len for map
linker->defineSymbol("ADRM", adrm); // addr for map
//linker->dumpSymbols(); // debug
}
|
Safe
|
[
"CWE-476"
] |
upx
|
ef336dbcc6dc8344482f8cf6c909ae96c3286317
|
2.0361691511264364e+38
| 65 |
Protect against bad crafted input.
https://github.com/upx/upx/issues/128
modified: p_lx_elf.cpp
| 0 |
TIFFWriteDirectoryTagCheckedByte(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, uint16 tag, uint8 value)
{
assert(sizeof(uint8)==1);
return(TIFFWriteDirectoryTagData(tif,ndir,dir,tag,TIFF_BYTE,1,1,&value));
}
|
Safe
|
[
"CWE-617"
] |
libtiff
|
de144fd228e4be8aa484c3caf3d814b6fa88c6d9
|
2.805237454749552e+38
| 5 |
TIFFWriteDirectorySec: avoid assertion. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2795. CVE-2018-10963
| 0 |
qf_free_items(qf_list_T *qfl)
{
qfline_T *qfp;
qfline_T *qfpnext;
int stop = FALSE;
while (qfl->qf_count && qfl->qf_start != NULL)
{
qfp = qfl->qf_start;
qfpnext = qfp->qf_next;
if (!stop)
{
vim_free(qfp->qf_module);
vim_free(qfp->qf_text);
vim_free(qfp->qf_pattern);
stop = (qfp == qfpnext);
vim_free(qfp);
if (stop)
// Somehow qf_count may have an incorrect value, set it to 1
// to avoid crashing when it's wrong.
// TODO: Avoid qf_count being incorrect.
qfl->qf_count = 1;
}
qfl->qf_start = qfpnext;
--qfl->qf_count;
}
qfl->qf_index = 0;
qfl->qf_start = NULL;
qfl->qf_last = NULL;
qfl->qf_ptr = NULL;
qfl->qf_nonevalid = TRUE;
qf_clean_dir_stack(&qfl->qf_dir_stack);
qfl->qf_directory = NULL;
qf_clean_dir_stack(&qfl->qf_file_stack);
qfl->qf_currfile = NULL;
qfl->qf_multiline = FALSE;
qfl->qf_multiignore = FALSE;
qfl->qf_multiscan = FALSE;
}
|
Safe
|
[
"CWE-416"
] |
vim
|
4f1b083be43f351bc107541e7b0c9655a5d2c0bb
|
2.376203550365677e+38
| 41 |
patch 9.0.0322: crash when no errors and 'quickfixtextfunc' is set
Problem: Crash when no errors and 'quickfixtextfunc' is set.
Solution: Do not handle errors if there aren't any.
| 0 |
static u32 ieee80211_idle_off(struct ieee80211_local *local,
const char *reason)
{
if (!(local->hw.conf.flags & IEEE80211_CONF_IDLE))
return 0;
#ifdef CONFIG_MAC80211_VERBOSE_DEBUG
wiphy_debug(local->hw.wiphy, "device no longer idle - %s\n", reason);
#endif
local->hw.conf.flags &= ~IEEE80211_CONF_IDLE;
return IEEE80211_CONF_CHANGE_IDLE;
}
|
Safe
|
[
"CWE-703",
"CWE-264"
] |
linux
|
550fd08c2cebad61c548def135f67aba284c6162
|
3.3769559347233993e+37
| 13 |
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption of
course, and need to be fixed up. This patch identifies those drivers, and marks
them as not being able to support the safe transmission of skbs by clearing the
IFF_TX_SKB_SHARING flag in priv_flags.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Karsten Keil <isdn@linux-pingi.de>
CC: "David S. Miller" <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Patrick McHardy <kaber@trash.net>
CC: Krzysztof Halasa <khc@pm.waw.pl>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: Marcel Holtmann <marcel@holtmann.org>
CC: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
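The mac80211 helper above clears a single config bit and reports which setting changed; callers accumulate such masks before pushing one hardware reconfiguration. A generic sketch of that clear-and-report idiom (hypothetical flag values, not the real IEEE80211_CONF_* constants):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical config flags, standing in for IEEE80211_CONF_IDLE and
 * IEEE80211_CONF_CHANGE_IDLE. */
#define CONF_IDLE   0x1u
#define CHANGE_IDLE 0x2u

/* Clear CONF_IDLE in *flags.  Returns the change mask the caller should
 * pass to the hardware-config call, or 0 if the device was not idle
 * (nothing changed, nothing to report). */
static uint32_t idle_off(uint32_t *flags)
{
    if (!(*flags & CONF_IDLE))
        return 0;               /* already not idle: no change */
    *flags &= ~CONF_IDLE;
    return CHANGE_IDLE;
}
```

Returning 0 on the no-op path is what makes the function idempotent: calling it twice reports the change exactly once.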
static MagickBooleanType WritePALMImage(const ImageInfo *image_info,
Image *image)
{
ExceptionInfo
*exception;
MagickBooleanType
status;
MagickOffsetType
currentOffset,
offset,
scene;
MagickSizeType
cc;
PixelPacket
transpix;
QuantizeInfo
*quantize_info;
register IndexPacket
*indexes;
register ssize_t
x;
register PixelPacket
*p;
ssize_t
y;
size_t
count,
bits_per_pixel,
bytes_per_row,
imageListLength,
nextDepthOffset,
one;
unsigned char
bit,
byte,
color,
*last_row,
*one_row,
*ptr,
version;
unsigned int
transparentIndex;
unsigned short
color16,
flags;
/*
Open output image file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickCoreSignature);
assert(image != (Image *) NULL);
assert(image->signature == MagickCoreSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename);
exception=AcquireExceptionInfo();
status=OpenBlob(image_info,image,WriteBinaryBlobMode,exception);
if (status == MagickFalse)
return(status);
quantize_info=AcquireQuantizeInfo(image_info);
flags=0;
currentOffset=0;
transparentIndex=0;
transpix.red=0;
transpix.green=0;
transpix.blue=0;
transpix.opacity=0;
one=1;
version=0;
scene=0;
imageListLength=GetImageListLength(image);
do
{
(void) TransformImageColorspace(image,sRGBColorspace);
count=GetNumberColors(image,NULL,exception);
for (bits_per_pixel=1; (one << bits_per_pixel) < count; bits_per_pixel*=2) ;
if (bits_per_pixel > 16)
bits_per_pixel=16;
else
if (bits_per_pixel < 16)
(void) TransformImageColorspace(image,image->colorspace);
if (bits_per_pixel < 8)
{
(void) TransformImageColorspace(image,GRAYColorspace);
(void) SetImageType(image,PaletteType);
(void) SortColormapByIntensity(image);
}
if ((image->storage_class == PseudoClass) && (image->colors > 256))
(void) SetImageStorageClass(image,DirectClass);
if (image->storage_class == PseudoClass)
flags|=PALM_HAS_COLORMAP_FLAG;
else
flags|=PALM_IS_DIRECT_COLOR;
(void) WriteBlobMSBShort(image,(unsigned short) image->columns); /* width */
(void) WriteBlobMSBShort(image,(unsigned short) image->rows); /* height */
bytes_per_row=((image->columns+(16/bits_per_pixel-1))/(16/
bits_per_pixel))*2;
(void) WriteBlobMSBShort(image,(unsigned short) bytes_per_row);
if ((image_info->compression == RLECompression) ||
(image_info->compression == FaxCompression))
flags|=PALM_IS_COMPRESSED_FLAG;
(void) WriteBlobMSBShort(image, flags);
(void) WriteBlobByte(image,(unsigned char) bits_per_pixel);
if (bits_per_pixel > 1)
version=1;
if ((image_info->compression == RLECompression) ||
(image_info->compression == FaxCompression))
version=2;
(void) WriteBlobByte(image,version);
(void) WriteBlobMSBShort(image,0); /* nextDepthOffset */
(void) WriteBlobByte(image,(unsigned char) transparentIndex);
if (image_info->compression == RLECompression)
(void) WriteBlobByte(image,PALM_COMPRESSION_RLE);
else
if (image_info->compression == FaxCompression)
(void) WriteBlobByte(image,PALM_COMPRESSION_SCANLINE);
else
(void) WriteBlobByte(image,PALM_COMPRESSION_NONE);
(void) WriteBlobMSBShort(image,0); /* reserved */
offset=16;
if (bits_per_pixel == 16)
{
(void) WriteBlobByte(image,5); /* # of bits of red */
(void) WriteBlobByte(image,6); /* # of bits of green */
(void) WriteBlobByte(image,5); /* # of bits of blue */
(void) WriteBlobByte(image,0); /* reserved by Palm */
(void) WriteBlobMSBLong(image,0); /* no transparent color, YET */
offset+=8;
}
if (bits_per_pixel == 8)
{
if (flags & PALM_HAS_COLORMAP_FLAG) /* Write out colormap */
{
quantize_info->dither=IsPaletteImage(image,&image->exception);
quantize_info->number_colors=image->colors;
(void) QuantizeImage(quantize_info,image);
(void) WriteBlobMSBShort(image,(unsigned short) image->colors);
for (count = 0; count < image->colors; count++)
{
(void) WriteBlobByte(image,(unsigned char) count);
(void) WriteBlobByte(image,ScaleQuantumToChar(
image->colormap[count].red));
(void) WriteBlobByte(image,
ScaleQuantumToChar(image->colormap[count].green));
(void) WriteBlobByte(image,
ScaleQuantumToChar(image->colormap[count].blue));
}
offset+=2+count*4;
}
else /* Map colors to Palm standard colormap */
{
Image
*affinity_image;
affinity_image=ConstituteImage(256,1,"RGB",CharPixel,&PalmPalette,
exception);
(void) TransformImageColorspace(affinity_image,
affinity_image->colorspace);
(void) RemapImage(quantize_info,image,affinity_image);
for (y=0; y < (ssize_t) image->rows; y++)
{
p=GetAuthenticPixels(image,0,y,image->columns,1,exception);
indexes=GetAuthenticIndexQueue(image);
for (x=0; x < (ssize_t) image->columns; x++)
SetPixelIndex(indexes+x,FindColor(&image->colormap[
(ssize_t) GetPixelIndex(indexes+x)]));
}
affinity_image=DestroyImage(affinity_image);
}
}
if (flags & PALM_IS_COMPRESSED_FLAG)
(void) WriteBlobMSBShort(image,0); /* fill in size later */
last_row=(unsigned char *) NULL;
if (image_info->compression == FaxCompression)
{
last_row=(unsigned char *) AcquireQuantumMemory(bytes_per_row,
sizeof(*last_row));
if (last_row == (unsigned char *) NULL)
{
quantize_info=DestroyQuantizeInfo(quantize_info);
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
}
}
one_row=(unsigned char *) AcquireQuantumMemory(bytes_per_row,
sizeof(*one_row));
if (one_row == (unsigned char *) NULL)
{
if (last_row != (unsigned char *) NULL)
last_row=(unsigned char *) RelinquishMagickMemory(last_row);
quantize_info=DestroyQuantizeInfo(quantize_info);
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
}
for (y=0; y < (ssize_t) image->rows; y++)
{
ptr=one_row;
(void) memset(ptr,0,bytes_per_row);
p=GetAuthenticPixels(image,0,y,image->columns,1,exception);
if (p == (PixelPacket *) NULL)
break;
indexes=GetAuthenticIndexQueue(image);
if (bits_per_pixel == 16)
{
for (x=0; x < (ssize_t) image->columns; x++)
{
color16=(unsigned short) ((((31*(size_t) GetPixelRed(p))/
(size_t) QuantumRange) << 11) |
(((63*(size_t) GetPixelGreen(p))/(size_t) QuantumRange) << 5) |
((31*(size_t) GetPixelBlue(p))/(size_t) QuantumRange));
if (GetPixelOpacity(p) == (Quantum) TransparentOpacity)
{
transpix.red=GetPixelRed(p);
transpix.green=GetPixelGreen(p);
transpix.blue=GetPixelBlue(p);
transpix.opacity=GetPixelOpacity(p);
flags|=PALM_HAS_TRANSPARENCY_FLAG;
}
*ptr++=(unsigned char) ((color16 >> 8) & 0xff);
*ptr++=(unsigned char) (color16 & 0xff);
p++;
}
}
else
{
byte=0x00;
bit=(unsigned char) (8-bits_per_pixel);
for (x=0; x < (ssize_t) image->columns; x++)
{
if (bits_per_pixel >= 8)
color=(unsigned char) GetPixelIndex(indexes+x);
else
color=(unsigned char) (GetPixelIndex(indexes+x)*
((one << bits_per_pixel)-1)/MagickMax(1*image->colors-1,1));
byte|=color << bit;
if (bit != 0)
bit-=(unsigned char) bits_per_pixel;
else
{
*ptr++=byte;
byte=0x00;
bit=(unsigned char) (8-bits_per_pixel);
}
}
if ((image->columns % (8/bits_per_pixel)) != 0)
*ptr++=byte;
}
if (image_info->compression == RLECompression)
{
x=0;
while (x < (ssize_t) bytes_per_row)
{
byte=one_row[x];
count=1;
while ((one_row[++x] == byte) && (count < 255) &&
(x < (ssize_t) bytes_per_row))
count++;
(void) WriteBlobByte(image,(unsigned char) count);
(void) WriteBlobByte(image,(unsigned char) byte);
}
}
else
if (image_info->compression == FaxCompression)
{
char
tmpbuf[8],
*tptr;
for (x = 0; x < (ssize_t) bytes_per_row; x += 8)
{
tptr = tmpbuf;
for (bit=0, byte=0; bit < (unsigned char) MagickMin(8,(ssize_t) bytes_per_row-x); bit++)
{
if ((y == 0) || (last_row[x + bit] != one_row[x + bit]))
{
byte |= (1 << (7 - bit));
*tptr++ = (char) one_row[x + bit];
}
}
(void) WriteBlobByte(image, byte);
(void) WriteBlob(image,tptr-tmpbuf,(unsigned char *) tmpbuf);
}
(void) memcpy(last_row,one_row,bytes_per_row);
}
else
(void) WriteBlob(image,bytes_per_row,one_row);
}
if (flags & PALM_HAS_TRANSPARENCY_FLAG)
{
offset=SeekBlob(image,currentOffset+6,SEEK_SET);
(void) WriteBlobMSBShort(image,flags);
offset=SeekBlob(image,currentOffset+12,SEEK_SET);
(void) WriteBlobByte(image,(unsigned char) transparentIndex); /* trans index */
}
if (bits_per_pixel == 16)
{
offset=SeekBlob(image,currentOffset+20,SEEK_SET);
(void) WriteBlobByte(image,0); /* reserved by Palm */
(void) WriteBlobByte(image,(unsigned char) ((31*transpix.red)/
QuantumRange));
(void) WriteBlobByte(image,(unsigned char) ((63*transpix.green)/
QuantumRange));
(void) WriteBlobByte(image,(unsigned char) ((31*transpix.blue)/
QuantumRange));
}
if (flags & PALM_IS_COMPRESSED_FLAG) /* fill in size now */
{
offset=SeekBlob(image,currentOffset+offset,SEEK_SET);
(void) WriteBlobMSBShort(image,(unsigned short) (GetBlobSize(image)-
currentOffset-offset));
}
if (one_row != (unsigned char *) NULL)
one_row=(unsigned char *) RelinquishMagickMemory(one_row);
if (last_row != (unsigned char *) NULL)
last_row=(unsigned char *) RelinquishMagickMemory(last_row);
if (GetNextImageInList(image) == (Image *) NULL)
break;
/* padding to 4 byte word */
for (cc=(GetBlobSize(image)) % 4; cc > 0; cc--)
(void) WriteBlobByte(image,0);
/* write nextDepthOffset and return to end of image */
(void) SeekBlob(image,currentOffset+10,SEEK_SET);
nextDepthOffset=(size_t) ((GetBlobSize(image)-currentOffset)/4);
(void) WriteBlobMSBShort(image,(unsigned short) nextDepthOffset);
currentOffset=(MagickOffsetType) GetBlobSize(image);
(void) SeekBlob(image,currentOffset,SEEK_SET);
image=SyncNextImageInList(image);
status=SetImageProgress(image,SaveImagesTag,scene++,imageListLength);
if (status == MagickFalse)
break;
} while (image_info->adjoin != MagickFalse);
quantize_info=DestroyQuantizeInfo(quantize_info);
(void) CloseBlob(image);
(void) DestroyExceptionInfo(exception);
return(MagickTrue);
}
|
Safe
|
[
"CWE-401"
] |
ImageMagick6
|
210474b2fac6a661bfa7ed563213920e93e76395
|
1.1917804849783477e+38
| 347 |
Fix an ultra-rare but potential memory leak
| 0 |
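The PALM writer above packs each 16-bit pixel with the expression at the `color16=` line: red and blue scaled to 5 bits, green to 6, packed MSB-first. The same scaling in isolation (assuming an 8-bit quantum range, i.e. `QuantumRange == 255`):

```c
#include <assert.h>
#include <stdint.h>

#define QUANTUM_RANGE 255u  /* assumed 8-bit quantum depth */

/* Scale 8-bit R/G/B down to 5, 6 and 5 bits and pack them into one
 * 16-bit value, mirroring the color16 computation in WritePALMImage. */
static uint16_t pack_rgb565(unsigned r, unsigned g, unsigned b)
{
    return (uint16_t) ((((31u * r) / QUANTUM_RANGE) << 11) |
                       (((63u * g) / QUANTUM_RANGE) << 5)  |
                        ((31u * b) / QUANTUM_RANGE));
}
```

White maps to all bits set and each pure channel fills only its own field, which is a quick sanity check on the shift amounts.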
VideoTrack::VideoTrack(unsigned int* seed)
: Track(seed),
display_height_(0),
display_width_(0),
pixel_height_(0),
pixel_width_(0),
crop_left_(0),
crop_right_(0),
crop_top_(0),
crop_bottom_(0),
frame_rate_(0.0),
height_(0),
stereo_mode_(0),
alpha_mode_(0),
width_(0),
colour_(NULL),
projection_(NULL) {}
|
Vulnerable
|
[
"CWE-20"
] |
libvpx
|
f00890eecdf8365ea125ac16769a83aa6b68792d
|
1.2091791753749584e+38
| 17 |
update libwebm to libwebm-1.0.0.27-352-g6ab9fcf
https://chromium.googlesource.com/webm/libwebm/+log/af81f26..6ab9fcf
Change-Id: I9d56e1fbaba9b96404b4fbabefddc1a85b79c25d
| 1 |
static signed short php_ifd_get16s(void *Short, int motorola_intel)
{
return (signed short)php_ifd_get16u(Short, motorola_intel);
}
|
Safe
|
[] |
php-src
|
87829c09a1d9e39bee994460d7ccf19dd20eda14
|
2.3038529994323827e+38
| 4 |
Fix #70052: getimagesize() fails for very large and very small WBMP
Very large WBMP (width or height greater than 2**31-1) cause an overflow and
circumvent the size limitation of 2048x2048 px. Very small WBMP (less than 12
bytes) cause a read error and are not recognized. This patch fixes both bugs.
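`php_ifd_get16s` above just reinterprets the unsigned read as signed; the interesting work is in `php_ifd_get16u`, which honours the EXIF byte-order flag. A sketch of that two-byte read (this mirrors the idea, not PHP's exact implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Read a 16-bit unsigned value from two bytes: big-endian when
 * motorola_intel is non-zero (Motorola order), little-endian otherwise
 * (Intel order). */
static unsigned get16u(const void *p, int motorola_intel)
{
    const uint8_t *b = p;
    if (motorola_intel)
        return ((unsigned) b[0] << 8) | b[1];   /* Motorola: MSB first */
    return ((unsigned) b[1] << 8) | b[0];       /* Intel: LSB first */
}

/* Signed variant: same bytes, reinterpreted as a signed 16-bit value. */
static int get16s(const void *p, int motorola_intel)
{
    return (int16_t) get16u(p, motorola_intel);
}
```

The cast through `int16_t` is the whole body of the signed variant, which is why the record above is a one-liner.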
| 0 |
static char* oidc_util_get_cookie_path(request_rec *r) {
char *rv = NULL, *requestPath = oidc_util_get_path(r);
char *cookie_path = oidc_cfg_dir_cookie_path(r);
if (cookie_path != NULL) {
if (strncmp(cookie_path, requestPath, strlen(cookie_path)) == 0)
rv = cookie_path;
else {
oidc_warn(r,
"" OIDCCookiePath " (%s) is not a substring of request path, using request path (%s) for cookie",
cookie_path, requestPath);
rv = requestPath;
}
} else {
rv = requestPath;
}
return (rv);
}
|
Safe
|
[
"CWE-79"
] |
mod_auth_openidc
|
55ea0a085290cd2c8cdfdd960a230cbc38ba8b56
|
4.793310313510166e+37
| 17 |
Add a function to escape Javascript characters
| 0 |
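The cookie-path helper above falls back to the request path unless the configured path is a prefix of it. That `strncmp`-based prefix test, extracted into a pair of small helpers:

```c
#include <assert.h>
#include <string.h>

/* Return 1 if `prefix` is a leading substring of `path`, as the
 * oidc_util_get_cookie_path check does with strncmp. */
static int is_path_prefix(const char *prefix, const char *path)
{
    return strncmp(prefix, path, strlen(prefix)) == 0;
}

/* Pick the cookie path: the configured one when it prefixes the request
 * path, otherwise fall back to the request path itself. */
static const char *choose_cookie_path(const char *cfg, const char *req)
{
    if (cfg != NULL && is_path_prefix(cfg, req))
        return cfg;
    return req;
}
```

The fallback matters because a cookie scoped to a path that doesn't cover the request would never be sent back by the browser.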
static int sb1054_get_register(struct sb_uart_port *port, int page, int reg)
{
int ret = 0;
unsigned int lcr = 0;
unsigned int mcr = 0;
unsigned int tmp = 0;
if( page <= 0)
{
printk("page 0 cannot use this function\n");
return -1;
}
switch(page)
{
case 1:
lcr = SB105X_GET_LCR(port);
tmp = lcr | SB105X_LCR_DLAB;
SB105X_PUT_LCR(port, tmp);
tmp = SB105X_GET_LCR(port);
ret = SB105X_GET_REG(port,reg);
SB105X_PUT_LCR(port,lcr);
break;
case 2:
mcr = SB105X_GET_MCR(port);
tmp = mcr | SB105X_MCR_P2S;
SB105X_PUT_MCR(port,tmp);
ret = SB105X_GET_REG(port,reg);
SB105X_PUT_MCR(port,mcr);
break;
case 3:
lcr = SB105X_GET_LCR(port);
tmp = lcr | SB105X_LCR_BF;
SB105X_PUT_LCR(port,tmp);
SB105X_PUT_REG(port,SB105X_PSR,SB105X_PSR_P3KEY);
ret = SB105X_GET_REG(port,reg);
SB105X_PUT_LCR(port,lcr);
break;
case 4:
lcr = SB105X_GET_LCR(port);
tmp = lcr | SB105X_LCR_BF;
SB105X_PUT_LCR(port,tmp);
SB105X_PUT_REG(port,SB105X_PSR,SB105X_PSR_P4KEY);
ret = SB105X_GET_REG(port,reg);
SB105X_PUT_LCR(port,lcr);
break;
default:
printk("error: invalid page number\n");
return -1;
}
return ret;
}
|
Safe
|
[
"CWE-200"
] |
linux
|
a8b33654b1e3b0c74d4a1fed041c9aae50b3c427
|
3.296887967965628e+37
| 61 |
Staging: sb105x: info leak in mp_get_count()
The icount.reserved[] array isn't initialized so it leaks stack
information to userspace.
Reported-by: Nico Golde <nico@ngolde.de>
Reported-by: Fabian Yamaguchi <fabs@goesec.de>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
bool Item_func_between::fix_length_and_dec()
{
max_length= 1;
/*
As some compare functions are generated after sql_yacc,
we have to check for out of memory conditions here
*/
if (!args[0] || !args[1] || !args[2])
return TRUE;
if (m_comparator.aggregate_for_comparison(Item_func_between::func_name(),
args, 3, false))
{
DBUG_ASSERT(current_thd->is_error());
return TRUE;
}
return m_comparator.type_handler()->
Item_func_between_fix_length_and_dec(this);
}
|
Safe
|
[
"CWE-617"
] |
server
|
807945f2eb5fa22e6f233cc17b85a2e141efe2c8
|
2.448952741759022e+38
| 20 |
MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order...
When doing condition pushdown from HAVING into WHERE,
Item_equal::create_pushable_equalities() calls
item->set_extraction_flag(IMMUTABLE_FL) for constant items.
Then, Item::cleanup_excluding_immutables_processor() checks for this flag
to see if it should call item->cleanup() or leave the item as-is.
The failure happens when a constant item has a non-constant one inside it,
like:
(tbl.col=0 AND impossible_cond)
item->walk(cleanup_excluding_immutables_processor) works in a bottom-up
way so it
1. will call Item_func_eq(tbl.col=0)->cleanup()
2. will not call Item_cond_and->cleanup (as the AND is constant)
This creates an item tree where a fixed Item has an un-fixed Item inside
it which eventually causes an assertion failure.
Fixed by introducing this rule: instead of just calling
item->set_extraction_flag(IMMUTABLE_FL);
we call Item::walk() to set the flag for all sub-items of the item.
| 0 |
Open_table_context::Open_table_context(THD *thd, uint flags)
:m_thd(thd),
m_failed_table(NULL),
m_start_of_statement_svp(thd->mdl_context.mdl_savepoint()),
m_timeout(flags & MYSQL_LOCK_IGNORE_TIMEOUT ?
LONG_TIMEOUT : thd->variables.lock_wait_timeout),
m_flags(flags),
m_action(OT_NO_ACTION),
m_has_locks(thd->mdl_context.has_locks()),
m_has_protection_against_grl(0)
{}
|
Safe
|
[
"CWE-416"
] |
server
|
0beed9b5e933f0ff79b3bb346524f7a451d14e38
|
2.317161049476545e+38
| 11 |
MDEV-28097 use-after-free when WHERE has subquery with an outer reference in HAVING
when resolving WHERE and ON clauses, do not look in
SELECT list/aliases.
| 0 |
const BIGNUM *ECDSA_SIG_get0_r(const ECDSA_SIG *sig)
{
return sig->r;
}
|
Safe
|
[
"CWE-125"
] |
openssl
|
94d23fcff9b2a7a8368dfe52214d5c2569882c11
|
2.980625751497692e+38
| 4 |
Fix EC_GROUP_new_from_ecparameters to check the base length
Check that there's at least one byte in params->base before trying to
read it.
CVE-2021-3712
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
Reviewed-by: Paul Dale <pauli@openssl.org>
| 0 |
void test_nghttp2_session_send_headers_push_reply(void) {
nghttp2_session *session;
nghttp2_session_callbacks callbacks;
nghttp2_outbound_item *item;
nghttp2_frame *frame;
nghttp2_stream *stream;
nghttp2_mem *mem;
mem = nghttp2_mem_default();
memset(&callbacks, 0, sizeof(nghttp2_session_callbacks));
callbacks.send_callback = null_send_callback;
CU_ASSERT(0 == nghttp2_session_server_new(&session, &callbacks, NULL));
open_sent_stream2(session, 2, NGHTTP2_STREAM_RESERVED);
item = mem->malloc(sizeof(nghttp2_outbound_item), NULL);
nghttp2_outbound_item_init(item);
frame = &item->frame;
nghttp2_frame_headers_init(&frame->headers, NGHTTP2_FLAG_END_HEADERS, 2,
NGHTTP2_HCAT_HEADERS, NULL, NULL, 0);
nghttp2_session_add_item(session, item);
CU_ASSERT(0 == session->num_outgoing_streams);
CU_ASSERT(0 == nghttp2_session_send(session));
CU_ASSERT(1 == session->num_outgoing_streams);
stream = nghttp2_session_get_stream(session, 2);
CU_ASSERT(NGHTTP2_STREAM_OPENED == stream->state);
CU_ASSERT(0 == (stream->flags & NGHTTP2_STREAM_FLAG_PUSH));
nghttp2_session_del(session);
}
|
Safe
|
[] |
nghttp2
|
0a6ce87c22c69438ecbffe52a2859c3a32f1620f
|
8.17292690934239e+37
| 33 |
Add nghttp2_option_set_max_outbound_ack
| 0 |
static int get_ncontrollers(void)
{
struct dirent **namelist;
struct udev_device *platform;
int n;
platform = udev_device_get_parent(vhci_driver->hc_device);
if (platform == NULL)
return -1;
n = scandir(udev_device_get_syspath(platform), &namelist, vhci_hcd_filter, NULL);
if (n < 0)
err("scandir failed");
else {
for (int i = 0; i < n; i++)
free(namelist[i]);
free(namelist);
}
return n;
}
|
Safe
|
[
"CWE-200"
] |
linux
|
2f2d0088eb93db5c649d2a5e34a3800a8a935fc5
|
2.0767678489408855e+38
| 21 |
usbip: prevent vhci_hcd driver from leaking a socket pointer address
When a client has a USB device attached over IP, the vhci_hcd driver is
locally leaking a socket pointer address via the
/sys/devices/platform/vhci_hcd/status file (world-readable) and in debug
output when "usbip --debug port" is run.
Fix it to not leak. The socket pointer address is not used at the moment
and it was made visible as a convenient way to find IP address from socket
pointer address by looking up /proc/net/{tcp,tcp6}.
As this opens a security hole, the fix replaces socket pointer address with
sockfd.
Reported-by: Secunia Research <vuln@secunia.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
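The usbip commit above replaces a kernel pointer in the status output with the socket file descriptor. A userspace illustration of the difference in what gets printed (hypothetical formatting helper, not the driver's actual code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a status line for an attached port.  The fixed code prints the
 * integer sockfd; the old code printed the socket pointer with %p,
 * leaking a kernel address to every reader of the world-readable
 * status file. */
static void format_status(char *buf, size_t n, int port, int sockfd)
{
    snprintf(buf, n, "port %d sockfd %d", port, sockfd);
}
```

A file descriptor still lets tooling correlate the connection via /proc/net/{tcp,tcp6}, without exposing an address useful for defeating KASLR.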
writeCroppedImage(TIFF *in, TIFF *out, struct image_data *image,
struct dump_opts *dump, uint32 width, uint32 length,
unsigned char *crop_buff, int pagenum, int total_pages)
{
uint16 bps, spp;
uint16 input_compression, input_photometric;
uint16 input_planar;
struct cpTag* p;
input_compression = image->compression;
input_photometric = image->photometric;
spp = image->spp;
bps = image->bps;
TIFFSetField(out, TIFFTAG_IMAGEWIDTH, width);
TIFFSetField(out, TIFFTAG_IMAGELENGTH, length);
TIFFSetField(out, TIFFTAG_BITSPERSAMPLE, bps);
TIFFSetField(out, TIFFTAG_SAMPLESPERPIXEL, spp);
#ifdef DEBUG2
TIFFError("writeCroppedImage", "Input compression: %s",
(input_compression == COMPRESSION_OJPEG) ? "Old Jpeg" :
((input_compression == COMPRESSION_JPEG) ? "New Jpeg" : "Non Jpeg"));
#endif
if (compression != (uint16)-1)
TIFFSetField(out, TIFFTAG_COMPRESSION, compression);
else
{
if (input_compression == COMPRESSION_OJPEG)
{
compression = COMPRESSION_JPEG;
jpegcolormode = JPEGCOLORMODE_RAW;
TIFFSetField(out, TIFFTAG_COMPRESSION, COMPRESSION_JPEG);
}
else
CopyField(TIFFTAG_COMPRESSION, compression);
}
if (compression == COMPRESSION_JPEG)
{
if ((input_photometric == PHOTOMETRIC_PALETTE) || /* color map indexed */
(input_photometric == PHOTOMETRIC_MASK)) /* $holdout mask */
{
TIFFError ("writeCroppedImage",
"JPEG compression cannot be used with %s image data",
(input_photometric == PHOTOMETRIC_PALETTE) ?
"palette" : "mask");
return (-1);
}
if ((input_photometric == PHOTOMETRIC_RGB) &&
(jpegcolormode == JPEGCOLORMODE_RGB))
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_YCBCR);
else
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, input_photometric);
}
else
{
if (compression == COMPRESSION_SGILOG || compression == COMPRESSION_SGILOG24)
{
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, spp == 1 ?
PHOTOMETRIC_LOGL : PHOTOMETRIC_LOGLUV);
}
else
{
if (input_compression == COMPRESSION_SGILOG ||
input_compression == COMPRESSION_SGILOG24)
{
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, spp == 1 ?
PHOTOMETRIC_LOGL : PHOTOMETRIC_LOGLUV);
}
else
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, image->photometric);
}
}
if (((input_photometric == PHOTOMETRIC_LOGL) ||
(input_photometric == PHOTOMETRIC_LOGLUV)) &&
((compression != COMPRESSION_SGILOG) &&
(compression != COMPRESSION_SGILOG24)))
{
TIFFError("writeCroppedImage",
"LogL and LogLuv source data require SGI_LOG or SGI_LOG24 compression");
return (-1);
}
if (fillorder != 0)
TIFFSetField(out, TIFFTAG_FILLORDER, fillorder);
else
CopyTag(TIFFTAG_FILLORDER, 1, TIFF_SHORT);
/* The loadimage function reads input orientation and sets
* image->orientation. The correct_image_orientation function
* applies the required rotation and mirror operations to
* present the data in TOPLEFT orientation and updates
* image->orientation if any transforms are performed,
* as per EXIF standard.
*/
TIFFSetField(out, TIFFTAG_ORIENTATION, image->orientation);
/*
* Choose tiles/strip for the output image according to
* the command line arguments (-tiles, -strips) and the
* structure of the input image.
*/
if (outtiled == -1)
outtiled = TIFFIsTiled(in);
if (outtiled) {
/*
* Setup output file's tile width&height. If either
* is not specified, use either the value from the
* input image or, if nothing is defined, use the
* library default.
*/
if (tilewidth == (uint32) 0)
TIFFGetField(in, TIFFTAG_TILEWIDTH, &tilewidth);
if (tilelength == (uint32) 0)
TIFFGetField(in, TIFFTAG_TILELENGTH, &tilelength);
if (tilewidth == 0 || tilelength == 0)
TIFFDefaultTileSize(out, &tilewidth, &tilelength);
TIFFSetField(out, TIFFTAG_TILEWIDTH, tilewidth);
TIFFSetField(out, TIFFTAG_TILELENGTH, tilelength);
} else {
/*
* RowsPerStrip is left unspecified: use either the
* value from the input image or, if nothing is defined,
* use the library default.
*/
if (rowsperstrip == (uint32) 0)
{
if (!TIFFGetField(in, TIFFTAG_ROWSPERSTRIP, &rowsperstrip))
rowsperstrip = TIFFDefaultStripSize(out, rowsperstrip);
if (compression != COMPRESSION_JPEG)
{
if (rowsperstrip > length)
rowsperstrip = length;
}
}
else
if (rowsperstrip == (uint32) -1)
rowsperstrip = length;
TIFFSetField(out, TIFFTAG_ROWSPERSTRIP, rowsperstrip);
}
TIFFGetFieldDefaulted(in, TIFFTAG_PLANARCONFIG, &input_planar);
if (config != (uint16) -1)
TIFFSetField(out, TIFFTAG_PLANARCONFIG, config);
else
CopyField(TIFFTAG_PLANARCONFIG, config);
if (spp <= 4)
CopyTag(TIFFTAG_TRANSFERFUNCTION, 4, TIFF_SHORT);
CopyTag(TIFFTAG_COLORMAP, 4, TIFF_SHORT);
/* SMinSampleValue & SMaxSampleValue */
switch (compression) {
case COMPRESSION_JPEG:
if (((bps % 8) == 0) || ((bps % 12) == 0))
{
TIFFSetField(out, TIFFTAG_JPEGQUALITY, quality);
TIFFSetField(out, TIFFTAG_JPEGCOLORMODE, JPEGCOLORMODE_RGB);
}
else
{
TIFFError("writeCroppedImage",
"JPEG compression requires 8 or 12 bits per sample");
return (-1);
}
break;
case COMPRESSION_LZW:
case COMPRESSION_ADOBE_DEFLATE:
case COMPRESSION_DEFLATE:
if (predictor != (uint16)-1)
TIFFSetField(out, TIFFTAG_PREDICTOR, predictor);
else
CopyField(TIFFTAG_PREDICTOR, predictor);
break;
case COMPRESSION_CCITTFAX3:
case COMPRESSION_CCITTFAX4:
if (bps != 1)
{
TIFFError("writeCroppedImage",
"Group 3/4 compression is not usable with bps > 1");
return (-1);
}
if (compression == COMPRESSION_CCITTFAX3) {
if (g3opts != (uint32) -1)
TIFFSetField(out, TIFFTAG_GROUP3OPTIONS, g3opts);
else
CopyField(TIFFTAG_GROUP3OPTIONS, g3opts);
} else {
CopyTag(TIFFTAG_GROUP4OPTIONS, 1, TIFF_LONG);
}
CopyTag(TIFFTAG_BADFAXLINES, 1, TIFF_LONG);
CopyTag(TIFFTAG_CLEANFAXDATA, 1, TIFF_LONG);
CopyTag(TIFFTAG_CONSECUTIVEBADFAXLINES, 1, TIFF_LONG);
CopyTag(TIFFTAG_FAXRECVPARAMS, 1, TIFF_LONG);
CopyTag(TIFFTAG_FAXRECVTIME, 1, TIFF_LONG);
CopyTag(TIFFTAG_FAXSUBADDRESS, 1, TIFF_ASCII);
break;
case COMPRESSION_NONE:
break;
default: break;
}
{ uint32 len32;
void** data;
if (TIFFGetField(in, TIFFTAG_ICCPROFILE, &len32, &data))
TIFFSetField(out, TIFFTAG_ICCPROFILE, len32, data);
}
{ uint16 ninks;
const char* inknames;
if (TIFFGetField(in, TIFFTAG_NUMBEROFINKS, &ninks)) {
TIFFSetField(out, TIFFTAG_NUMBEROFINKS, ninks);
if (TIFFGetField(in, TIFFTAG_INKNAMES, &inknames)) {
int inknameslen = strlen(inknames) + 1;
const char* cp = inknames;
while (ninks > 1) {
cp = strchr(cp, '\0');
if (cp) {
cp++;
inknameslen += (strlen(cp) + 1);
}
ninks--;
}
TIFFSetField(out, TIFFTAG_INKNAMES, inknameslen, inknames);
}
}
}
{
unsigned short pg0, pg1;
if (TIFFGetField(in, TIFFTAG_PAGENUMBER, &pg0, &pg1)) {
TIFFSetField(out, TIFFTAG_PAGENUMBER, pagenum, total_pages);
}
}
for (p = tags; p < &tags[NTAGS]; p++)
CopyTag(p->tag, p->count, p->type);
/* Compute the tile or strip dimensions and write to disk */
if (outtiled)
{
if (config == PLANARCONFIG_CONTIG)
{
if (writeBufferToContigTiles (out, crop_buff, length, width, spp, dump))
TIFFError("","Unable to write contiguous tile data for page %d", pagenum);
}
else
{
if (writeBufferToSeparateTiles (out, crop_buff, length, width, spp, dump))
TIFFError("","Unable to write separate tile data for page %d", pagenum);
}
}
else
{
if (config == PLANARCONFIG_CONTIG)
{
if (writeBufferToContigStrips (out, crop_buff, length))
TIFFError("","Unable to write contiguous strip data for page %d", pagenum);
}
else
{
if (writeBufferToSeparateStrips(out, crop_buff, length, width, spp, dump))
TIFFError("","Unable to write separate strip data for page %d", pagenum);
}
}
if (!TIFFWriteDirectory(out))
{
TIFFError("","Failed to write IFD for page number %d", pagenum);
TIFFClose(out);
return (-1);
}
return (0);
} /* end writeCroppedImage */
|
Safe
|
[
"CWE-125"
] |
libtiff
|
21d39de1002a5e69caa0574b2cc05d795d6fbfad
|
2.6631729837241078e+38
| 275 |
* tools/tiffcrop.c: fix multiple uint32 overflows in
writeBufferToSeparateStrips(), writeBufferToContigTiles() and
writeBufferToSeparateTiles() that could cause heap buffer overflows.
Reported by Henri Salo from Nixu Corporation.
Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2592
| 0 |
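The tiffcrop commit above fixes uint32 overflows in buffer-size arithmetic that led to undersized allocations and heap overflows. A common guard for that class of bug is to check the multiplication before allocating (a generic sketch, not the actual libtiff patch):

```c
#include <assert.h>
#include <stdint.h>

/* Return 1 if a * b would overflow uint32_t, computing the product in
 * 64 bits first; on success store the product in *out.  Size
 * computations like rows * bytes_per_row should be rejected here
 * before being handed to an allocator. */
static int mul_overflows_u32(uint32_t a, uint32_t b, uint32_t *out)
{
    uint64_t p = (uint64_t) a * b;
    if (p > UINT32_MAX)
        return 1;
    *out = (uint32_t) p;
    return 0;
}
```

Widening to 64 bits is the simplest portable check; the alternative `a != 0 && b > UINT32_MAX / a` avoids the wider type where it isn't available.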
TEST_P(DnsImplTest, RemoteAsyncLookup) {
server_->addHosts("some.good.domain", {"201.134.56.7"}, RecordType::A);
EXPECT_NE(nullptr,
resolveWithExpectations("some.bad.domain", DnsLookupFamily::Auto,
DnsResolver::ResolutionStatus::Failure, {}, {}, absl::nullopt));
dispatcher_->run(Event::Dispatcher::RunType::Block);
EXPECT_NE(nullptr, resolveWithExpectations("some.good.domain", DnsLookupFamily::Auto,
DnsResolver::ResolutionStatus::Success,
{"201.134.56.7"}, {}, absl::nullopt));
dispatcher_->run(Event::Dispatcher::RunType::Block);
}
|
Safe
|
[
"CWE-400"
] |
envoy
|
542f84c66e9f6479bc31c6f53157c60472b25240
|
7.414791729497182e+37
| 13 |
overload: Runtime configurable global connection limits (#147)
Signed-off-by: Tony Allen <tony@allen.gg>
| 0 |
static int ZEND_FASTCALL ZEND_SR_SPEC_CONST_CONST_HANDLER(ZEND_OPCODE_HANDLER_ARGS)
{
zend_op *opline = EX(opline);
shift_right_function(&EX_T(opline->result.u.var).tmp_var,
&opline->op1.u.constant,
&opline->op2.u.constant TSRMLS_CC);
ZEND_VM_NEXT_OPCODE();
}
|
Safe
|
[] |
php-src
|
ce96fd6b0761d98353761bf78d5bfb55291179fd
|
2.592526470680305e+37
| 12 |
- fix #39863, do not accept paths with NULL in them. See http://news.php.net/php.internals/50191, trunk will have the patch later (adding a macro and/or changing (some) APIs. Patch by Rasmus
| 0 |
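The PHP fix above rejects paths containing NUL bytes, which would otherwise truncate the string at the C layer while the engine still sees the full length. A sketch of that check (hypothetical helper, not the actual PHP macro):

```c
#include <assert.h>
#include <string.h>

/* Return 1 if the buffer of `len` bytes contains an embedded NUL,
 * i.e. strlen() would see a shorter path than the caller supplied.
 * "/etc/passwd\0.png" is the classic bypass: the extension check sees
 * ".png" while the C-level open() sees "/etc/passwd". */
static int path_has_embedded_nul(const char *path, size_t len)
{
    return memchr(path, '\0', len) != NULL;
}
```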
size_t rand_pool_entropy_available(RAND_POOL *pool)
{
if (pool->entropy < pool->entropy_requested)
return 0;
if (pool->len < pool->min_len)
return 0;
return pool->entropy;
}
|
Safe
|
[
"CWE-330"
] |
openssl
|
1b0fe00e2704b5e20334a16d3c9099d1ba2ef1be
|
2.7704033747672592e+38
| 10 |
drbg: ensure fork-safety without using a pthread_atfork handler
When the new OpenSSL CSPRNG was introduced in version 1.1.1,
it was announced in the release notes that it would be fork-safe,
which the old CSPRNG hadn't been.
The fork-safety was implemented using a fork count, which was
incremented by a pthread_atfork handler. Initially, this handler
was enabled by default. Unfortunately, the default behaviour
had to be changed for other reasons in commit b5319bdbd095, so
the new OpenSSL CSPRNG failed to keep its promise.
This commit restores the fork-safety using a different approach.
It replaces the fork count by a fork id, which coincides with
the process id on UNIX-like operating systems and is zero on other
operating systems. It is used to detect when an automatic reseed
after a fork is necessary.
To prevent a future regression, it also adds a test to verify that
the child reseeds after fork.
CVE-2019-1549
Reviewed-by: Paul Dale <paul.dale@oracle.com>
Reviewed-by: Matt Caswell <matt@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/9802)
| 0 |
bool HeaderTable::isValid(uint32_t index) const {
return 0 < index && index <= size_;
}
|
Safe
|
[
"CWE-416"
] |
proxygen
|
f43b134cc5c19d8532e7fb670a1c02e85f7a8d4f
|
2.946715687995738e+38
| 3 |
Fixing HPACK header table resize issue
Summary: On resizing the header table down and then up again, a resize can be called against the underlying vector that actually sizes it down. This causes a lot of things to break as the code that does the resizing assumes the underlying vector is only ever resized up.
Reviewed By: afrind
Differential Revision: D4613681
fbshipit-source-id: 35b61cab53d5bc097424d6c779f90b7fdea42002
| 0 |
static enum print_line_t print_bin_fmt(struct trace_iterator *iter)
{
struct trace_array *tr = iter->tr;
struct trace_seq *s = &iter->seq;
struct trace_entry *entry;
struct trace_event *event;
entry = iter->ent;
if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO) {
SEQ_PUT_FIELD(s, entry->pid);
SEQ_PUT_FIELD(s, iter->cpu);
SEQ_PUT_FIELD(s, iter->ts);
if (trace_seq_has_overflowed(s))
return TRACE_TYPE_PARTIAL_LINE;
}
event = ftrace_find_event(entry->type);
return event ? event->funcs->binary(iter, 0, event) :
TRACE_TYPE_HANDLED;
}
|
Safe
|
[
"CWE-415"
] |
linux
|
4397f04575c44e1440ec2e49b6302785c95fd2f8
|
1.3798438738927244e+38
| 21 |
tracing: Fix possible double free on failure of allocating trace buffer
Jing Xia and Chunyan Zhang reported that on failing to allocate part of the
tracing buffer, memory is freed, but the pointers that point to them are not
initialized back to NULL, and later paths may try to free the freed memory
again. Jing and Chunyan fixed one of the locations that does this, but
missed a spot.
Link: http://lkml.kernel.org/r/20171226071253.8968-1-chunyan.zhang@spreadtrum.com
Cc: stable@vger.kernel.org
Fixes: 737223fbca3b1 ("tracing: Consolidate buffer allocation code")
Reported-by: Jing Xia <jing.xia@spreadtrum.com>
Reported-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| 0 |
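The tracing double-free fix above resets pointers to NULL after freeing so later error paths can't free them again. The idiom as a helper (a generic sketch, not the kernel patch itself):

```c
#include <assert.h>
#include <stdlib.h>

/* Free *pp and reset it to NULL so a second call on the same slot is a
 * harmless no-op (free(NULL) is defined to do nothing) instead of a
 * double free (CWE-415). */
static void free_and_null(void **pp)
{
    free(*pp);
    *pp = NULL;
}
```

Centralising the reset in one helper is what the missed spot in the commit message is about: any cleanup path that frees without nulling reintroduces the bug.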
void PSOutputDev::dumpColorSpaceL2(GfxColorSpace *colorSpace,
GBool genXform, GBool updateColors,
GBool map01) {
GfxCalGrayColorSpace *calGrayCS;
GfxCalRGBColorSpace *calRGBCS;
GfxLabColorSpace *labCS;
GfxIndexedColorSpace *indexedCS;
GfxSeparationColorSpace *separationCS;
GfxDeviceNColorSpace *deviceNCS;
GfxColorSpace *baseCS;
Guchar *lookup, *p;
double x[gfxColorMaxComps], y[gfxColorMaxComps];
double low[gfxColorMaxComps], range[gfxColorMaxComps];
GfxColor color;
GfxCMYK cmyk;
Function *func;
int n, numComps, numAltComps;
int byte;
int i, j, k;
switch (colorSpace->getMode()) {
case csDeviceGray:
writePS("/DeviceGray");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
processColors |= psProcessBlack;
}
break;
case csCalGray:
calGrayCS = (GfxCalGrayColorSpace *)colorSpace;
writePS("[/CIEBasedA <<\n");
writePSFmt(" /DecodeA {{{0:.4g} exp}} bind\n", calGrayCS->getGamma());
writePSFmt(" /MatrixA [{0:.4g} {1:.4g} {2:.4g}]\n",
calGrayCS->getWhiteX(), calGrayCS->getWhiteY(),
calGrayCS->getWhiteZ());
writePSFmt(" /WhitePoint [{0:.4g} {1:.4g} {2:.4g}]\n",
calGrayCS->getWhiteX(), calGrayCS->getWhiteY(),
calGrayCS->getWhiteZ());
writePSFmt(" /BlackPoint [{0:.4g} {1:.4g} {2:.4g}]\n",
calGrayCS->getBlackX(), calGrayCS->getBlackY(),
calGrayCS->getBlackZ());
writePS(">>]");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
processColors |= psProcessBlack;
}
break;
case csDeviceRGB:
writePS("/DeviceRGB");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
processColors |= psProcessCMYK;
}
break;
case csCalRGB:
calRGBCS = (GfxCalRGBColorSpace *)colorSpace;
writePS("[/CIEBasedABC <<\n");
writePSFmt(" /DecodeABC [{{{0:.4g} exp}} bind {{{1:.4g} exp}} bind {{{2:.4g} exp}} bind]\n",
calRGBCS->getGammaR(), calRGBCS->getGammaG(),
calRGBCS->getGammaB());
writePSFmt(" /MatrixABC [{0:.4g} {1:.4g} {2:.4g} {3:.4g} {4:.4g} {5:.4g} {6:.4g} {7:.4g} {8:.4g}]\n",
calRGBCS->getMatrix()[0], calRGBCS->getMatrix()[1],
calRGBCS->getMatrix()[2], calRGBCS->getMatrix()[3],
calRGBCS->getMatrix()[4], calRGBCS->getMatrix()[5],
calRGBCS->getMatrix()[6], calRGBCS->getMatrix()[7],
calRGBCS->getMatrix()[8]);
writePSFmt(" /WhitePoint [{0:.4g} {1:.4g} {2:.4g}]\n",
calRGBCS->getWhiteX(), calRGBCS->getWhiteY(),
calRGBCS->getWhiteZ());
writePSFmt(" /BlackPoint [{0:.4g} {1:.4g} {2:.4g}]\n",
calRGBCS->getBlackX(), calRGBCS->getBlackY(),
calRGBCS->getBlackZ());
writePS(">>]");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
processColors |= psProcessCMYK;
}
break;
case csDeviceCMYK:
writePS("/DeviceCMYK");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
processColors |= psProcessCMYK;
}
break;
case csLab:
labCS = (GfxLabColorSpace *)colorSpace;
writePS("[/CIEBasedABC <<\n");
if (map01) {
writePS(" /RangeABC [0 1 0 1 0 1]\n");
writePSFmt(" /DecodeABC [{{100 mul 16 add 116 div}} bind {{{0:.4g} mul {1:.4g} add}} bind {{{2:.4g} mul {3:.4g} add}} bind]\n",
(labCS->getAMax() - labCS->getAMin()) / 500.0,
labCS->getAMin() / 500.0,
(labCS->getBMax() - labCS->getBMin()) / 200.0,
labCS->getBMin() / 200.0);
} else {
writePSFmt(" /RangeABC [0 100 {0:.4g} {1:.4g} {2:.4g} {3:.4g}]\n",
labCS->getAMin(), labCS->getAMax(),
labCS->getBMin(), labCS->getBMax());
writePS(" /DecodeABC [{16 add 116 div} bind {500 div} bind {200 div} bind]\n");
}
writePS(" /MatrixABC [1 1 1 1 0 0 0 0 -1]\n");
writePS(" /DecodeLMN\n");
writePS(" [{dup 6 29 div ge {dup dup mul mul}\n");
writePSFmt(" {{4 29 div sub 108 841 div mul }} ifelse {0:.4g} mul}} bind\n",
labCS->getWhiteX());
writePS(" {dup 6 29 div ge {dup dup mul mul}\n");
writePSFmt(" {{4 29 div sub 108 841 div mul }} ifelse {0:.4g} mul}} bind\n",
labCS->getWhiteY());
writePS(" {dup 6 29 div ge {dup dup mul mul}\n");
writePSFmt(" {{4 29 div sub 108 841 div mul }} ifelse {0:.4g} mul}} bind]\n",
labCS->getWhiteZ());
writePSFmt(" /WhitePoint [{0:.4g} {1:.4g} {2:.4g}]\n",
labCS->getWhiteX(), labCS->getWhiteY(), labCS->getWhiteZ());
writePSFmt(" /BlackPoint [{0:.4g} {1:.4g} {2:.4g}]\n",
labCS->getBlackX(), labCS->getBlackY(), labCS->getBlackZ());
writePS(">>]");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
processColors |= psProcessCMYK;
}
break;
case csICCBased:
// there is no transform function to the alternate color space, so
// we can use it directly
dumpColorSpaceL2(((GfxICCBasedColorSpace *)colorSpace)->getAlt(),
genXform, updateColors, gFalse);
break;
case csIndexed:
indexedCS = (GfxIndexedColorSpace *)colorSpace;
baseCS = indexedCS->getBase();
writePS("[/Indexed ");
dumpColorSpaceL2(baseCS, gFalse, gFalse, gTrue);
n = indexedCS->getIndexHigh();
numComps = baseCS->getNComps();
lookup = indexedCS->getLookup();
writePSFmt(" {0:d} <\n", n);
if (baseCS->getMode() == csDeviceN) {
func = ((GfxDeviceNColorSpace *)baseCS)->getTintTransformFunc();
baseCS->getDefaultRanges(low, range, indexedCS->getIndexHigh());
if (((GfxDeviceNColorSpace *)baseCS)->getAlt()->getMode() == csLab) {
labCS = (GfxLabColorSpace *)((GfxDeviceNColorSpace *)baseCS)->getAlt();
} else {
labCS = NULL;
}
numAltComps = ((GfxDeviceNColorSpace *)baseCS)->getAlt()->getNComps();
p = lookup;
for (i = 0; i <= n; i += 8) {
writePS(" ");
for (j = i; j < i+8 && j <= n; ++j) {
for (k = 0; k < numComps; ++k) {
x[k] = low[k] + (*p++ / 255.0) * range[k];
}
func->transform(x, y);
if (labCS) {
y[0] /= 100.0;
y[1] = (y[1] - labCS->getAMin()) /
(labCS->getAMax() - labCS->getAMin());
y[2] = (y[2] - labCS->getBMin()) /
(labCS->getBMax() - labCS->getBMin());
}
for (k = 0; k < numAltComps; ++k) {
byte = (int)(y[k] * 255 + 0.5);
if (byte < 0) {
byte = 0;
} else if (byte > 255) {
byte = 255;
}
writePSFmt("{0:02x}", byte);
}
if (updateColors) {
color.c[0] = dblToCol(j);
indexedCS->getCMYK(&color, &cmyk);
addProcessColor(colToDbl(cmyk.c), colToDbl(cmyk.m),
colToDbl(cmyk.y), colToDbl(cmyk.k));
}
}
writePS("\n");
}
} else {
for (i = 0; i <= n; i += 8) {
writePS(" ");
for (j = i; j < i+8 && j <= n; ++j) {
for (k = 0; k < numComps; ++k) {
writePSFmt("{0:02x}", lookup[j * numComps + k]);
}
if (updateColors) {
color.c[0] = dblToCol(j);
indexedCS->getCMYK(&color, &cmyk);
addProcessColor(colToDbl(cmyk.c), colToDbl(cmyk.m),
colToDbl(cmyk.y), colToDbl(cmyk.k));
}
}
writePS("\n");
}
}
writePS(">]");
if (genXform) {
writePS(" {}");
}
break;
case csSeparation:
separationCS = (GfxSeparationColorSpace *)colorSpace;
writePS("[/Separation ");
writePSString(separationCS->getName());
writePS(" ");
dumpColorSpaceL2(separationCS->getAlt(), gFalse, gFalse, gFalse);
writePS("\n");
cvtFunction(separationCS->getFunc());
writePS("]");
if (genXform) {
writePS(" {}");
}
if (updateColors) {
addCustomColor(separationCS);
}
break;
case csDeviceN:
// DeviceN color spaces are a Level 3 PostScript feature.
deviceNCS = (GfxDeviceNColorSpace *)colorSpace;
dumpColorSpaceL2(deviceNCS->getAlt(), gFalse, updateColors, map01);
if (genXform) {
writePS(" ");
cvtFunction(deviceNCS->getTintTransformFunc());
}
break;
case csPattern:
//~ unimplemented
break;
}
}
|
Safe
|
[] |
poppler
|
abf167af8b15e5f3b510275ce619e6fdb42edd40
|
2.333215722591233e+38
| 254 |
Implement tiling/patterns in SplashOutputDev
Fixes bug 13518
| 0 |
static void spectral_to_sample(AACContext *ac)
{
int i, type;
void (*imdct_and_window)(AACContext *ac, SingleChannelElement *sce);
switch (ac->oc[1].m4ac.object_type) {
case AOT_ER_AAC_LD:
imdct_and_window = imdct_and_windowing_ld;
break;
case AOT_ER_AAC_ELD:
imdct_and_window = imdct_and_windowing_eld;
break;
default:
imdct_and_window = ac->imdct_and_windowing;
}
for (type = 3; type >= 0; type--) {
for (i = 0; i < MAX_ELEM_ID; i++) {
ChannelElement *che = ac->che[type][i];
if (che) {
if (type <= TYPE_CPE)
apply_channel_coupling(ac, che, type, i, BEFORE_TNS, apply_dependent_coupling);
if (ac->oc[1].m4ac.object_type == AOT_AAC_LTP) {
if (che->ch[0].ics.predictor_present) {
if (che->ch[0].ics.ltp.present)
ac->apply_ltp(ac, &che->ch[0]);
if (che->ch[1].ics.ltp.present && type == TYPE_CPE)
ac->apply_ltp(ac, &che->ch[1]);
}
}
if (che->ch[0].tns.present)
ac->apply_tns(che->ch[0].coeffs, &che->ch[0].tns, &che->ch[0].ics, 1);
if (che->ch[1].tns.present)
ac->apply_tns(che->ch[1].coeffs, &che->ch[1].tns, &che->ch[1].ics, 1);
if (type <= TYPE_CPE)
apply_channel_coupling(ac, che, type, i, BETWEEN_TNS_AND_IMDCT, apply_dependent_coupling);
if (type != TYPE_CCE || che->coup.coupling_point == AFTER_IMDCT) {
imdct_and_window(ac, &che->ch[0]);
if (ac->oc[1].m4ac.object_type == AOT_AAC_LTP)
ac->update_ltp(ac, &che->ch[0]);
if (type == TYPE_CPE) {
imdct_and_window(ac, &che->ch[1]);
if (ac->oc[1].m4ac.object_type == AOT_AAC_LTP)
ac->update_ltp(ac, &che->ch[1]);
}
if (ac->oc[1].m4ac.sbr > 0) {
ff_sbr_apply(ac, &che->sbr, type, che->ch[0].ret, che->ch[1].ret);
}
}
if (type <= TYPE_CCE)
apply_channel_coupling(ac, che, type, i, AFTER_IMDCT, apply_independent_coupling);
}
}
}
}
|
Safe
|
[
"CWE-703"
] |
FFmpeg
|
6e42ccb9dbc13836cd52cda594f819d17af9afa2
|
3.07868876734974e+38
| 53 |
avcodec/aacdec: Fix pulse position checks in decode_pulses()
Fixes out of array read
Fixes: asan_static-oob_1efed25_1887_cov_2013541199_HeyYa_RA10_AAC_192K_30s.rm
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
| 0 |
static void l2cap_start_connection(struct l2cap_chan *chan)
{
if (__amp_capable(chan)) {
BT_DBG("chan %p AMP capable: discover AMPs", chan);
a2mp_discover_amp(chan);
} else if (chan->conn->hcon->type == LE_LINK) {
l2cap_le_start(chan);
} else {
l2cap_send_conn_req(chan);
}
}
|
Safe
|
[
"CWE-787"
] |
linux
|
e860d2c904d1a9f38a24eb44c9f34b8f915a6ea3
|
4.551156073519913e+37
| 11 |
Bluetooth: Properly check L2CAP config option output buffer length
Validate the output buffer length for L2CAP config requests and responses
to avoid overflowing the stack buffer used for building the option blocks.
Cc: stable@vger.kernel.org
Signed-off-by: Ben Seri <ben@armis.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
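The fix described above validates the output buffer length before each option block is appended. A minimal sketch of that check in plain C (hypothetical helper, not the actual L2CAP code) for a type/length/value option layout:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Append a type/len/value option only if it fits in the remaining
 * output space; returns bytes written, or 0 on would-be overflow. */
static size_t put_option(unsigned char *buf, size_t used, size_t cap,
                         unsigned char type, const void *val,
                         unsigned char len)
{
    if (used > cap || cap - used < (size_t)len + 2)  /* 2 = type + len bytes */
        return 0;
    buf[used] = type;
    buf[used + 1] = len;
    memcpy(buf + used + 2, val, len);
    return (size_t)len + 2;
}
```

The caller accumulates `used` from the return values and stops as soon as a 0 comes back, so a crafted request can never run the writer past the end of the stack buffer.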
static MagickBooleanType ReadHEICExifProfile(Image *image,
struct heif_image_handle *image_handle,ExceptionInfo *exception)
{
heif_item_id
exif_id;
int
count;
count=heif_image_handle_get_list_of_metadata_block_IDs(image_handle,"Exif",
&exif_id,1);
if (count > 0)
{
size_t
exif_size;
unsigned char
*exif_buffer;
/*
Read Exif profile.
*/
exif_size=heif_image_handle_get_metadata_size(image_handle,exif_id);
if ((MagickSizeType) exif_size > GetBlobSize(image))
ThrowBinaryException(CorruptImageError,"InsufficientImageDataInFile",
image->filename);
exif_buffer=(unsigned char *) AcquireMagickMemory(exif_size);
if (exif_buffer != (unsigned char *) NULL)
{
struct heif_error
error;
error=heif_image_handle_get_metadata(image_handle,
exif_id,exif_buffer);
if (error.code == 0)
{
StringInfo
*profile;
/*
The first 4 bytes should be skipped since they indicate the
offset to the start of the TIFF header of the Exif data.
*/
profile=(StringInfo*) NULL;
if (exif_size > 8)
profile=BlobToStringInfo(exif_buffer+4,(size_t) exif_size-4);
if (profile != (StringInfo*) NULL)
{
(void) SetImageProfile(image,"exif",profile,exception);
profile=DestroyStringInfo(profile);
}
}
}
exif_buffer=(unsigned char *) RelinquishMagickMemory(exif_buffer);
}
return(MagickTrue);
}
|
Safe
|
[
"CWE-125"
] |
ImageMagick
|
868aad754ee599eb7153b84d610f2ecdf7b339f6
|
1.2286720265808363e+38
| 57 |
Always correct the width and height of the image (#1859).
| 0 |
void vStreamBufferSetStreamBufferNumber( StreamBufferHandle_t xStreamBuffer,
UBaseType_t uxStreamBufferNumber )
{
xStreamBuffer->uxStreamBufferNumber = uxStreamBufferNumber;
}
|
Safe
|
[
"CWE-190"
] |
FreeRTOS-Kernel
|
d05b9c123f2bf9090bce386a244fc934ae44db5b
|
1.3290989223291979e+38
| 5 |
Add addition overflow check for stream buffer (#226)
| 0 |
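The commit title refers to guarding a size addition against wrap-around. The canonical check for unsigned addition overflow, written as a standalone helper (a sketch, not FreeRTOS's actual code), is:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* True when a + b would wrap around SIZE_MAX — the condition a
 * stream-buffer creation path must reject before allocating
 * (buffer length + bookkeeping) bytes. */
static int add_would_overflow(size_t a, size_t b)
{
    return a > SIZE_MAX - b;
}
```

The subtraction form avoids performing the overflowing addition itself, so the test is well-defined for every pair of operands.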
aac_rate_idx (gint rate)
{
if (92017 <= rate)
return 0;
else if (75132 <= rate)
return 1;
else if (55426 <= rate)
return 2;
else if (46009 <= rate)
return 3;
else if (37566 <= rate)
return 4;
else if (27713 <= rate)
return 5;
else if (23004 <= rate)
return 6;
else if (18783 <= rate)
return 7;
else if (13856 <= rate)
return 8;
else if (11502 <= rate)
return 9;
else if (9391 <= rate)
return 10;
else
return 11;
}
|
Safe
|
[] |
gst-plugins-good
|
9181191511f9c0be6a89c98b311f49d66bd46dc3
|
2.5413363569043602e+38
| 27 |
matroskademux: Fix extraction of multichannel WavPack
The old code had a couple of issues that all led to potential memory
safety bugs.
- Use a constant for the Wavpack4Header size instead of using sizeof.
It's written out into the data and not from the struct and who knows
what special alignment/padding requirements some C compilers have.
- gst_buffer_set_size() does not realloc the buffer when setting a
bigger size than allocated, it only allows growing up to the maximum
allocated size. Instead use a GstAdapter to collect all the blocks
and take out everything at once in the end.
- Check that enough data is actually available in the input and
otherwise handle it as an error in all cases instead of silently
ignoring it.
Among other things this fixes out of bounds writes because the code
assumed gst_buffer_set_size() can grow the buffer and simply wrote after
the end of the buffer.
Thanks to Natalie Silvanovich for reporting.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/859
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/903>
| 0 |
directory_fetches_dir_info_early(const or_options_t *options)
{
return directory_fetches_from_authorities(options);
}
|
Safe
|
[] |
tor
|
02e05bd74dbec614397b696cfcda6525562a4675
|
6.162660811938004e+37
| 4 |
When examining descriptors as a dirserver, reject ones with bad versions
This is an extra fix for bug 21278: it ensures that these
descriptors and platforms will never be listed in a legit consensus.
| 0 |
static int get_crl_delta(X509_STORE_CTX *ctx,
X509_CRL **pcrl, X509_CRL **pdcrl, X509 *x)
{
int ok;
X509 *issuer = NULL;
int crl_score = 0;
unsigned int reasons;
X509_CRL *crl = NULL, *dcrl = NULL;
STACK_OF(X509_CRL) *skcrl;
X509_NAME *nm = X509_get_issuer_name(x);
reasons = ctx->current_reasons;
ok = get_crl_sk(ctx, &crl, &dcrl,
&issuer, &crl_score, &reasons, ctx->crls);
if (ok)
goto done;
/* Lookup CRLs from store */
skcrl = ctx->lookup_crls(ctx, nm);
/* If no CRLs found and a near match from get_crl_sk use that */
if (!skcrl && crl)
goto done;
get_crl_sk(ctx, &crl, &dcrl, &issuer, &crl_score, &reasons, skcrl);
sk_X509_CRL_pop_free(skcrl, X509_CRL_free);
done:
/* If we got any kind of CRL use it and return success */
if (crl)
{
ctx->current_issuer = issuer;
ctx->current_crl_score = crl_score;
ctx->current_reasons = reasons;
*pcrl = crl;
*pdcrl = dcrl;
return 1;
}
return 0;
}
|
Safe
|
[] |
openssl
|
d65b8b2162f33ac0d53dace588a0847ed827626c
|
5.853230232197728e+37
| 44 |
Backport OCSP fixes.
| 0 |
static inline MagickSizeType ScaleQuantumToLongLong(const Quantum quantum)
{
#if !defined(MAGICKCORE_HDRI_SUPPORT)
return((MagickSizeType) quantum);
#else
if (quantum <= 0.0)
return(0);
if (quantum >= 18446744073709551615.0)
return(MagickULLConstant(18446744073709551615));
return((MagickSizeType) (quantum+0.5));
#endif
}
|
Vulnerable
|
[
"CWE-190"
] |
ImageMagick
|
95d4e94e0353e503b71a53f5e6fad173c7c70c90
|
1.5304005629676582e+38
| 12 |
https://github.com/ImageMagick/ImageMagick/issues/1751
| 1 |
get_progress_error (const GError *update_error)
{
g_autofree gchar *name = NULL;
name = g_dbus_error_encode_gerror (update_error);
/* Don't return weird dbus wrapped things from the portal */
if (g_str_has_prefix (name, "org.gtk.GDBus.UnmappedGError.Quark"))
return g_strdup ("org.freedesktop.DBus.Error.Failed");
return g_steal_pointer (&name);
}
|
Safe
|
[
"CWE-94",
"CWE-74"
] |
flatpak
|
aeb6a7ab0abaac4a8f4ad98b3df476d9de6b8bd4
|
1.1366873751038551e+38
| 11 |
portal: Convert --env in extra-args into --env-fd
This hides overridden variables from the command-line, which means
processes running under other uids can't see them in /proc/*/cmdline,
which might be important if they contain secrets.
Signed-off-by: Simon McVittie <smcv@collabora.com>
Part-of: https://github.com/flatpak/flatpak/security/advisories/GHSA-4ppf-fxf6-vxg2
| 0 |
CImg<T> get_dilate(const unsigned int s) const {
return (+*this).dilate(s);
}
|
Safe
|
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
|
8.913873252267514e+36
| 3 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in the file do not exceed the file size.
| 0 |
smi_from_recv_msg(struct ipmi_smi *intf, struct ipmi_recv_msg *recv_msg,
unsigned char seq, long seqid)
{
struct ipmi_smi_msg *smi_msg = ipmi_alloc_smi_msg();
if (!smi_msg)
/*
* If we can't allocate the message, then just return, we
* get 4 retries, so this should be ok.
*/
return NULL;
memcpy(smi_msg->data, recv_msg->msg.data, recv_msg->msg.data_len);
smi_msg->data_size = recv_msg->msg.data_len;
smi_msg->msgid = STORE_SEQ_IN_MSGID(seq, seqid);
pr_debug("Resend: %*ph\n", smi_msg->data_size, smi_msg->data);
return smi_msg;
}
|
Safe
|
[
"CWE-400",
"CWE-401"
] |
linux
|
4aa7afb0ee20a97fbf0c5bab3df028d5fb85fdab
|
1.2592722265169888e+38
| 19 |
ipmi: Fix memory leak in __ipmi_bmc_register
In the implementation of __ipmi_bmc_register() the allocated memory for
bmc should be released in case ida_simple_get() fails.
Fixes: 68e7e50f195f ("ipmi: Don't use BMC product/dev ids in the BMC name")
Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Message-Id: <20191021200649.1511-1-navid.emamdoost@gmail.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
| 0 |
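The leak fixed above is the classic missing-release-on-error-path shape: an allocation succeeds, a later setup step fails, and the function returns without freeing. A compact sketch of the corrected goto-cleanup pattern (names are illustrative; `fail_id_alloc` stands in for `ida_simple_get()` failing):

```c
#include <assert.h>
#include <stdlib.h>

/* Every failure after the allocation must release it before
 * returning; the goto label centralizes that cleanup. */
static int register_thing(int fail_id_alloc)
{
    void *bmc = malloc(32);
    if (!bmc)
        return -1;

    if (fail_id_alloc)      /* e.g. the id allocator returned an error */
        goto out_free;

    /* ... further registration steps would go here ... */
    free(bmc);              /* normal teardown, kept simple for the sketch */
    return 0;

out_free:
    free(bmc);              /* the fix: no leak on the error path */
    return -1;
}
```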
static bool check_no_constants(THD *, partition_info*)
{
return FALSE;
}
|
Safe
|
[
"CWE-416"
] |
server
|
c02ebf3510850ba78a106be9974c94c3b97d8585
|
2.992925494940279e+38
| 4 |
MDEV-24176 Preparations
1. moved fix_vcol_exprs() call to open_table()
mysql_alter_table() doesn't do lock_tables() so it cannot win from
fix_vcol_exprs() from there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
| 0 |
static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
struct io_comp_state *cs)
{
spin_lock_irq(&ctx->completion_lock);
list_splice_init(&ctx->locked_free_list, &cs->free_list);
ctx->locked_free_nr = 0;
spin_unlock_irq(&ctx->completion_lock);
}
|
Safe
|
[
"CWE-125"
] |
linux
|
89c2b3b74918200e46699338d7bcc19b1ea12110
|
2.7051215768014675e+38
| 8 |
io_uring: reexpand under-reexpanded iters
[ 74.211232] BUG: KASAN: stack-out-of-bounds in iov_iter_revert+0x809/0x900
[ 74.212778] Read of size 8 at addr ffff888025dc78b8 by task
syz-executor.0/828
[ 74.214756] CPU: 0 PID: 828 Comm: syz-executor.0 Not tainted
5.14.0-rc3-next-20210730 #1
[ 74.216525] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 74.219033] Call Trace:
[ 74.219683] dump_stack_lvl+0x8b/0xb3
[ 74.220706] print_address_description.constprop.0+0x1f/0x140
[ 74.224226] kasan_report.cold+0x7f/0x11b
[ 74.226085] iov_iter_revert+0x809/0x900
[ 74.227960] io_write+0x57d/0xe40
[ 74.232647] io_issue_sqe+0x4da/0x6a80
[ 74.242578] __io_queue_sqe+0x1ac/0xe60
[ 74.245358] io_submit_sqes+0x3f6e/0x76a0
[ 74.248207] __do_sys_io_uring_enter+0x90c/0x1a20
[ 74.257167] do_syscall_64+0x3b/0x90
[ 74.257984] entry_SYSCALL_64_after_hwframe+0x44/0xae
old_size = iov_iter_count();
...
iov_iter_revert(old_size - iov_iter_count());
If iov_iter_revert() is done based on the initial size as above, and the
iter is truncated and not reexpanded in the middle, it miscalculates
borders, causing problems. This trace is due to no one reexpanding after
generic_write_checks().
Now iters store how many bytes has been truncated, so reexpand them to
the initial state right before reverting.
Cc: stable@vger.kernel.org
Reported-by: Palash Oswal <oswalpalash@gmail.com>
Reported-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Reported-and-tested-by: syzbot+9671693590ef5aad8953@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| 0 |
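The bookkeeping the commit message describes can be modeled with two counters: `count` (bytes remaining) and `truncated` (bytes a truncation hid). Reverting by `old_size - count` is only correct once the iterator has been reexpanded. A toy model, not the real `iov_iter` API:

```c
#include <assert.h>
#include <stddef.h>

struct iter {
    size_t count;      /* bytes remaining in the iterator */
    size_t truncated;  /* bytes hidden by truncation, pending reexpand */
};

static void truncate_to(struct iter *it, size_t n)
{
    if (it->count > n) {
        it->truncated += it->count - n;
        it->count = n;
    }
}

/* Restore the truncated bytes so (old_size - count) again equals the
 * number of bytes actually consumed. */
static void reexpand(struct iter *it)
{
    it->count += it->truncated;
    it->truncated = 0;
}
```

In the model: start at 100 bytes, truncate to 60, consume 10; without reexpanding, `old_size - count` would claim 50 bytes were consumed and the revert would overshoot by 40.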
static void auth_request_userdb_save_cache(struct auth_request *request,
enum userdb_result result)
{
struct auth_userdb *userdb = request->userdb;
string_t *str;
const char *cache_value;
if (passdb_cache == NULL || userdb->cache_key == NULL)
return;
if (result == USERDB_RESULT_USER_UNKNOWN)
cache_value = "";
else {
str = t_str_new(128);
auth_fields_append(request->fields.userdb_reply, str,
AUTH_FIELD_FLAG_CHANGED,
AUTH_FIELD_FLAG_CHANGED);
if (request->user_changed_by_lookup) {
/* username was changed by passdb or userdb */
if (str_len(str) > 0)
str_append_c(str, '\t');
str_append(str, "user=");
str_append_tabescaped(str, request->fields.user);
}
if (str_len(str) == 0) {
/* no userdb fields. but we can't save an empty string,
since that means "user unknown". */
str_append(str, AUTH_REQUEST_USER_KEY_IGNORE);
}
cache_value = str_c(str);
}
/* last_success has no meaning with userdb */
auth_cache_insert(passdb_cache, request, userdb->cache_key,
cache_value, FALSE);
}
|
Safe
|
[
"CWE-284"
] |
core
|
7bad6a24160e34bce8f10e73dbbf9e5fbbcd1904
|
3.177566449791697e+37
| 35 |
auth: Fix handling passdbs with identical driver/args but different mechanisms/username_filter
The passdb was wrongly deduplicated in this situation, causing the wrong
mechanisms or username_filter setting to be used. This would be a rather
unlikely configuration though.
Fixed by moving mechanisms and username_filter from struct passdb_module
to struct auth_passdb, which is where they should have been in the first
place.
| 0 |
bool sctp_verify_asconf(const struct sctp_association *asoc,
struct sctp_chunk *chunk, bool addr_param_needed,
struct sctp_paramhdr **errp)
{
sctp_addip_chunk_t *addip = (sctp_addip_chunk_t *) chunk->chunk_hdr;
union sctp_params param;
bool addr_param_seen = false;
sctp_walk_params(param, addip, addip_hdr.params) {
size_t length = ntohs(param.p->length);
*errp = param.p;
switch (param.p->type) {
case SCTP_PARAM_ERR_CAUSE:
break;
case SCTP_PARAM_IPV4_ADDRESS:
if (length != sizeof(sctp_ipv4addr_param_t))
return false;
addr_param_seen = true;
break;
case SCTP_PARAM_IPV6_ADDRESS:
if (length != sizeof(sctp_ipv6addr_param_t))
return false;
addr_param_seen = true;
break;
case SCTP_PARAM_ADD_IP:
case SCTP_PARAM_DEL_IP:
case SCTP_PARAM_SET_PRIMARY:
/* In ASCONF chunks, these need to be first. */
if (addr_param_needed && !addr_param_seen)
return false;
length = ntohs(param.addip->param_hdr.length);
if (length < sizeof(sctp_addip_param_t) +
sizeof(sctp_paramhdr_t))
return false;
break;
case SCTP_PARAM_SUCCESS_REPORT:
case SCTP_PARAM_ADAPTATION_LAYER_IND:
if (length != sizeof(sctp_addip_param_t))
return false;
break;
default:
/* This is unknown to us, reject! */
return false;
}
}
/* Remaining sanity checks. */
if (addr_param_needed && !addr_param_seen)
return false;
if (!addr_param_needed && addr_param_seen)
return false;
if (param.v != chunk->chunk_end)
return false;
return true;
}
|
Safe
|
[
"CWE-20",
"CWE-399"
] |
linux
|
9de7922bc709eee2f609cd01d98aaedc4cf5ea74
|
2.654690871401437e+38
| 57 |
net: sctp: fix skb_over_panic when receiving malformed ASCONF chunks
Commit 6f4c618ddb0 ("SCTP : Add paramters validity check for
ASCONF chunk") added basic verification of ASCONF chunks, however,
it is still possible to remotely crash a server by sending a
special crafted ASCONF chunk, even up to pre 2.6.12 kernels:
skb_over_panic: text:ffffffffa01ea1c3 len:31056 put:30768
head:ffff88011bd81800 data:ffff88011bd81800 tail:0x7950
end:0x440 dev:<NULL>
------------[ cut here ]------------
kernel BUG at net/core/skbuff.c:129!
[...]
Call Trace:
<IRQ>
[<ffffffff8144fb1c>] skb_put+0x5c/0x70
[<ffffffffa01ea1c3>] sctp_addto_chunk+0x63/0xd0 [sctp]
[<ffffffffa01eadaf>] sctp_process_asconf+0x1af/0x540 [sctp]
[<ffffffff8152d025>] ? _read_unlock_bh+0x15/0x20
[<ffffffffa01e0038>] sctp_sf_do_asconf+0x168/0x240 [sctp]
[<ffffffffa01e3751>] sctp_do_sm+0x71/0x1210 [sctp]
[<ffffffff8147645d>] ? fib_rules_lookup+0xad/0xf0
[<ffffffffa01e6b22>] ? sctp_cmp_addr_exact+0x32/0x40 [sctp]
[<ffffffffa01e8393>] sctp_assoc_bh_rcv+0xd3/0x180 [sctp]
[<ffffffffa01ee986>] sctp_inq_push+0x56/0x80 [sctp]
[<ffffffffa01fcc42>] sctp_rcv+0x982/0xa10 [sctp]
[<ffffffffa01d5123>] ? ipt_local_in_hook+0x23/0x28 [iptable_filter]
[<ffffffff8148bdc9>] ? nf_iterate+0x69/0xb0
[<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0
[<ffffffff8148bf86>] ? nf_hook_slow+0x76/0x120
[<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0
[<ffffffff81496ded>] ip_local_deliver_finish+0xdd/0x2d0
[<ffffffff81497078>] ip_local_deliver+0x98/0xa0
[<ffffffff8149653d>] ip_rcv_finish+0x12d/0x440
[<ffffffff81496ac5>] ip_rcv+0x275/0x350
[<ffffffff8145c88b>] __netif_receive_skb+0x4ab/0x750
[<ffffffff81460588>] netif_receive_skb+0x58/0x60
This can be triggered e.g., through a simple scripted nmap
connection scan injecting the chunk after the handshake, for
example, ...
-------------- INIT[ASCONF; ASCONF_ACK] ------------->
<----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------
-------------------- COOKIE-ECHO -------------------->
<-------------------- COOKIE-ACK ---------------------
------------------ ASCONF; UNKNOWN ------------------>
... where ASCONF chunk of length 280 contains 2 parameters ...
1) Add IP address parameter (param length: 16)
2) Add/del IP address parameter (param length: 255)
... followed by an UNKNOWN chunk of e.g. 4 bytes. Here, the
Address Parameter in the ASCONF chunk is even missing, too.
This is just an example and similarly-crafted ASCONF chunks
could be used just as well.
The ASCONF chunk passes through sctp_verify_asconf() as all
parameters passed sanity checks, and after walking, we ended
up successfully at the chunk end boundary, and thus may invoke
sctp_process_asconf(). Parameter walking is done with
WORD_ROUND() to take padding into account.
In sctp_process_asconf()'s TLV processing, we may fail in
sctp_process_asconf_param() e.g., due to removal of the IP
address that is also the source address of the packet containing
the ASCONF chunk, and thus we need to add all TLVs after the
failure to our ASCONF response to remote via helper function
sctp_add_asconf_response(), which basically invokes a
sctp_addto_chunk() adding the error parameters to the given
skb.
When walking to the next parameter this time, we proceed
with ...
length = ntohs(asconf_param->param_hdr.length);
asconf_param = (void *)asconf_param + length;
... instead of the WORD_ROUND()'ed length, thus resulting here
in an off-by-one that leads to reading the follow-up garbage
parameter length of 12336, and thus throwing an skb_over_panic
for the reply when trying to sctp_addto_chunk() next time,
which implicitly calls the skb_put() with that length.
Fix it by using sctp_walk_params() [ which is also used in
INIT parameter processing ] macro in the verification *and*
in ASCONF processing: it will make sure we don't spill over,
that we walk parameters WORD_ROUND()'ed. Moreover, we're being
more defensive and guard against unknown parameter types and
missized addresses.
Joint work with Vlad Yasevich.
Fixes: b896b82be4ae ("[SCTP] ADDIP: Support for processing incoming ASCONF_ACK chunks.")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Vlad Yasevich <vyasevich@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
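The key point in the fix above is that SCTP parameters are padded to 4-byte boundaries, so the walker must step by the rounded length and must verify that the rounded length still fits inside the chunk. A self-contained sketch of such a TLV walk (big-endian 16-bit length at offset 2, as in SCTP; the function itself is hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Round a parameter length up to the next 4-byte boundary,
 * mirroring the kernel's WORD_ROUND(). */
static size_t word_round(size_t len)
{
    return (len + 3) & ~(size_t)3;
}

/* Count well-formed TLV parameters in buf; reject anything that is
 * truncated, undersized, or would walk past the chunk end. */
static int count_params(const uint8_t *buf, size_t total)
{
    size_t off = 0;
    int n = 0;

    while (off + 4 <= total) {
        size_t len = ((size_t)buf[off + 2] << 8) | buf[off + 3];
        if (len < 4 || word_round(len) > total - off)
            return -1;           /* oversized or missing padding: reject */
        off += word_round(len);  /* step by the padded length */
        n++;
    }
    return off == total ? n : -1;
}
```

Stepping by the raw `len` instead of `word_round(len)` is exactly the off-by-padding walk the commit describes: the next iteration reads padding bytes as a parameter header and gets a garbage length.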
RemoveSchemaById(Oid schemaOid)
{
Relation relation;
HeapTuple tup;
relation = table_open(NamespaceRelationId, RowExclusiveLock);
tup = SearchSysCache1(NAMESPACEOID,
ObjectIdGetDatum(schemaOid));
if (!HeapTupleIsValid(tup)) /* should not happen */
elog(ERROR, "cache lookup failed for namespace %u", schemaOid);
CatalogTupleDelete(relation, &tup->t_self);
ReleaseSysCache(tup);
table_close(relation, RowExclusiveLock);
}
|
Safe
|
[
"CWE-94"
] |
postgres
|
7e92f78abe80e4b30e648a40073abb59057e21f8
|
1.3150666856070329e+38
| 18 |
In extensions, don't replace objects not belonging to the extension.
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
| 0 |
ofputil_encode_queue_stats_request(enum ofp_version ofp_version,
const struct ofputil_queue_stats_request *oqsr)
{
struct ofpbuf *request;
switch (ofp_version) {
case OFP11_VERSION:
case OFP12_VERSION:
case OFP13_VERSION:
case OFP14_VERSION:
case OFP15_VERSION:
case OFP16_VERSION: {
struct ofp11_queue_stats_request *req;
request = ofpraw_alloc(OFPRAW_OFPST11_QUEUE_REQUEST, ofp_version, 0);
req = ofpbuf_put_zeros(request, sizeof *req);
req->port_no = ofputil_port_to_ofp11(oqsr->port_no);
req->queue_id = htonl(oqsr->queue_id);
break;
}
case OFP10_VERSION: {
struct ofp10_queue_stats_request *req;
request = ofpraw_alloc(OFPRAW_OFPST10_QUEUE_REQUEST, ofp_version, 0);
req = ofpbuf_put_zeros(request, sizeof *req);
/* OpenFlow 1.0 needs OFPP_ALL instead of OFPP_ANY */
req->port_no = htons(ofp_to_u16(oqsr->port_no == OFPP_ANY
? OFPP_ALL : oqsr->port_no));
req->queue_id = htonl(oqsr->queue_id);
break;
}
default:
OVS_NOT_REACHED();
}
return request;
}
|
Safe
|
[
"CWE-772"
] |
ovs
|
77ad4225d125030420d897c873e4734ac708c66b
|
6.362281449808601e+37
| 35 |
ofp-util: Fix memory leaks on error cases in ofputil_decode_group_mod().
Found by libFuzzer.
Reported-by: Bhargava Shastry <bshastry@sec.t-labs.tu-berlin.de>
Signed-off-by: Ben Pfaff <blp@ovn.org>
Acked-by: Justin Pettit <jpettit@ovn.org>
| 0 |
static int ip_vs_stats_percpu_seq_open(struct inode *inode, struct file *file)
{
return single_open_net(inode, file, ip_vs_stats_percpu_show);
}
|
Safe
|
[
"CWE-200"
] |
linux
|
2d8a041b7bfe1097af21441cb77d6af95f4f4680
|
1.542680988205826e+38
| 4 |
ipvs: fix info leak in getsockopt(IP_VS_SO_GET_TIMEOUT)
If at least one of CONFIG_IP_VS_PROTO_TCP or CONFIG_IP_VS_PROTO_UDP is
not set, __ip_vs_get_timeouts() does not fully initialize the structure
that gets copied to userland and that for leaks up to 12 bytes of kernel
stack. Add an explicit memset(0) before passing the structure to
__ip_vs_get_timeouts() to avoid the info leak.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Wensong Zhang <wensong@linux-vs.org>
Cc: Simon Horman <horms@verge.net.au>
Cc: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
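The fix described above is the standard defense against stack info leaks: zero the whole structure before conditionally filling in fields, so no stale bytes survive into the copy to userland. A small sketch with invented field names, not the actual ipvs structure:

```c
#include <assert.h>
#include <string.h>

struct timeouts {
    int tcp;
    int tcp_fin;
    int udp;
};

/* Depending on configuration, only some fields get filled in.
 * memset() first guarantees the untouched fields are zero rather
 * than leftover stack contents when the struct is copied out. */
static void get_timeouts(struct timeouts *t, int have_tcp, int have_udp)
{
    memset(t, 0, sizeof *t);   /* the fix: no uninitialized bytes */
    if (have_tcp) {
        t->tcp = 900;
        t->tcp_fin = 120;
    }
    if (have_udp)
        t->udp = 300;
}
```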
static inline void securityfs_remove(struct dentry *dentry)
{}
|
Safe
|
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
|
2.4742665986271012e+38
| 2 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
| 0 |
static void prb_fill_vlan_info(struct tpacket_kbdq_core *pkc,
struct tpacket3_hdr *ppd)
{
if (skb_vlan_tag_present(pkc->skb)) {
ppd->hv1.tp_vlan_tci = skb_vlan_tag_get(pkc->skb);
ppd->hv1.tp_vlan_tpid = ntohs(pkc->skb->vlan_proto);
ppd->tp_status = TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
} else {
ppd->hv1.tp_vlan_tci = 0;
ppd->hv1.tp_vlan_tpid = 0;
ppd->tp_status = TP_STATUS_AVAILABLE;
}
}
|
Safe
|
[
"CWE-416",
"CWE-362"
] |
linux
|
84ac7260236a49c79eede91617700174c2c19b0c
|
1.4963909862756364e+38
| 13 |
packet: fix race condition in packet_set_ring
When packet_set_ring creates a ring buffer it will initialize a
struct timer_list if the packet version is TPACKET_V3. This value
can then be raced by a different thread calling setsockopt to
set the version to TPACKET_V1 before packet_set_ring has finished.
This leads to a use-after-free on a function pointer in the
struct timer_list when the socket is closed as the previously
initialized timer will not be deleted.
The bug is fixed by taking lock_sock(sk) in packet_setsockopt when
changing the packet version while also taking the lock at the start
of packet_set_ring.
Fixes: f6fb8f100b80 ("af-packet: TPACKET_V3 flexible buffer implementation.")
Signed-off-by: Philip Pettersson <philip.pettersson@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
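The fix pattern described above (take the socket lock in both the setter and the setup routine, and re-check state under it) can be sketched in ordinary userspace C. This is an analogy, not the kernel code; every name here is invented:

```c
#include <pthread.h>

/* Userspace analogy of the packet_set_ring race: one thread flips a
 * "version" while another builds a ring keyed on it. The fix mirrors
 * the commit: both paths take the same lock (standing in for
 * lock_sock(sk)) and re-check state after acquiring it. */
struct fake_sock {
    pthread_mutex_t lock;   /* stands in for lock_sock(sk) */
    int version;            /* stands in for po->tp_version */
    int ring_busy;          /* set while the ring is being (re)built */
};

/* setsockopt-style path: refuse to flip the version mid-setup. */
static int set_version(struct fake_sock *s, int v)
{
    pthread_mutex_lock(&s->lock);
    if (s->ring_busy) {               /* re-check under the lock */
        pthread_mutex_unlock(&s->lock);
        return -1;                    /* -EBUSY in kernel terms */
    }
    s->version = v;
    pthread_mutex_unlock(&s->lock);
    return 0;
}

/* packet_set_ring-style path: hold the lock for the whole setup, so
 * the version snapshot stays stable while the timer/ring is built. */
static int setup_ring(struct fake_sock *s)
{
    pthread_mutex_lock(&s->lock);
    s->ring_busy = 1;
    int v = s->version;               /* now stable */
    /* ... version-specific timer/ring initialization would go here ... */
    s->ring_busy = 0;
    pthread_mutex_unlock(&s->lock);
    return v;
}
```

Without the shared lock, set_version could run between the version snapshot and the timer initialization, which is exactly the window the commit closes.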
| 0 |
scsi_netlink_init(void)
{
struct netlink_kernel_cfg cfg = {
.input = scsi_nl_rcv_msg,
.groups = SCSI_NL_GRP_CNT,
};
scsi_nl_sock = netlink_kernel_create(&init_net, NETLINK_SCSITRANSPORT,
&cfg);
if (!scsi_nl_sock) {
printk(KERN_ERR "%s: register of receive handler failed\n",
__func__);
return;
}
return;
}
|
Safe
|
[
"CWE-264"
] |
net
|
90f62cf30a78721641e08737bda787552428061e
|
9.596342750682029e+37
| 17 |
net: Use netlink_ns_capable to verify the permissions of netlink messages
It is possible by passing a netlink socket to a more privileged
executable and then to fool that executable into writing to the socket
data that happens to be valid netlink message to do something that
privileged executable did not intend to do.
To keep this from happening, replace bare capable and ns_capable calls
with netlink_capable, netlink_net_capable and netlink_ns_capable calls.
These act the same as the previous calls except that they also verify that
the opener of the socket had the desired permissions.
Reported-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
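The idea of the netlink fix is that the privilege of the process that opened the socket is captured once and checked in addition to the current caller's privilege. A minimal userspace sketch of that idea (names and fields are illustrative, not the kernel API):

```c
#include <stdbool.h>

/* Sketch of the netlink_ns_capable() idea: record whether the process
 * that *opened* the socket was privileged, and require that in addition
 * to the current sender's privilege. */
struct fake_nlsock {
    bool opener_was_capable;   /* captured once, at open time */
};

static struct fake_nlsock open_sock(bool caller_capable)
{
    struct fake_nlsock sk = { .opener_was_capable = caller_capable };
    return sk;
}

/* The fixed check: both the opener and the current sender must hold the
 * capability, so passing the fd to a privileged helper gains nothing. */
static bool may_admin(const struct fake_nlsock *sk, bool current_capable)
{
    return sk->opener_was_capable && current_capable;
}
```

An unprivileged opener who tricks a privileged executable into writing to the socket now fails the check, because the opener's lack of privilege was recorded at open time.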
| 0 |
static int ing_filter(struct sk_buff *skb)
{
struct net_device *dev = skb->dev;
u32 ttl = G_TC_RTTL(skb->tc_verd);
struct netdev_queue *rxq;
int result = TC_ACT_OK;
struct Qdisc *q;
if (MAX_RED_LOOP < ttl++) {
printk(KERN_WARNING
"Redir loop detected Dropping packet (%d->%d)\n",
skb->skb_iif, dev->ifindex);
return TC_ACT_SHOT;
}
skb->tc_verd = SET_TC_RTTL(skb->tc_verd, ttl);
skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);
rxq = &dev->rx_queue;
q = rxq->qdisc;
if (q != &noop_qdisc) {
spin_lock(qdisc_lock(q));
if (likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
result = qdisc_enqueue_root(skb, q);
spin_unlock(qdisc_lock(q));
}
return result;
}
|
Safe
|
[
"CWE-399"
] |
linux
|
6ec82562ffc6f297d0de36d65776cff8e5704867
|
1.78645658875269e+38
| 30 |
veth: Dont kfree_skb() after dev_forward_skb()
In case of congestion, netif_rx() frees the skb, so we must assume
dev_forward_skb() also consumes the skb.
Bug introduced by commit 445409602c092
(veth: move loopback logic to common location)
We must change dev_forward_skb() to always consume skb, and veth to not
double free it.
Bug report : http://marc.info/?l=linux-netdev&m=127310770900442&w=3
Reported-by: Martín Ferrari <martin.ferrari@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
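The "consume on every path" ownership rule the commit establishes can be modeled in plain C (a userspace analogy with invented names, not the kernel code): once the forwarding function is called, it owns the buffer and frees it on both the error and the success path, so the caller must never free it again.

```c
#include <stdlib.h>

/* Counters used to observe the ownership behavior. */
static int forwarded, freed;

static void buf_free(void *buf) { freed++; free(buf); }

/* Consumes buf unconditionally, like the fixed dev_forward_skb(). */
static int forward(void *buf, int congested)
{
    if (congested) {        /* netif_rx() frees on congestion... */
        buf_free(buf);
        return -1;
    }
    forwarded++;
    buf_free(buf);          /* ...and the normal path also takes ownership */
    return 0;
}

/* Caller (veth-like): no free after forward(), on any path. */
static int xmit(int congested)
{
    void *buf = malloc(64);
    if (!buf)
        return -1;
    return forward(buf, congested);   /* buf is gone either way */
}
```

The bug being fixed was the mirror image of this: the caller also freed the buffer on the congestion path, producing a double free.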
| 0 |
static int handle_headers_frame(h2o_http2_conn_t *conn, h2o_http2_frame_t *frame, const char **err_desc)
{
h2o_http2_headers_payload_t payload;
h2o_http2_stream_t *stream;
int ret;
/* decode */
if ((ret = h2o_http2_decode_headers_payload(&payload, frame, err_desc)) != 0)
return ret;
if ((frame->stream_id & 1) == 0) {
*err_desc = "invalid stream id in HEADERS frame";
return H2O_HTTP2_ERROR_PROTOCOL;
}
if (!(conn->pull_stream_ids.max_open < frame->stream_id)) {
if ((stream = h2o_http2_conn_get_stream(conn, frame->stream_id)) != NULL &&
stream->state == H2O_HTTP2_STREAM_STATE_RECV_BODY) {
/* is a trailer */
if ((frame->flags & H2O_HTTP2_FRAME_FLAG_END_STREAM) == 0) {
*err_desc = "trailing HEADERS frame MUST have END_STREAM flag set";
return H2O_HTTP2_ERROR_PROTOCOL;
}
stream->req.entity = h2o_iovec_init(stream->_req_body->bytes, stream->_req_body->size);
if ((frame->flags & H2O_HTTP2_FRAME_FLAG_END_HEADERS) == 0)
goto PREPARE_FOR_CONTINUATION;
return handle_trailing_headers(conn, stream, payload.headers, payload.headers_len, err_desc);
}
*err_desc = "invalid stream id in HEADERS frame";
return H2O_HTTP2_ERROR_STREAM_CLOSED;
}
if (frame->stream_id == payload.priority.dependency) {
*err_desc = "stream cannot depend on itself";
return H2O_HTTP2_ERROR_PROTOCOL;
}
if (conn->state >= H2O_HTTP2_CONN_STATE_HALF_CLOSED)
return 0;
/* open or determine the stream and prepare */
if ((stream = h2o_http2_conn_get_stream(conn, frame->stream_id)) != NULL) {
if ((frame->flags & H2O_HTTP2_FRAME_FLAG_PRIORITY) != 0)
set_priority(conn, stream, &payload.priority, 1);
} else {
stream = h2o_http2_stream_open(conn, frame->stream_id, NULL);
set_priority(conn, stream, &payload.priority, 0);
}
h2o_http2_stream_prepare_for_request(conn, stream);
/* setup container for request body if it is expected to arrive */
if ((frame->flags & H2O_HTTP2_FRAME_FLAG_END_STREAM) == 0)
h2o_buffer_init(&stream->_req_body, &h2o_socket_buffer_prototype);
if ((frame->flags & H2O_HTTP2_FRAME_FLAG_END_HEADERS) != 0) {
/* request is complete, handle it */
return handle_incoming_request(conn, stream, payload.headers, payload.headers_len, err_desc);
}
PREPARE_FOR_CONTINUATION:
/* request is not complete, store in buffer */
conn->_read_expect = expect_continuation_of_headers;
h2o_buffer_init(&conn->_headers_unparsed, &h2o_socket_buffer_prototype);
h2o_buffer_reserve(&conn->_headers_unparsed, payload.headers_len);
memcpy(conn->_headers_unparsed->bytes, payload.headers, payload.headers_len);
conn->_headers_unparsed->size = payload.headers_len;
return 0;
}
|
Safe
|
[
"CWE-703"
] |
h2o
|
1c0808d580da09fdec5a9a74ff09e103ea058dd4
|
3.377593283859911e+38
| 65 |
h2: use after free on premature connection close #920
lib/http2/connection.c:on_read() calls parse_input(), which might free
`conn`. It does so in particular if the connection preface isn't
the expected one in expect_preface(). `conn` is then used after the free
in `if (h2o_timeout_is_linked(&conn->_write.timeout_entry)`.
We fix this by adding a return value to close_connection that returns a
negative value if `conn` has been free'd and can't be used anymore.
Credits for finding the bug to Tim Newsham.
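The fix pattern here, returning a sentinel from the teardown helper so the caller knows the object is gone, can be sketched in a few lines of C. This is an illustrative model, not h2o's actual code:

```c
#include <stdlib.h>

struct conn { int open; };

/* Teardown returns a negative value meaning "c has been freed;
 * the caller must not touch it afterwards". */
static int close_connection(struct conn *c)
{
    c->open = 0;
    free(c);
    return -1;
}

static int on_read(struct conn *c, int bad_preface)
{
    if (bad_preface) {
        if (close_connection(c) < 0)
            return -1;            /* must not dereference c here */
    }
    return c->open;               /* safe: only reached while c lives */
}
```

Before the fix, the caller kept using `conn` after the parse path had freed it; checking the return value makes the free visible at the call site.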
| 0 |
static int bus_parse_next_address(sd_bus *b) {
_cleanup_free_ char *guid = NULL;
const char *a;
int r;
assert(b);
if (!b->address)
return 0;
if (b->address[b->address_index] == 0)
return 0;
bus_reset_parsed_address(b);
a = b->address + b->address_index;
while (*a != 0) {
if (*a == ';') {
a++;
continue;
}
if (startswith(a, "unix:")) {
a += 5;
r = parse_unix_address(b, &a, &guid);
if (r < 0)
return r;
break;
} else if (startswith(a, "tcp:")) {
a += 4;
r = parse_tcp_address(b, &a, &guid);
if (r < 0)
return r;
break;
} else if (startswith(a, "unixexec:")) {
a += 9;
r = parse_exec_address(b, &a, &guid);
if (r < 0)
return r;
break;
} else if (startswith(a, "x-machine-unix:")) {
a += 15;
r = parse_container_unix_address(b, &a, &guid);
if (r < 0)
return r;
break;
}
a = strchr(a, ';');
if (!a)
return 0;
}
if (guid) {
r = sd_id128_from_string(guid, &b->server_id);
if (r < 0)
return r;
}
b->address_index = a - b->address;
return 1;
}
|
Safe
|
[
"CWE-416"
] |
systemd
|
1068447e6954dc6ce52f099ed174c442cb89ed54
|
2.5083646660413607e+38
| 73 |
sd-bus: introduce API for re-enqueuing incoming messages
When authorizing via PolicyKit we want to process incoming method calls
twice: once to process and figure out that we need PK authentication,
and a second time after we acquired PK authentication to actually execute
the operation. With this new call sd_bus_enqueue_for_read() we have a
way to put an incoming message back into the read queue for this
purpose.
This might have other uses too, for example debugging.
| 0 |
int wsgi_req_async_recv(struct wsgi_request *wsgi_req) {
uwsgi.workers[uwsgi.mywid].cores[wsgi_req->async_id].in_request = 1;
wsgi_req->start_of_request = uwsgi_micros();
wsgi_req->start_of_request_in_sec = wsgi_req->start_of_request / 1000000;
if (!wsgi_req->do_not_add_to_async_queue) {
if (event_queue_add_fd_read(uwsgi.async_queue, wsgi_req->fd) < 0)
return -1;
async_add_timeout(wsgi_req, uwsgi.socket_timeout);
uwsgi.async_proto_fd_table[wsgi_req->fd] = wsgi_req;
}
// enter harakiri mode
if (uwsgi.harakiri_options.workers > 0) {
set_harakiri(wsgi_req, uwsgi.harakiri_options.workers);
}
return 0;
}
|
Safe
|
[
"CWE-119",
"CWE-703",
"CWE-787"
] |
uwsgi
|
cb4636f7c0af2e97a4eef7a3cdcbd85a71247bfe
|
1.9131650356818584e+38
| 22 |
improve uwsgi_expand_path() to sanitize input, avoiding stack corruption and a potential security issue
| 0 |
ar6000_roam_data_event(struct ar6_softc *ar, WMI_TARGET_ROAM_DATA *p)
{
switch (p->roamDataType) {
case ROAM_DATA_TIME:
ar6000_display_roam_time(&p->u.roamTime);
break;
default:
break;
}
}
|
Safe
|
[
"CWE-703",
"CWE-264"
] |
linux
|
550fd08c2cebad61c548def135f67aba284c6162
|
7.554054097305394e+37
| 10 |
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption, of
course, and need to be fixed up. This patch identifies those drivers and marks
them as not being able to support the safe transmission of skbs by clearing the
IFF_TX_SKB_SHARING flag in priv_flags.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Karsten Keil <isdn@linux-pingi.de>
CC: "David S. Miller" <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Patrick McHardy <kaber@trash.net>
CC: Krzysztof Halasa <khc@pm.waw.pl>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: Marcel Holtmann <marcel@holtmann.org>
CC: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
static bool check_read_only(sys_var *self, THD *thd, set_var *var)
{
/* Prevent self dead-lock */
if (thd->locked_tables_mode || thd->in_active_multi_stmt_transaction())
{
my_error(ER_LOCK_OR_ACTIVE_TRANSACTION, MYF(0));
return true;
}
return false;
}
|
Safe
|
[
"CWE-264"
] |
mysql-server
|
48bd8b16fe382be302c6f0b45931be5aa6f29a0e
|
3.0646809085446593e+38
| 10 |
Bug#24388753: PRIVILEGE ESCALATION USING MYSQLD_SAFE
[This is the 5.5/5.6 version of the bugfix].
The problem was that it was possible to write log files ending
in .ini/.cnf that later could be parsed as an options file.
This made it possible for users to specify startup options
without the permissions to do so.
This patch fixes the problem by disallowing general query log
and slow query log to be written to files ending in .ini and .cnf.
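The kind of check the fix adds can be sketched in a few lines of C (a minimal illustration; the real server code differs): reject log file names ending in `.ini` or `.cnf`, case-insensitively, so a log can never later be parsed as an options file.

```c
#include <string.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/* True if name ends with sfx, ignoring case. */
static int has_suffix(const char *name, const char *sfx)
{
    size_t n = strlen(name), s = strlen(sfx);
    return n >= s && strcasecmp(name + n - s, sfx) == 0;
}

/* Refuse log destinations that could be re-read as option files. */
static int log_name_allowed(const char *name)
{
    return !has_suffix(name, ".ini") && !has_suffix(name, ".cnf");
}
```

Note the case-insensitive comparison: without it, `evil.CNF` would slip past the check on case-insensitive filesystems.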
| 0 |
on_ps(PG_FUNCTION_ARGS)
{
Point *pt = PG_GETARG_POINT_P(0);
LSEG *lseg = PG_GETARG_LSEG_P(1);
PG_RETURN_BOOL(on_ps_internal(pt, lseg));
}
|
Safe
|
[
"CWE-703",
"CWE-189"
] |
postgres
|
31400a673325147e1205326008e32135a78b4d8a
|
1.7959785922265012e+38
| 7 |
Predict integer overflow to avoid buffer overruns.
Several functions, mostly type input functions, calculated an allocation
size such that the calculation wrapped to a small positive value when
arguments implied a sufficiently-large requirement. Writes past the end
of the inadvertent small allocation followed shortly thereafter.
Coverity identified the path_in() vulnerability; code inspection led to
the rest. In passing, add check_stack_depth() to prevent stack overflow
in related functions.
Back-patch to 8.4 (all supported versions). The non-comment hstore
changes touch code that did not exist in 8.4, so that part stops at 9.0.
Noah Misch and Heikki Linnakangas, reviewed by Tom Lane.
Security: CVE-2014-0064
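The defensive pattern the commit describes, validating `count * elem_size` before allocating, can be sketched as follows (not the actual PostgreSQL code; `palloc`-style error handling is replaced by a NULL return):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked array allocation: a huge count can no longer wrap
 * to a small request followed by out-of-bounds writes. */
static void *alloc_array(size_t count, size_t elem_size)
{
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;                    /* product would overflow: refuse */
    return malloc(count * elem_size);
}
```

The division-based check is the standard idiom because the multiplication itself cannot be tested after the fact: once it wraps, the small result looks perfectly valid.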
| 0 |
setup_called_state_call(Node* node, int state)
{
switch (NODE_TYPE(node)) {
case NODE_ALT:
state |= IN_ALT;
/* fall */
case NODE_LIST:
do {
setup_called_state_call(NODE_CAR(node), state);
} while (IS_NOT_NULL(node = NODE_CDR(node)));
break;
case NODE_QUANT:
{
QuantNode* qn = QUANT_(node);
if (IS_REPEAT_INFINITE(qn->upper) || qn->upper >= 2)
state |= IN_REAL_REPEAT;
if (qn->lower != qn->upper)
state |= IN_VAR_REPEAT;
setup_called_state_call(NODE_QUANT_BODY(qn), state);
}
break;
case NODE_ANCHOR:
{
AnchorNode* an = ANCHOR_(node);
switch (an->type) {
case ANCR_PREC_READ_NOT:
case ANCR_LOOK_BEHIND_NOT:
state |= IN_NOT;
/* fall */
case ANCR_PREC_READ:
case ANCR_LOOK_BEHIND:
setup_called_state_call(NODE_ANCHOR_BODY(an), state);
break;
default:
break;
}
}
break;
case NODE_BAG:
{
BagNode* en = BAG_(node);
if (en->type == BAG_MEMORY) {
if (NODE_IS_MARK1(node)) {
if ((~en->m.called_state & state) != 0) {
en->m.called_state |= state;
setup_called_state_call(NODE_BODY(node), state);
}
}
else {
NODE_STATUS_ADD(node, MARK1);
en->m.called_state |= state;
setup_called_state_call(NODE_BODY(node), state);
NODE_STATUS_REMOVE(node, MARK1);
}
}
else if (en->type == BAG_IF_ELSE) {
if (IS_NOT_NULL(en->te.Then)) {
setup_called_state_call(en->te.Then, state);
}
if (IS_NOT_NULL(en->te.Else))
setup_called_state_call(en->te.Else, state);
}
else {
setup_called_state_call(NODE_BODY(node), state);
}
}
break;
case NODE_CALL:
setup_called_state_call(NODE_BODY(node), state);
break;
default:
break;
}
}
|
Safe
|
[
"CWE-476",
"CWE-125"
] |
oniguruma
|
c509265c5f6ae7264f7b8a8aae1cfa5fc59d108c
|
2.4529290830661533e+38
| 83 |
Fix CVE-2019-13225: problem in converting if-then-else pattern to bytecode.
| 0 |
server_set_opcode(struct xrdp_mod* mod, int opcode)
{
struct xrdp_painter* p;
p = (struct xrdp_painter*)(mod->painter);
if (p == 0)
{
return 0;
}
p->rop = opcode;
return 0;
}
|
Safe
|
[] |
xrdp
|
d8f9e8310dac362bb9578763d1024178f94f4ecc
|
8.603497634031898e+37
| 12 |
move temp files from /tmp to /tmp/.xrdp
| 0 |
job_print_summary(struct cmdq_item *item, int blank)
{
struct job *job;
u_int n = 0;
LIST_FOREACH(job, &all_jobs, entry) {
if (blank) {
cmdq_print(item, "%s", "");
blank = 0;
}
cmdq_print(item, "Job %u: %s [fd=%d, pid=%ld, status=%d]",
n, job->cmd, job->fd, (long)job->pid, job->status);
n++;
}
}
|
Safe
|
[] |
src
|
b32e1d34e10a0da806823f57f02a4ae6e93d756e
|
1.406129564228873e+38
| 15 |
evbuffer_new and bufferevent_new can both fail (when malloc fails) and
return NULL. GitHub issue 1547.
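The pattern this note asks for, treating allocating constructors as fallible, looks like the following sketch (stub types and a test hook, not the real libevent API):

```c
#include <stdlib.h>

struct evbuf { int unused; };

static int fail_alloc;                 /* test hook simulating ENOMEM */

/* Stand-in for evbuffer_new(): returns NULL when allocation fails. */
static struct evbuf *evbuf_new(void)
{
    return fail_alloc ? NULL : calloc(1, sizeof(struct evbuf));
}

/* Propagate the failure instead of dereferencing NULL. */
static int start_job(struct evbuf **out)
{
    struct evbuf *b = evbuf_new();
    if (b == NULL)
        return -1;                     /* allocation failed: bail out */
    *out = b;
    return 0;
}
```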
| 0 |
static int parse_attribute_from_arg(Item *item) {
static const struct {
char character;
unsigned value;
} attributes[] = {
{ 'A', FS_NOATIME_FL }, /* do not update atime */
{ 'S', FS_SYNC_FL }, /* Synchronous updates */
{ 'D', FS_DIRSYNC_FL }, /* dirsync behaviour (directories only) */
{ 'a', FS_APPEND_FL }, /* writes to file may only append */
{ 'c', FS_COMPR_FL }, /* Compress file */
{ 'd', FS_NODUMP_FL }, /* do not dump file */
{ 'e', FS_EXTENT_FL }, /* Extents */
{ 'i', FS_IMMUTABLE_FL }, /* Immutable file */
{ 'j', FS_JOURNAL_DATA_FL }, /* Reserved for ext3 */
{ 's', FS_SECRM_FL }, /* Secure deletion */
{ 'u', FS_UNRM_FL }, /* Undelete */
{ 't', FS_NOTAIL_FL }, /* file tail should not be merged */
{ 'T', FS_TOPDIR_FL }, /* Top of directory hierarchies */
{ 'C', FS_NOCOW_FL }, /* Do not cow file */
};
enum {
MODE_ADD,
MODE_DEL,
MODE_SET
} mode = MODE_ADD;
unsigned value = 0, mask = 0;
const char *p;
assert(item);
p = item->argument;
if (p) {
if (*p == '+') {
mode = MODE_ADD;
p++;
} else if (*p == '-') {
mode = MODE_DEL;
p++;
} else if (*p == '=') {
mode = MODE_SET;
p++;
}
}
if (isempty(p) && mode != MODE_SET) {
log_error("Setting file attribute on '%s' needs an attribute specification.", item->path);
return -EINVAL;
}
for (; p && *p ; p++) {
unsigned i, v;
for (i = 0; i < ELEMENTSOF(attributes); i++)
if (*p == attributes[i].character)
break;
if (i >= ELEMENTSOF(attributes)) {
log_error("Unknown file attribute '%c' on '%s'.", *p, item->path);
return -EINVAL;
}
v = attributes[i].value;
SET_FLAG(value, v, IN_SET(mode, MODE_ADD, MODE_SET));
mask |= v;
}
if (mode == MODE_SET)
mask |= ATTRIBUTES_ALL;
assert(mask != 0);
item->attribute_mask = mask;
item->attribute_value = value;
item->attribute_set = true;
return 0;
}
|
Safe
|
[
"CWE-59"
] |
systemd
|
5579f85663d10269e7ac7464be6548c99cea4ada
|
2.9988516156225865e+38
| 82 |
tmpfiles: refuse to chown()/chmod() files which are hardlinked, unless protected_hardlinks sysctl is on
Let's add some extra safety.
Fixes: #7736
| 0 |
Image::AutoPtr ImageFactory::open(const std::wstring& wpath, bool useCurl)
{
Image::AutoPtr image = open(ImageFactory::createIo(wpath, useCurl)); // may throw
if (image.get() == 0) throw WError(kerFileContainsUnknownImageType, wpath);
return image;
}
|
Safe
|
[
"CWE-835"
] |
exiv2
|
ae49250942f4395639961abeed3c15920fcd7241
|
1.3951918438626922e+38
| 6 |
Check in Image::printIFDStructure if seek and reads are OK
| 0 |
static void bus_reset_queues(sd_bus *b) {
assert(b);
while (b->rqueue_size > 0)
bus_message_unref_queued(b->rqueue[--b->rqueue_size], b);
b->rqueue = mfree(b->rqueue);
b->rqueue_allocated = 0;
while (b->wqueue_size > 0)
bus_message_unref_queued(b->wqueue[--b->wqueue_size], b);
b->wqueue = mfree(b->wqueue);
b->wqueue_allocated = 0;
}
|
Safe
|
[
"CWE-416"
] |
systemd
|
1068447e6954dc6ce52f099ed174c442cb89ed54
|
2.8366388767442963e+38
| 15 |
sd-bus: introduce API for re-enqueuing incoming messages
When authorizing via PolicyKit we want to process incoming method calls
twice: once to process and figure out that we need PK authentication,
and a second time after we acquired PK authentication to actually execute
the operation. With this new call sd_bus_enqueue_for_read() we have a
way to put an incoming message back into the read queue for this
purpose.
This might have other uses too, for example debugging.
| 0 |
static inline int fd_copyout(void __user *param, const void *address,
unsigned long size)
{
return copy_to_user(param, address, size) ? -EFAULT : 0;
}
|
Safe
|
[
"CWE-264",
"CWE-754"
] |
linux
|
ef87dbe7614341c2e7bfe8d32fcb7028cc97442c
|
2.477998803267949e+37
| 5 |
floppy: ignore kernel-only members in FDRAWCMD ioctl input
Always clear out these floppy_raw_cmd struct members after copying the
entire structure from userspace so that the in-kernel version is always
valid and never left in an indeterminate state.
Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
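The "clear kernel-only members after copying the whole struct" technique can be modeled in userspace like this (field names are invented; `memcpy` stands in for copy_from_user()):

```c
#include <string.h>

struct raw_cmd {
    int flags;            /* user-controlled, legitimately */
    void *kernel_ptr;     /* kernel-only: must never come from userspace */
    long kernel_cookie;   /* kernel-only */
};

/* Copy the entire struct from an untrusted source, then unconditionally
 * overwrite the kernel-only members so they can never carry
 * attacker-chosen values. */
static void copy_cmd_in(struct raw_cmd *dst, const struct raw_cmd *src)
{
    memcpy(dst, src, sizeof(*dst));   /* copy_from_user() stand-in */
    dst->kernel_ptr = NULL;           /* always reset, never trust */
    dst->kernel_cookie = 0;
}
```

Resetting unconditionally (rather than validating) keeps the invariant simple: the in-kernel copy is valid no matter what userspace supplied.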
| 0 |
double val_real() { return ulonglong2double((ulonglong)value); }
|
Safe
|
[
"CWE-617"
] |
server
|
807945f2eb5fa22e6f233cc17b85a2e141efe2c8
|
1.3481212527213063e+38
| 1 |
MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order...
When doing condition pushdown from HAVING into WHERE,
Item_equal::create_pushable_equalities() calls
item->set_extraction_flag(IMMUTABLE_FL) for constant items.
Then, Item::cleanup_excluding_immutables_processor() checks for this flag
to see if it should call item->cleanup() or leave the item as-is.
The failure happens when a constant item has a non-constant one inside it,
like:
(tbl.col=0 AND impossible_cond)
item->walk(cleanup_excluding_immutables_processor) works in a bottom-up
way so it
1. will call Item_func_eq(tbl.col=0)->cleanup()
2. will not call Item_cond_and->cleanup (as the AND is constant)
This creates an item tree where a fixed Item has an un-fixed Item inside
it which eventually causes an assertion failure.
Fixed by introducing this rule: instead of just calling
item->set_extraction_flag(IMMUTABLE_FL);
we call Item::walk() to set the flag for all sub-items of the item.
| 0 |
static void register_localhost(AvahiServer *s) {
AvahiAddress a;
assert(s);
/* Add localhost entries */
avahi_address_parse("127.0.0.1", AVAHI_PROTO_INET, &a);
avahi_server_add_address(s, NULL, AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC, AVAHI_PUBLISH_NO_PROBE|AVAHI_PUBLISH_NO_ANNOUNCE, "localhost", &a);
avahi_address_parse("::1", AVAHI_PROTO_INET6, &a);
avahi_server_add_address(s, NULL, AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC, AVAHI_PUBLISH_NO_PROBE|AVAHI_PUBLISH_NO_ANNOUNCE, "ip6-localhost", &a);
}
|
Safe
|
[
"CWE-399"
] |
avahi
|
3093047f1aa36bed8a37fa79004bf0ee287929f4
|
9.764043847252964e+37
| 11 |
Don't get confused by UDP packets with a source port that is zero
This is a fix for rhbz 475394.
Problem identified by Hugo Dias.
| 0 |
lzw_read_byte (GifContext *context)
{
int code, incode;
gint retval;
gint my_retval;
register int i;
if (context->lzw_code_pending != -1) {
retval = context->lzw_code_pending;
context->lzw_code_pending = -1;
return retval;
}
if (context->lzw_fresh) {
context->lzw_fresh = FALSE;
do {
retval = get_code (context, context->lzw_code_size);
if (retval < 0) {
return retval;
}
context->lzw_firstcode = context->lzw_oldcode = retval;
} while (context->lzw_firstcode == context->lzw_clear_code);
return context->lzw_firstcode;
}
if (context->lzw_sp > context->lzw_stack) {
my_retval = *--(context->lzw_sp);
return my_retval;
}
while ((code = get_code (context, context->lzw_code_size)) >= 0) {
if (code == context->lzw_clear_code) {
for (i = 0; i < context->lzw_clear_code; ++i) {
context->lzw_table[0][i] = 0;
context->lzw_table[1][i] = i;
}
for (; i < (1 << MAX_LZW_BITS); ++i)
context->lzw_table[0][i] = context->lzw_table[1][i] = 0;
context->lzw_code_size = context->lzw_set_code_size + 1;
context->lzw_max_code_size = 2 * context->lzw_clear_code;
context->lzw_max_code = context->lzw_clear_code + 2;
context->lzw_sp = context->lzw_stack;
set_gif_lzw_clear_code (context);
return -3;
} else if (code == context->lzw_end_code) {
int count;
unsigned char buf[260];
/* FIXME - we should handle this case */
g_set_error_literal (context->error,
GDK_PIXBUF_ERROR,
GDK_PIXBUF_ERROR_FAILED,
_("GIF image loader cannot understand this image."));
return -2;
if (ZeroDataBlock) {
return -2;
}
while ((count = GetDataBlock (context, buf)) > 0)
;
if (count != 0) {
/*g_print (_("GIF: missing EOD in data stream (common occurrence)"));*/
return -2;
}
}
incode = code;
if (code >= context->lzw_max_code) {
CHECK_LZW_SP ();
*(context->lzw_sp)++ = context->lzw_firstcode;
code = context->lzw_oldcode;
}
while (code >= context->lzw_clear_code) {
if (code >= (1 << MAX_LZW_BITS)) {
g_set_error_literal (context->error,
GDK_PIXBUF_ERROR,
GDK_PIXBUF_ERROR_CORRUPT_IMAGE,
_("Bad code encountered"));
return -2;
}
CHECK_LZW_SP ();
*(context->lzw_sp)++ = context->lzw_table[1][code];
if (code == context->lzw_table[0][code]) {
g_set_error_literal (context->error,
GDK_PIXBUF_ERROR,
GDK_PIXBUF_ERROR_CORRUPT_IMAGE,
_("Circular table entry in GIF file"));
return -2;
}
code = context->lzw_table[0][code];
}
CHECK_LZW_SP ();
*(context->lzw_sp)++ = context->lzw_firstcode = context->lzw_table[1][code];
if ((code = context->lzw_max_code) < (1 << MAX_LZW_BITS)) {
context->lzw_table[0][code] = context->lzw_oldcode;
context->lzw_table[1][code] = context->lzw_firstcode;
++context->lzw_max_code;
if ((context->lzw_max_code >= context->lzw_max_code_size) &&
(context->lzw_max_code_size < (1 << MAX_LZW_BITS))) {
context->lzw_max_code_size *= 2;
++context->lzw_code_size;
}
}
context->lzw_oldcode = incode;
if (context->lzw_sp > context->lzw_stack) {
my_retval = *--(context->lzw_sp);
return my_retval;
}
}
return code;
}
|
Safe
|
[] |
gdk-pixbuf
|
f8569bb13e2aa1584dde61ca545144750f7a7c98
|
2.650066948186545e+38
| 122 |
GIF: Don't return a partially initialized pixbuf structure
It was found that the gdk-pixbuf GIF image loader routine
gdk_pixbuf__gif_image_load() did not properly handle certain return values
from its subroutines. A remote attacker could provide a specially crafted
GIF image which, once opened in an application linked against gdk-pixbuf,
would lead gdk-pixbuf to return a partially initialized pixbuf structure,
possibly having huge width and height, causing that application to
terminate due to excessive memory use.
The CVE identifier of CVE-2011-2485 has been assigned to this issue.
| 0 |
static int hwsim_fops_ps_read(void *dat, u64 *val)
{
struct mac80211_hwsim_data *data = dat;
*val = data->ps;
return 0;
}
|
Safe
|
[
"CWE-703",
"CWE-772"
] |
linux
|
0ddcff49b672239dda94d70d0fcf50317a9f4b51
|
1.2407275899022345e+38
| 6 |
mac80211_hwsim: fix possible memory leak in hwsim_new_radio_nl()
'hwname' is malloced in hwsim_new_radio_nl() and should be freed
before leaving from the error handling cases, otherwise it will cause
memory leak.
Fixes: ff4dd73dd2b4 ("mac80211_hwsim: check HWSIM_ATTR_RADIO_NAME length")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| 0 |
static uint32_t pmac_ide_readb (void *opaque,hwaddr addr)
{
uint8_t retval;
MACIOIDEState *d = opaque;
addr = (addr & 0xFFF) >> 4;
switch (addr) {
case 1 ... 7:
retval = ide_ioport_read(&d->bus, addr);
break;
case 8:
case 22:
retval = ide_status_read(&d->bus, 0);
break;
default:
retval = 0xFF;
break;
}
return retval;
}
|
Safe
|
[
"CWE-399"
] |
qemu
|
3251bdcf1c67427d964517053c3d185b46e618e8
|
2.1891837640448478e+38
| 20 |
ide: Correct handling of malformed/short PRDTs
This impacts both BMDMA and AHCI HBA interfaces for IDE.
Currently, we confuse the difference between a PRDT having
"0 bytes" and a PRDT having "0 complete sectors."
When we receive an incomplete sector, inconsistent error checking
leads to an infinite loop wherein the call succeeds, but it
didn't give us enough bytes -- leading us to re-call the
DMA chain over and over again. This leads to, in the BMDMA case,
leaked memory for short PRDTs, and infinite loops and resource
usage in the AHCI case.
The .prepare_buf() callback is reworked to return the number of
bytes that it successfully prepared. 0 is a valid, non-error
answer that means the table was empty and described no bytes.
-1 indicates an error.
Our current implementation uses the io_buffer in IDEState to
ultimately describe the size of a prepared scatter-gather list.
Even though the AHCI PRDT/SGList can be as large as 256GiB, the
AHCI command header limits transactions to just 4GiB. ATA8-ACS3,
however, defines the largest transaction to be an LBA48 command
that transfers 65,536 sectors. With a 512 byte sector size, this
is just 32MiB.
Since our current state structures use the int type to describe
the size of the buffer, and this state is migrated as int32, we
are limited to describing 2GiB buffer sizes unless we change the
migration protocol.
For this reason, this patch begins to unify the assertions in the
IDE pathways that the scatter-gather list provided by either the
AHCI PRDT or the PCI BMDMA PRDs can only describe, at a maximum,
2GiB. This should be resilient enough unless we need a sector
size that exceeds 32KiB.
Further, the likelihood of any guest operating system actually
attempting to transfer this much data in a single operation is
very slim.
To this end, the IDEState variables have been updated to more
explicitly clarify our maximum supported size. Callers to the
prepare_buf callback have been reworked to understand the new
return code, and all versions of the prepare_buf callback have
been adjusted accordingly.
Lastly, the ahci_populate_sglist helper, relied upon by the
AHCI implementation of .prepare_buf() as well as the PCI
implementation of the callback have had overflow assertions
added to help make clear the reasonings behind the various
type changes.
[Added %d -> %"PRId64" fix John sent because off_pos changed from int to
int64_t.
--Stefan]
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1414785819-26209-4-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| 0 |
CURLUcode curl_url_get(CURLU *u, CURLUPart what,
char **part, unsigned int flags)
{
char *ptr;
CURLUcode ifmissing = CURLUE_UNKNOWN_PART;
char portbuf[7];
bool urldecode = (flags & CURLU_URLDECODE)?1:0;
bool urlencode = (flags & CURLU_URLENCODE)?1:0;
bool plusdecode = FALSE;
(void)flags;
if(!u)
return CURLUE_BAD_HANDLE;
if(!part)
return CURLUE_BAD_PARTPOINTER;
*part = NULL;
switch(what) {
case CURLUPART_SCHEME:
ptr = u->scheme;
ifmissing = CURLUE_NO_SCHEME;
urldecode = FALSE; /* never for schemes */
break;
case CURLUPART_USER:
ptr = u->user;
ifmissing = CURLUE_NO_USER;
break;
case CURLUPART_PASSWORD:
ptr = u->password;
ifmissing = CURLUE_NO_PASSWORD;
break;
case CURLUPART_OPTIONS:
ptr = u->options;
ifmissing = CURLUE_NO_OPTIONS;
break;
case CURLUPART_HOST:
ptr = u->host;
ifmissing = CURLUE_NO_HOST;
break;
case CURLUPART_ZONEID:
ptr = u->zoneid;
ifmissing = CURLUE_NO_ZONEID;
break;
case CURLUPART_PORT:
ptr = u->port;
ifmissing = CURLUE_NO_PORT;
urldecode = FALSE; /* never for port */
if(!ptr && (flags & CURLU_DEFAULT_PORT) && u->scheme) {
/* there's no stored port number, but asked to deliver
a default one for the scheme */
const struct Curl_handler *h =
Curl_builtin_scheme(u->scheme);
if(h) {
msnprintf(portbuf, sizeof(portbuf), "%u", h->defport);
ptr = portbuf;
}
}
else if(ptr && u->scheme) {
/* there is a stored port number, but ask to inhibit if
it matches the default one for the scheme */
const struct Curl_handler *h =
Curl_builtin_scheme(u->scheme);
if(h && (h->defport == u->portnum) &&
(flags & CURLU_NO_DEFAULT_PORT))
ptr = NULL;
}
break;
case CURLUPART_PATH:
ptr = u->path;
if(!ptr) {
ptr = u->path = strdup("/");
if(!u->path)
return CURLUE_OUT_OF_MEMORY;
}
break;
case CURLUPART_QUERY:
ptr = u->query;
ifmissing = CURLUE_NO_QUERY;
plusdecode = urldecode;
break;
case CURLUPART_FRAGMENT:
ptr = u->fragment;
ifmissing = CURLUE_NO_FRAGMENT;
break;
case CURLUPART_URL: {
char *url;
char *scheme;
char *options = u->options;
char *port = u->port;
char *allochost = NULL;
if(u->scheme && strcasecompare("file", u->scheme)) {
url = aprintf("file://%s%s%s",
u->path,
u->fragment? "#": "",
u->fragment? u->fragment : "");
}
else if(!u->host)
return CURLUE_NO_HOST;
else {
const struct Curl_handler *h = NULL;
if(u->scheme)
scheme = u->scheme;
else if(flags & CURLU_DEFAULT_SCHEME)
scheme = (char *) DEFAULT_SCHEME;
else
return CURLUE_NO_SCHEME;
h = Curl_builtin_scheme(scheme);
if(!port && (flags & CURLU_DEFAULT_PORT)) {
/* there's no stored port number, but asked to deliver
a default one for the scheme */
if(h) {
msnprintf(portbuf, sizeof(portbuf), "%u", h->defport);
port = portbuf;
}
}
else if(port) {
/* there is a stored port number, but asked to inhibit if it matches
the default one for the scheme */
if(h && (h->defport == u->portnum) &&
(flags & CURLU_NO_DEFAULT_PORT))
port = NULL;
}
if(h && !(h->flags & PROTOPT_URLOPTIONS))
options = NULL;
if(u->host[0] == '[') {
if(u->zoneid) {
/* make it '[ host %25 zoneid ]' */
size_t hostlen = strlen(u->host);
size_t alen = hostlen + 3 + strlen(u->zoneid) + 1;
allochost = malloc(alen);
if(!allochost)
return CURLUE_OUT_OF_MEMORY;
memcpy(allochost, u->host, hostlen - 1);
msnprintf(&allochost[hostlen - 1], alen - hostlen + 1,
"%%25%s]", u->zoneid);
}
}
else if(urlencode) {
allochost = curl_easy_escape(NULL, u->host, 0);
if(!allochost)
return CURLUE_OUT_OF_MEMORY;
}
else {
/* only encode '%' in output host name */
char *host = u->host;
size_t pcount = 0;
/* first, count number of percents present in the name */
while(*host) {
if(*host == '%')
pcount++;
host++;
}
/* if there were percents, encode the host name */
if(pcount) {
size_t hostlen = strlen(u->host);
size_t alen = hostlen + 2 * pcount + 1;
char *o = allochost = malloc(alen);
if(!allochost)
return CURLUE_OUT_OF_MEMORY;
host = u->host;
while(*host) {
if(*host == '%') {
memcpy(o, "%25", 3);
o += 3;
host++;
continue;
}
*o++ = *host++;
}
*o = '\0';
}
}
url = aprintf("%s://%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s",
scheme,
u->user ? u->user : "",
u->password ? ":": "",
u->password ? u->password : "",
options ? ";" : "",
options ? options : "",
(u->user || u->password || options) ? "@": "",
allochost ? allochost : u->host,
port ? ":": "",
port ? port : "",
(u->path && (u->path[0] != '/')) ? "/": "",
u->path ? u->path : "/",
(u->query && u->query[0]) ? "?": "",
(u->query && u->query[0]) ? u->query : "",
u->fragment? "#": "",
u->fragment? u->fragment : "");
free(allochost);
}
if(!url)
return CURLUE_OUT_OF_MEMORY;
*part = url;
return CURLUE_OK;
}
default:
ptr = NULL;
break;
}
if(ptr) {
*part = strdup(ptr);
if(!*part)
return CURLUE_OUT_OF_MEMORY;
if(plusdecode) {
/* convert + to space */
char *plus;
for(plus = *part; *plus; ++plus) {
if(*plus == '+')
*plus = ' ';
}
}
if(urldecode) {
char *decoded;
size_t dlen;
/* this unconditional rejection of control bytes is documented
API behavior */
CURLcode res = Curl_urldecode(*part, 0, &decoded, &dlen, REJECT_CTRL);
free(*part);
if(res) {
*part = NULL;
return CURLUE_URLDECODE;
}
*part = decoded;
}
return CURLUE_OK;
}
else
return ifmissing;
}
|
Safe
|
[] |
curl
|
914aaab9153764ef8fa4178215b8ad89d3ac263a
|
2.0153706178214218e+38
| 234 |
urlapi: reject percent-decoding host name into separator bytes
CVE-2022-27780
Reported-by: Axel Chong
Bug: https://curl.se/docs/CVE-2022-27780.html
Closes #8826
| 0 |
static int hub_port_wait_reset(struct usb_hub *hub, int port1,
struct usb_device *udev, unsigned int delay, bool warm)
{
int delay_time, ret;
u16 portstatus;
u16 portchange;
for (delay_time = 0;
delay_time < HUB_RESET_TIMEOUT;
delay_time += delay) {
/* wait to give the device a chance to reset */
msleep(delay);
/* read and decode port status */
ret = hub_port_status(hub, port1, &portstatus, &portchange);
if (ret < 0)
return ret;
/* The port state is unknown until the reset completes. */
if (!(portstatus & USB_PORT_STAT_RESET))
break;
/* switch to the long delay after two short delay failures */
if (delay_time >= 2 * HUB_SHORT_RESET_TIME)
delay = HUB_LONG_RESET_TIME;
dev_dbg(&hub->ports[port1 - 1]->dev,
"not %sreset yet, waiting %dms\n",
warm ? "warm " : "", delay);
}
if ((portstatus & USB_PORT_STAT_RESET))
return -EBUSY;
if (hub_port_warm_reset_required(hub, port1, portstatus))
return -ENOTCONN;
/* Device went away? */
if (!(portstatus & USB_PORT_STAT_CONNECTION))
return -ENOTCONN;
/* bomb out completely if the connection bounced. A USB 3.0
* connection may bounce if multiple warm resets were issued,
* but the device may have successfully re-connected. Ignore it.
*/
if (!hub_is_superspeed(hub->hdev) &&
(portchange & USB_PORT_STAT_C_CONNECTION))
return -ENOTCONN;
if (!(portstatus & USB_PORT_STAT_ENABLE))
return -EBUSY;
if (!udev)
return 0;
if (hub_is_wusb(hub))
udev->speed = USB_SPEED_WIRELESS;
else if (hub_is_superspeed(hub->hdev))
udev->speed = USB_SPEED_SUPER;
else if (portstatus & USB_PORT_STAT_HIGH_SPEED)
udev->speed = USB_SPEED_HIGH;
else if (portstatus & USB_PORT_STAT_LOW_SPEED)
udev->speed = USB_SPEED_LOW;
else
udev->speed = USB_SPEED_FULL;
return 0;
}
|
Safe
|
[
"CWE-703"
] |
linux
|
e50293ef9775c5f1cf3fcc093037dd6a8c5684ea
|
1.1455763273039137e+38
| 67 |
USB: fix invalid memory access in hub_activate()
Commit 8520f38099cc ("USB: change hub initialization sleeps to
delayed_work") changed the hub_activate() routine to make part of it
run in a workqueue. However, the commit failed to take a reference to
the usb_hub structure or to lock the hub interface while doing so. As
a result, if a hub is plugged in and quickly unplugged before the work
routine can run, the routine will try to access memory that has been
deallocated. Or, if the hub is unplugged while the routine is
running, the memory may be deallocated while it is in active use.
This patch fixes the problem by taking a reference to the usb_hub at
the start of hub_activate() and releasing it at the end (when the work
is finished), and by locking the hub interface while the work routine
is running. It also adds a check at the start of the routine to see
if the hub has already been disconnected, in which case nothing should be
done.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Alexandru Cornea <alexandru.cornea@intel.com>
Tested-by: Alexandru Cornea <alexandru.cornea@intel.com>
Fixes: 8520f38099cc ("USB: change hub initialization sleeps to delayed_work")
CC: <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
TEST(StringRefTest, ConvertToString) {
std::string s = StringRef("abc").to_string();
EXPECT_EQ("abc", s);
#if FMT_HAS_STRING_VIEW
StringRef str_ref("defg");
std::string_view sv = static_cast<std::string_view>(str_ref);
EXPECT_EQ("defg", sv);
#endif
#if FMT_HAS_EXPERIMENTAL_STRING_VIEW
StringRef str_ref("defg");
std::experimental::string_view sv = static_cast<std::experimental::string_view>(str_ref);
EXPECT_EQ("defg", sv);
#endif
}
|
Safe
|
[
"CWE-134",
"CWE-119",
"CWE-787"
] |
fmt
|
8cf30aa2be256eba07bb1cefb998c52326e846e7
|
1.4675458283564243e+38
| 16 |
Fix segfault on complex pointer formatting (#642)
| 0 |
static int bond_ipsec_add_sa(struct xfrm_state *xs)
{
struct net_device *bond_dev = xs->xso.dev;
struct bonding *bond;
struct slave *slave;
int err;
if (!bond_dev)
return -EINVAL;
rcu_read_lock();
bond = netdev_priv(bond_dev);
slave = rcu_dereference(bond->curr_active_slave);
xs->xso.real_dev = slave->dev;
bond->xs = xs;
if (!(slave->dev->xfrmdev_ops
&& slave->dev->xfrmdev_ops->xdo_dev_state_add)) {
slave_warn(bond_dev, slave->dev, "Slave does not support ipsec offload\n");
rcu_read_unlock();
return -EINVAL;
}
err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
rcu_read_unlock();
return err;
}
|
Vulnerable
|
[
"CWE-476",
"CWE-703"
] |
linux
|
105cd17a866017b45f3c45901b394c711c97bf40
|
1.4357600584727845e+38
| 27 |
bonding: fix null dereference in bond_ipsec_add_sa()
If bond doesn't have real device, bond->curr_active_slave is null.
But bond_ipsec_add_sa() dereferences bond->curr_active_slave without
null checking.
So, null-ptr-deref would occur.
Test commands:
ip link add bond0 type bond
ip link set bond0 up
ip x s add proto esp dst 14.1.1.1 src 15.1.1.1 spi \
0x07 mode transport reqid 0x07 replay-window 32 aead 'rfc4106(gcm(aes))' \
0x44434241343332312423222114131211f4f3f2f1 128 sel src 14.0.0.52/24 \
dst 14.0.0.70/24 proto tcp offload dev bond0 dir in
Splat looks like:
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 4 PID: 680 Comm: ip Not tainted 5.13.0-rc3+ #1168
RIP: 0010:bond_ipsec_add_sa+0xc4/0x2e0 [bonding]
Code: 85 21 02 00 00 4d 8b a6 48 0c 00 00 e8 75 58 44 ce 85 c0 0f 85 14
01 00 00 48 b8 00 00 00 00 00 fc ff df 4c 89 e2 48 c1 ea 03 <80> 3c 02
00 0f 85 fc 01 00 00 48 8d bb e0 02 00 00 4d 8b 2c 24 48
RSP: 0018:ffff88810946f508 EFLAGS: 00010246
RAX: dffffc0000000000 RBX: ffff88810b4e8040 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8fe34280 RDI: ffff888115abe100
RBP: ffff88810946f528 R08: 0000000000000003 R09: fffffbfff2287e11
R10: 0000000000000001 R11: ffff888115abe0c8 R12: 0000000000000000
R13: ffffffffc0aea9a0 R14: ffff88800d7d2000 R15: ffff88810b4e8330
FS: 00007efc5552e680(0000) GS:ffff888119c00000(0000)
knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055c2530dbf40 CR3: 0000000103056004 CR4: 00000000003706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
xfrm_dev_state_add+0x2a9/0x770
? memcpy+0x38/0x60
xfrm_add_sa+0x2278/0x3b10 [xfrm_user]
? xfrm_get_policy+0xaa0/0xaa0 [xfrm_user]
? register_lock_class+0x1750/0x1750
xfrm_user_rcv_msg+0x331/0x660 [xfrm_user]
? rcu_read_lock_sched_held+0x91/0xc0
? xfrm_user_state_lookup.constprop.39+0x320/0x320 [xfrm_user]
? find_held_lock+0x3a/0x1c0
? mutex_lock_io_nested+0x1210/0x1210
? sched_clock_cpu+0x18/0x170
netlink_rcv_skb+0x121/0x350
? xfrm_user_state_lookup.constprop.39+0x320/0x320 [xfrm_user]
? netlink_ack+0x9d0/0x9d0
? netlink_deliver_tap+0x17c/0xa50
xfrm_netlink_rcv+0x68/0x80 [xfrm_user]
netlink_unicast+0x41c/0x610
? netlink_attachskb+0x710/0x710
netlink_sendmsg+0x6b9/0xb70
[ ...]
Fixes: 18cb261afd7b ("bonding: support hardware encryption offload to slaves")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 1 |
cls(const std::vector<std::pair<char32_t, char32_t>> &ranges) {
return std::make_shared<CharacterClass>(ranges, false);
}
|
Safe
|
[
"CWE-125"
] |
cpp-peglib
|
b3b29ce8f3acf3a32733d930105a17d7b0ba347e
|
3.337688669722039e+38
| 3 |
Fix #122
| 0 |
static char *JSON_parse_array(JSON_Parser *json, char *p, char *pe, VALUE *result, int current_nesting)
{
int cs = EVIL;
VALUE array_class = json->array_class;
if (json->max_nesting && current_nesting > json->max_nesting) {
rb_raise(eNestingError, "nesting of %d is too deep", current_nesting);
}
*result = NIL_P(array_class) ? rb_ary_new() : rb_class_new_instance(0, 0, array_class);
#line 1172 "parser.c"
{
cs = JSON_array_start;
}
#line 411 "parser.rl"
#line 1179 "parser.c"
{
if ( p == pe )
goto _test_eof;
switch ( cs )
{
case 1:
if ( (*p) == 91 )
goto st2;
goto st0;
st0:
cs = 0;
goto _out;
st2:
if ( ++p == pe )
goto _test_eof2;
case 2:
switch( (*p) ) {
case 13: goto st2;
case 32: goto st2;
case 34: goto tr2;
case 45: goto tr2;
case 47: goto st13;
case 73: goto tr2;
case 78: goto tr2;
case 91: goto tr2;
case 93: goto tr4;
case 102: goto tr2;
case 110: goto tr2;
case 116: goto tr2;
case 123: goto tr2;
}
if ( (*p) > 10 ) {
if ( 48 <= (*p) && (*p) <= 57 )
goto tr2;
} else if ( (*p) >= 9 )
goto st2;
goto st0;
tr2:
#line 375 "parser.rl"
{
VALUE v = Qnil;
char *np = JSON_parse_value(json, p, pe, &v, current_nesting);
if (np == NULL) {
p--; {p++; cs = 3; goto _out;}
} else {
if (NIL_P(json->array_class)) {
rb_ary_push(*result, v);
} else {
rb_funcall(*result, i_leftshift, 1, v);
}
{p = (( np))-1;}
}
}
goto st3;
st3:
if ( ++p == pe )
goto _test_eof3;
case 3:
#line 1238 "parser.c"
switch( (*p) ) {
case 13: goto st3;
case 32: goto st3;
case 44: goto st4;
case 47: goto st9;
case 93: goto tr4;
}
if ( 9 <= (*p) && (*p) <= 10 )
goto st3;
goto st0;
st4:
if ( ++p == pe )
goto _test_eof4;
case 4:
switch( (*p) ) {
case 13: goto st4;
case 32: goto st4;
case 34: goto tr2;
case 45: goto tr2;
case 47: goto st5;
case 73: goto tr2;
case 78: goto tr2;
case 91: goto tr2;
case 102: goto tr2;
case 110: goto tr2;
case 116: goto tr2;
case 123: goto tr2;
}
if ( (*p) > 10 ) {
if ( 48 <= (*p) && (*p) <= 57 )
goto tr2;
} else if ( (*p) >= 9 )
goto st4;
goto st0;
st5:
if ( ++p == pe )
goto _test_eof5;
case 5:
switch( (*p) ) {
case 42: goto st6;
case 47: goto st8;
}
goto st0;
st6:
if ( ++p == pe )
goto _test_eof6;
case 6:
if ( (*p) == 42 )
goto st7;
goto st6;
st7:
if ( ++p == pe )
goto _test_eof7;
case 7:
switch( (*p) ) {
case 42: goto st7;
case 47: goto st4;
}
goto st6;
st8:
if ( ++p == pe )
goto _test_eof8;
case 8:
if ( (*p) == 10 )
goto st4;
goto st8;
st9:
if ( ++p == pe )
goto _test_eof9;
case 9:
switch( (*p) ) {
case 42: goto st10;
case 47: goto st12;
}
goto st0;
st10:
if ( ++p == pe )
goto _test_eof10;
case 10:
if ( (*p) == 42 )
goto st11;
goto st10;
st11:
if ( ++p == pe )
goto _test_eof11;
case 11:
switch( (*p) ) {
case 42: goto st11;
case 47: goto st3;
}
goto st10;
st12:
if ( ++p == pe )
goto _test_eof12;
case 12:
if ( (*p) == 10 )
goto st3;
goto st12;
tr4:
#line 390 "parser.rl"
{ p--; {p++; cs = 17; goto _out;} }
goto st17;
st17:
if ( ++p == pe )
goto _test_eof17;
case 17:
#line 1345 "parser.c"
goto st0;
st13:
if ( ++p == pe )
goto _test_eof13;
case 13:
switch( (*p) ) {
case 42: goto st14;
case 47: goto st16;
}
goto st0;
st14:
if ( ++p == pe )
goto _test_eof14;
case 14:
if ( (*p) == 42 )
goto st15;
goto st14;
st15:
if ( ++p == pe )
goto _test_eof15;
case 15:
switch( (*p) ) {
case 42: goto st15;
case 47: goto st2;
}
goto st14;
st16:
if ( ++p == pe )
goto _test_eof16;
case 16:
if ( (*p) == 10 )
goto st2;
goto st16;
}
_test_eof2: cs = 2; goto _test_eof;
_test_eof3: cs = 3; goto _test_eof;
_test_eof4: cs = 4; goto _test_eof;
_test_eof5: cs = 5; goto _test_eof;
_test_eof6: cs = 6; goto _test_eof;
_test_eof7: cs = 7; goto _test_eof;
_test_eof8: cs = 8; goto _test_eof;
_test_eof9: cs = 9; goto _test_eof;
_test_eof10: cs = 10; goto _test_eof;
_test_eof11: cs = 11; goto _test_eof;
_test_eof12: cs = 12; goto _test_eof;
_test_eof17: cs = 17; goto _test_eof;
_test_eof13: cs = 13; goto _test_eof;
_test_eof14: cs = 14; goto _test_eof;
_test_eof15: cs = 15; goto _test_eof;
_test_eof16: cs = 16; goto _test_eof;
_test_eof: {}
_out: {}
}
#line 412 "parser.rl"
if(cs >= JSON_array_first_final) {
return p + 1;
} else {
rb_enc_raise(EXC_ENCODING eParserError, "%u: unexpected token at '%s'", __LINE__, p);
return NULL;
}
}
|
Safe
|
[
"CWE-20"
] |
ruby
|
36e9ed7fef6eb2d14becf6c52452e4ab16e4bf01
|
2.9519690807324473e+38
| 249 |
backport 80b5a0ff2a7709367178f29d4ebe1c54122b1c27 partially as a security fix for CVE-2020-10663.
The patch was provided by Jeremy Evans.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/branches/ruby_2_6@67856 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
| 0 |
mrb_gc_mark_mt_size(mrb_state *mrb, struct RClass *c)
{
khash_t(mt) *h = c->mt;
if (!h) return 0;
return kh_size(h);
}
|
Safe
|
[
"CWE-476",
"CWE-415"
] |
mruby
|
faa4eaf6803bd11669bc324b4c34e7162286bfa3
|
1.066503645142996e+38
| 7 |
`mrb_class_real()` did not work for `BasicObject`; fix #4037
| 0 |
goa_http_client_check (GoaHttpClient *client,
const gchar *uri,
const gchar *username,
const gchar *password,
GCancellable *cancellable,
GAsyncReadyCallback callback,
gpointer user_data)
{
CheckData *data;
CheckAuthData *auth;
g_return_if_fail (GOA_IS_HTTP_CLIENT (client));
g_return_if_fail (uri != NULL && uri[0] != '\0');
g_return_if_fail (username != NULL && username[0] != '\0');
g_return_if_fail (password != NULL && password[0] != '\0');
g_return_if_fail (cancellable == NULL || G_IS_CANCELLABLE (cancellable));
data = g_slice_new0 (CheckData);
data->res = g_simple_async_result_new (G_OBJECT (client), callback, user_data, goa_http_client_check);
data->session = soup_session_async_new_with_options (SOUP_SESSION_USE_THREAD_CONTEXT, TRUE,
NULL);
data->msg = soup_message_new (SOUP_METHOD_GET, uri);
soup_message_headers_append (data->msg->request_headers, "Connection", "close");
if (cancellable != NULL)
{
data->cancellable = g_object_ref (cancellable);
data->cancellable_id = g_cancellable_connect (data->cancellable,
G_CALLBACK (http_client_check_cancelled_cb),
data,
NULL);
g_simple_async_result_set_check_cancellable (data->res, data->cancellable);
}
auth = g_slice_new0 (CheckAuthData);
auth->username = g_strdup (username);
auth->password = g_strdup (password);
g_signal_connect_data (data->session,
"authenticate",
G_CALLBACK (http_client_authenticate),
auth,
http_client_check_auth_data_free,
0);
soup_session_queue_message (data->session, data->msg, http_client_check_response_cb, data);
}
|
Vulnerable
|
[
"CWE-310"
] |
gnome-online-accounts
|
ecad8142e9ac519b9fc74b96dcb5531052bbffe1
|
2.771911515351138e+38
| 47 |
Guard against invalid SSL certificates
None of the branded providers (eg., Google, Facebook and Windows Live)
should ever have an invalid certificate. So set "ssl-strict" on the
SoupSession object being used by GoaWebView.
Providers like ownCloud and Exchange might have to deal with
certificates that are not up to the mark. eg., self-signed
certificates. For those, show a warning when the account is being
created, and only proceed if the user decides to ignore it. In any
case, save the status of the certificate that was used to create the
account. So an account created with a valid certificate will never
work with an invalid one, and one created with an invalid certificate
will not throw any further warnings.
Fixes: CVE-2013-0240
| 1 |
static int send_ietf_00(handler_ctx *hctx, mod_wstunnel_frame_type_t type, const char *payload, size_t siz) {
static const char head = 0; /* 0x00 */
static const char tail = ~0; /* 0xff */
request_st * const r = hctx->gw.r;
char *mem;
size_t len;
switch (type) {
case MOD_WEBSOCKET_FRAME_TYPE_TEXT:
if (0 == siz) return 0;
http_chunk_append_mem(r, &head, 1);
http_chunk_append_mem(r, payload, siz);
http_chunk_append_mem(r, &tail, 1);
len = siz+2;
break;
case MOD_WEBSOCKET_FRAME_TYPE_BIN:
if (0 == siz) return 0;
http_chunk_append_mem(r, &head, 1);
len = 4*(siz/3)+4+1;
/* avoid accumulating too much data in memory; send to tmpfile */
mem = malloc(len);
force_assert(mem);
len=li_to_base64(mem,len,(unsigned char *)payload,siz,BASE64_STANDARD);
http_chunk_append_mem(r, mem, len);
free(mem);
http_chunk_append_mem(r, &tail, 1);
len += 2;
break;
case MOD_WEBSOCKET_FRAME_TYPE_CLOSE:
http_chunk_append_mem(r, &tail, 1);
http_chunk_append_mem(r, &head, 1);
len = 2;
break;
default:
DEBUG_LOG_ERR("%s", "invalid frame type");
return -1;
}
DEBUG_LOG_DEBUG("send data to client (fd=%d), frame size=%zx",
r->con->fd, len);
return 0;
}
|
Safe
|
[
"CWE-476"
] |
lighttpd1.4
|
971773f1fae600074b46ef64f3ca1f76c227985f
|
2.4217502265920762e+38
| 41 |
[mod_wstunnel] fix crash with bad hybivers (fixes #3165)
(thx Michał Dardas)
x-ref:
"mod_wstunnel null pointer dereference"
https://redmine.lighttpd.net/issues/3165
| 0 |
int gnutls_x509_key_purpose_init(gnutls_x509_key_purposes_t * p)
{
*p = gnutls_calloc(1, sizeof(struct gnutls_x509_key_purposes_st));
if (*p == NULL) {
gnutls_assert();
return GNUTLS_E_MEMORY_ERROR;
}
return 0;
}
|
Safe
|
[] |
gnutls
|
d6972be33264ecc49a86cd0958209cd7363af1e9
|
1.7531584806986463e+38
| 10 |
eliminated double-free in the parsing of dist points
Reported by Robert Święcki.
| 0 |
print_p2r_escape (const unsigned char *msg, size_t msglen)
{
print_p2r_header ("PC_to_RDR_Escape", msg, msglen);
print_pr_data (msg, msglen, 7);
}
|
Safe
|
[
"CWE-20"
] |
gnupg
|
2183683bd633818dd031b090b5530951de76f392
|
2.3298373752644995e+38
| 5 |
Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <wk@gnupg.org>
| 0 |
static std::string GetFilePathExtension(const std::string &FileName) {
if (FileName.find_last_of(".") != std::string::npos)
return FileName.substr(FileName.find_last_of(".") + 1);
return "";
}
|
Safe
|
[
"CWE-20"
] |
tinygltf
|
52ff00a38447f06a17eab1caa2cf0730a119c751
|
2.4785472268518265e+38
| 5 |
Do not expand file path since its not necessary for glTF asset path(URI) and for security reason(`wordexp`).
| 0 |
get_mime_type_icon (FrWindow *window,
const char *mime_type)
{
GIcon *icon;
GdkPixbuf *pixbuf;
icon = g_content_type_get_icon (mime_type);
pixbuf = gth_icon_cache_get_pixbuf (window->priv->tree_icon_cache, icon);
g_object_unref (icon);
return pixbuf;
}
|
Safe
|
[
"CWE-22"
] |
file-roller
|
b147281293a8307808475e102a14857055f81631
|
2.866164354389394e+38
| 13 |
libarchive: sanitize filenames before extracting
| 0 |
STATIC ptr_t GC_unmap_end(ptr_t start, size_t bytes)
{
return (ptr_t)((word)(start + bytes) & ~(GC_page_size - 1));
}
|
Safe
|
[
"CWE-119"
] |
bdwgc
|
7292c02fac2066d39dd1bcc37d1a7054fd1e32ee
|
2.6130927973602754e+38
| 4 |
Fix malloc routines to prevent size value wrap-around
See issue #135 on Github.
* allchblk.c (GC_allochblk, GC_allochblk_nth): Use
OBJ_SZ_TO_BLOCKS_CHECKED instead of OBJ_SZ_TO_BLOCKS.
* malloc.c (GC_alloc_large): Likewise.
* alloc.c (GC_expand_hp_inner): Type of "bytes" local variable changed
from word to size_t; cast ROUNDUP_PAGESIZE argument to size_t; prevent
overflow when computing GC_heapsize+bytes > GC_max_heapsize.
* dbg_mlc.c (GC_debug_malloc, GC_debug_malloc_ignore_off_page,
GC_debug_malloc_atomic_ignore_off_page, GC_debug_generic_malloc,
GC_debug_generic_malloc_inner,
GC_debug_generic_malloc_inner_ignore_off_page,
GC_debug_malloc_stubborn, GC_debug_malloc_atomic,
GC_debug_malloc_uncollectable, GC_debug_malloc_atomic_uncollectable):
Use SIZET_SAT_ADD (instead of "+" operator) to add extra bytes to lb
value.
* fnlz_mlc.c (GC_finalized_malloc): Likewise.
* gcj_mlc.c (GC_debug_gcj_malloc): Likewise.
* include/private/gc_priv.h (ROUNDUP_GRANULE_SIZE, ROUNDED_UP_GRANULES,
ADD_SLOP, ROUNDUP_PAGESIZE): Likewise.
* include/private/gcconfig.h (GET_MEM): Likewise.
* mallocx.c (GC_malloc_many, GC_memalign): Likewise.
* os_dep.c (GC_wince_get_mem, GC_win32_get_mem): Likewise.
* typd_mlc.c (GC_malloc_explicitly_typed,
GC_malloc_explicitly_typed_ignore_off_page,
GC_calloc_explicitly_typed): Likewise.
* headers.c (GC_scratch_alloc): Change type of bytes_to_get from word
to size_t (because ROUNDUP_PAGESIZE_IF_MMAP result type changed).
* include/private/gc_priv.h: Include limits.h (unless SIZE_MAX already
defined).
* include/private/gc_priv.h (GC_SIZE_MAX, GC_SQRT_SIZE_MAX): Move from
malloc.c file.
* include/private/gc_priv.h (SIZET_SAT_ADD): New macro (defined before
include gcconfig.h).
* include/private/gc_priv.h (EXTRA_BYTES, GC_page_size): Change type
to size_t.
* os_dep.c (GC_page_size): Likewise.
* include/private/gc_priv.h (ROUNDUP_GRANULE_SIZE, ROUNDED_UP_GRANULES,
ADD_SLOP, ROUNDUP_PAGESIZE): Add comment about the argument.
* include/private/gcconfig.h (GET_MEM): Likewise.
* include/private/gc_priv.h (ROUNDUP_GRANULE_SIZE, ROUNDED_UP_GRANULES,
ADD_SLOP, OBJ_SZ_TO_BLOCKS, ROUNDUP_PAGESIZE,
ROUNDUP_PAGESIZE_IF_MMAP): Rename argument to "lb".
* include/private/gc_priv.h (OBJ_SZ_TO_BLOCKS_CHECKED): New macro.
* include/private/gcconfig.h (GC_win32_get_mem, GC_wince_get_mem,
GC_unix_get_mem): Change argument type from word to int.
* os_dep.c (GC_unix_mmap_get_mem, GC_unix_get_mem,
GC_unix_sbrk_get_mem, GC_wince_get_mem, GC_win32_get_mem): Likewise.
* malloc.c (GC_alloc_large_and_clear): Call OBJ_SZ_TO_BLOCKS only
if no value wrap around is guaranteed.
* malloc.c (GC_generic_malloc): Do not check for lb_rounded < lb case
(because ROUNDED_UP_GRANULES and GRANULES_TO_BYTES guarantees no value
wrap around).
* mallocx.c (GC_generic_malloc_ignore_off_page): Likewise.
* misc.c (GC_init_size_map): Change "i" local variable type from int
to size_t.
* os_dep.c (GC_write_fault_handler, catch_exception_raise): Likewise.
* misc.c (GC_envfile_init): Cast len to size_t when passed to
ROUNDUP_PAGESIZE_IF_MMAP.
* os_dep.c (GC_setpagesize): Cast GC_sysinfo.dwPageSize and
GETPAGESIZE() to size_t (when setting GC_page_size).
* os_dep.c (GC_unix_mmap_get_mem, GC_unmap_start, GC_remove_protection):
Expand ROUNDUP_PAGESIZE macro but without value wrap-around checking
(the argument is of word type).
* os_dep.c (GC_unix_mmap_get_mem): Replace -GC_page_size with
~GC_page_size+1 (because GC_page_size is unsigned); remove redundant
cast to size_t.
* os_dep.c (GC_unix_sbrk_get_mem): Add explicit cast of GC_page_size
to SBRK_ARG_T.
* os_dep.c (GC_wince_get_mem): Change type of res_bytes local variable
to size_t.
* typd_mlc.c: Do not include limits.h.
* typd_mlc.c (GC_SIZE_MAX, GC_SQRT_SIZE_MAX): Remove (as defined in
gc_priv.h now).
| 0 |
static int init_wvc_bitstream (WavpackStream *wps, WavpackMetadata *wpmd)
{
if (!wpmd->byte_length || (wpmd->byte_length & 1))
return FALSE;
bs_open_read (&wps->wvcbits, wpmd->data, (unsigned char *) wpmd->data + wpmd->byte_length);
return TRUE;
}
|
Safe
|
[
"CWE-125"
] |
WavPack
|
4bc05fc490b66ef2d45b1de26abf1455b486b0dc
|
2.3418984178141595e+38
| 8 |
fixes for 4 fuzz failures posted to SourceForge mailing list
| 0 |
static void ext4_init_journal_params(struct super_block *sb, journal_t *journal)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
journal->j_commit_interval = sbi->s_commit_interval;
journal->j_min_batch_time = sbi->s_min_batch_time;
journal->j_max_batch_time = sbi->s_max_batch_time;
write_lock(&journal->j_state_lock);
if (test_opt(sb, BARRIER))
journal->j_flags |= JBD2_BARRIER;
else
journal->j_flags &= ~JBD2_BARRIER;
if (test_opt(sb, DATA_ERR_ABORT))
journal->j_flags |= JBD2_ABORT_ON_SYNCDATA_ERR;
else
journal->j_flags &= ~JBD2_ABORT_ON_SYNCDATA_ERR;
write_unlock(&journal->j_state_lock);
}
|
Safe
|
[
"CWE-362"
] |
linux
|
ea3d7209ca01da209cda6f0dea8be9cc4b7a933b
|
1.7966399467905255e+38
| 19 |
ext4: fix races between page faults and hole punching
Currently, page faults and hole punching are completely unsynchronized.
This can result in page fault faulting in a page into a range that we
are punching after truncate_pagecache_range() has been called and thus
we can end up with a page mapped to disk blocks that will be shortly
freed. Filesystem corruption will shortly follow. Note that the same
race is avoided for truncate by checking page fault offset against
i_size but there isn't similar mechanism available for punching holes.
Fix the problem by creating new rw semaphore i_mmap_sem in inode and
grab it for writing over truncate, hole punching, and other functions
removing blocks from extent tree and for read over page faults. We
cannot easily use i_data_sem for this since that ranks below transaction
start and we need something ranking above it so that it can be held over
the whole truncate / hole punching operation. Also remove various
workarounds we had in the code to reduce race window when page fault
could have created pages with stale mapping information.
Signed-off-by: Jan Kara <jack@suse.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
| 0 |
int __form_query(int id,
const char *name,
int type,
unsigned char *packet,
int maxlen)
{
struct resolv_header h;
struct resolv_question q;
int i, j;
memset(&h, 0, sizeof(h));
h.id = id;
h.qdcount = 1;
q.dotted = (char *) name;
q.qtype = type;
q.qclass = C_IN; /* CLASS_IN */
i = __encode_header(&h, packet, maxlen);
if (i < 0)
return i;
j = __encode_question(&q, packet + i, maxlen - i);
if (j < 0)
return j;
return i + j;
}
|
Safe
|
[
"CWE-79"
] |
uclibc-ng
|
0f822af0445e5348ce7b7bd8ce1204244f31d174
|
9.937451489088433e+37
| 28 |
libc/inet/resolv.c: add __hnbad to check DNS entries for validity using the same rules glibc does
also call __hnbad in some places to check answers
| 0 |
doGlblProcessInit(void)
{
struct sigaction sigAct;
int num_fds;
int i;
DEFiRet;
thrdInit();
if( !(Debug || NoFork) )
{
DBGPRINTF("Checking pidfile.\n");
if (!check_pid(PidFile))
{
memset(&sigAct, 0, sizeof (sigAct));
sigemptyset(&sigAct.sa_mask);
sigAct.sa_handler = doexit;
sigaction(SIGTERM, &sigAct, NULL);
if (fork()) {
/* Parent process
*/
sleep(300);
/* Not reached unless something major went wrong. 5
* minutes should be a fair amount of time to wait.
* Please note that this procedure is important since
* the father must not exit before syslogd isn't
* initialized or the klogd won't be able to flush its
* logs. -Joey
*/
exit(1); /* "good" exit - after forking, not diasabling anything */
}
num_fds = getdtablesize();
close(0);
/* we keep stdout and stderr open in case we have to emit something */
for (i = 3; i < num_fds; i++)
(void) close(i);
untty();
}
else
{
fputs(" Already running.\n", stderr);
exit(1); /* "good" exit, done if syslogd is already running */
}
}
/* tuck my process id away */
DBGPRINTF("Writing pidfile %s.\n", PidFile);
if (!check_pid(PidFile))
{
if (!write_pid(PidFile))
{
fputs("Can't write pid.\n", stderr);
exit(1); /* exit during startup - questionable */
}
}
else
{
fputs("Pidfile (and pid) already exist.\n", stderr);
exit(1); /* exit during startup - questionable */
}
myPid = getpid(); /* save our pid for further testing (also used for messages) */
memset(&sigAct, 0, sizeof (sigAct));
sigemptyset(&sigAct.sa_mask);
sigAct.sa_handler = sigsegvHdlr;
sigaction(SIGSEGV, &sigAct, NULL);
sigAct.sa_handler = sigsegvHdlr;
sigaction(SIGABRT, &sigAct, NULL);
sigAct.sa_handler = doDie;
sigaction(SIGTERM, &sigAct, NULL);
sigAct.sa_handler = Debug ? doDie : SIG_IGN;
sigaction(SIGINT, &sigAct, NULL);
sigaction(SIGQUIT, &sigAct, NULL);
sigAct.sa_handler = reapchild;
sigaction(SIGCHLD, &sigAct, NULL);
sigAct.sa_handler = Debug ? debug_switch : SIG_IGN;
sigaction(SIGUSR1, &sigAct, NULL);
sigAct.sa_handler = SIG_IGN;
sigaction(SIGPIPE, &sigAct, NULL);
sigaction(SIGXFSZ, &sigAct, NULL); /* do not abort if 2gig file limit is hit */
RETiRet;
}
|
Safe
|
[
"CWE-119"
] |
rsyslog
|
1ca6cc236d1dabf1633238b873fb1c057e52f95e
|
2.814404967163166e+38
| 85 |
bugfix: off-by-one(two) bug in legacy syslog parser
| 0 |
void send_response() override {}
|
Safe
|
[
"CWE-770"
] |
ceph
|
ab29bed2fc9f961fe895de1086a8208e21ddaddc
|
7.942591827462483e+37
| 1 |
rgw: fix issues with 'enforce bounds' patch
The patch to enforce bounds on max-keys/max-uploads/max-parts had a few
issues that would prevent us from compiling it. Instead of changing the
code provided by the submitter, we're addressing them in a separate
commit to maintain the DCO.
Signed-off-by: Joao Eduardo Luis <joao@suse.de>
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 29bc434a6a81a2e5c5b8cfc4c8d5c82ca5bf538a)
mimic specific fixes:
As the largeish change from master g_conf() isn't in mimic yet, use the g_conf
global structure, also make rgw_op use the value from req_info ceph context as
we do for all the requests
| 0 |
ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
{
return cpu_show_common(dev, attr, buf, X86_BUG_SPECTRE_V2);
}
|
Safe
|
[] |
linux
|
a2059825986a1c8143fd6698774fa9d83733bb11
|
5.1317382291442563e+36
| 4 |
x86/speculation: Enable Spectre v1 swapgs mitigations
The previous commit added macro calls in the entry code which mitigate the
Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are
enabled. Enable those features where applicable.
The mitigations may be disabled with "nospectre_v1" or "mitigations=off".
There are different features which can affect the risk of attack:
- When FSGSBASE is enabled, unprivileged users are able to place any
value in GS, using the wrgsbase instruction. This means they can
write a GS value which points to any value in kernel space, which can
be useful with the following gadget in an interrupt/exception/NMI
handler:
if (coming from user space)
swapgs
mov %gs:<percpu_offset>, %reg1
// dependent load or store based on the value of %reg
// for example: mov %(reg1), %reg2
If an interrupt is coming from user space, and the entry code
speculatively skips the swapgs (due to user branch mistraining), it
may speculatively execute the GS-based load and a subsequent dependent
load or store, exposing the kernel data to an L1 side channel leak.
Note that, on Intel, a similar attack exists in the above gadget when
coming from kernel space, if the swapgs gets speculatively executed to
switch back to the user GS. On AMD, this variant isn't possible
because swapgs is serializing with respect to future GS-based
accesses.
NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case
doesn't exist quite yet.
- When FSGSBASE is disabled, the issue is mitigated somewhat because
unprivileged users must use prctl(ARCH_SET_GS) to set GS, which
restricts GS values to user space addresses only. That means the
gadget would need an additional step, since the target kernel address
needs to be read from user space first. Something like:
if (coming from user space)
swapgs
mov %gs:<percpu_offset>, %reg1
mov (%reg1), %reg2
// dependent load or store based on the value of %reg2
// for example: mov (%reg2), %reg3
It's difficult to audit for this gadget in all the handlers, so while
there are no known instances of it, it's entirely possible that it
exists somewhere (or could be introduced in the future). Without
tooling to analyze all such code paths, consider it vulnerable.
Effects of SMAP on the !FSGSBASE case:
- If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not
susceptible to Meltdown), the kernel is prevented from speculatively
reading user space memory, even L1 cached values. This effectively
disables the !FSGSBASE attack vector.
- If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP
still prevents the kernel from speculatively reading user space
memory. But it does *not* prevent the kernel from reading the
user value from L1, if it has already been cached. This is probably
only a small hurdle for an attacker to overcome.
Thanks to Dave Hansen for contributing the speculative_smap() function.
Thanks to Andrew Cooper for providing the inside scoop on whether swapgs
is serializing on AMD.
[ tglx: Fixed the USER fence decision and polished the comment as suggested
by Dave Hansen ]
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
| 0 |
static int llc_ui_getsockopt(struct socket *sock, int level, int optname,
char __user *optval, int __user *optlen)
{
struct sock *sk = sock->sk;
struct llc_sock *llc = llc_sk(sk);
int val = 0, len = 0, rc = -EINVAL;
lock_sock(sk);
if (unlikely(level != SOL_LLC))
goto out;
rc = get_user(len, optlen);
if (rc)
goto out;
rc = -EINVAL;
if (len != sizeof(int))
goto out;
switch (optname) {
case LLC_OPT_RETRY:
val = llc->n2; break;
case LLC_OPT_SIZE:
val = llc->n1; break;
case LLC_OPT_ACK_TMR_EXP:
val = llc->ack_timer.expire / HZ; break;
case LLC_OPT_P_TMR_EXP:
val = llc->pf_cycle_timer.expire / HZ; break;
case LLC_OPT_REJ_TMR_EXP:
val = llc->rej_sent_timer.expire / HZ; break;
case LLC_OPT_BUSY_TMR_EXP:
val = llc->busy_state_timer.expire / HZ; break;
case LLC_OPT_TX_WIN:
val = llc->k; break;
case LLC_OPT_RX_WIN:
val = llc->rw; break;
case LLC_OPT_PKTINFO:
val = (llc->cmsg_flags & LLC_CMSG_PKTINFO) != 0;
break;
default:
rc = -ENOPROTOOPT;
goto out;
}
rc = 0;
if (put_user(len, optlen) || copy_to_user(optval, &val, len))
rc = -EFAULT;
out:
release_sock(sk);
return rc;
}
|
Safe
|
[
"CWE-200"
] |
net
|
b8670c09f37bdf2847cc44f36511a53afc6161fd
|
5.3872399951713665e+34
| 47 |
net: fix infoleak in llc
The stack object “info” has a total size of 12 bytes. Its last byte
is padding which is not initialized and leaked via “put_cmsg”.
Signed-off-by: Kangjie Lu <kjlu@gatech.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
static int php_is_file_ok(const cwd_state *state) /* {{{ */
{
struct stat buf;
if (php_sys_stat(state->cwd, &buf) == 0 && S_ISREG(buf.st_mode))
return (0);
return (1);
}
|
Safe
|
[
"CWE-190"
] |
php-src
|
0218acb7e756a469099c4ccfb22bce6c2bd1ef87
|
1.9124373265690725e+38
| 9 |
Fix for bug #72513
| 0 |
static void shm_open(struct vm_area_struct *vma)
{
struct file *file = vma->vm_file;
struct shm_file_data *sfd = shm_file_data(file);
struct shmid_kernel *shp;
shp = shm_lock(sfd->ns, sfd->id);
shp->shm_atim = get_seconds();
shp->shm_lprid = task_tgid_vnr(current);
shp->shm_nattch++;
shm_unlock(shp);
}
|
Safe
|
[
"CWE-362",
"CWE-401"
] |
linux
|
b9a532277938798b53178d5a66af6e2915cb27cf
|
1.769113717070962e+38
| 12 |
Initialize msg/shm IPC objects before doing ipc_addid()
As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
having initialized the IPC object state. Yes, we initialize the IPC
object in a locked state, but with all the lockless RCU lookup work,
that IPC object lock no longer means that the state cannot be seen.
We already did this for the IPC semaphore code (see commit e8577d1f0329:
"ipc/sem.c: fully initialize sem_array before making it visible") but we
clearly forgot about msg and shm.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
_pango_emoji_iter_next (PangoEmojiIter *iter)
{
PangoEmojiType current_emoji_type = PANGO_EMOJI_TYPE_INVALID;
if (iter->end == iter->text_end)
return FALSE;
iter->start = iter->end;
for (; iter->end < iter->text_end; iter->end = g_utf8_next_char (iter->end))
{
gunichar ch = g_utf8_get_char (iter->end);
/* Except at the beginning, ZWJ just carries over the emoji or neutral
* text type, VS15 & VS16 we just carry over as well, since we already
* resolved those through lookahead. Also, don't downgrade to text
* presentation for emoji that are part of a ZWJ sequence, example
* U+1F441 U+200D U+1F5E8, eye (text presentation) + ZWJ + left speech
* bubble, see below. */
if ((!(ch == kZeroWidthJoinerCharacter && !iter->is_emoji) &&
ch != kVariationSelector15Character &&
ch != kVariationSelector16Character &&
ch != kCombiningEnclosingCircleBackslashCharacter &&
!_pango_Is_Regional_Indicator(ch) &&
!((ch == kLeftSpeechBubbleCharacter ||
ch == kRainbowCharacter ||
ch == kMaleSignCharacter ||
ch == kFemaleSignCharacter ||
ch == kStaffOfAesculapiusCharacter) &&
!iter->is_emoji)) ||
current_emoji_type == PANGO_EMOJI_TYPE_INVALID) {
current_emoji_type = _pango_get_emoji_type (ch);
}
if (g_utf8_next_char (iter->end) < iter->text_end) /* Optimize. */
{
gunichar peek_char = g_utf8_get_char (g_utf8_next_char (iter->end));
/* Variation Selectors */
if (current_emoji_type ==
PANGO_EMOJI_TYPE_EMOJI_EMOJI &&
peek_char == kVariationSelector15Character) {
current_emoji_type = PANGO_EMOJI_TYPE_EMOJI_TEXT;
}
if ((current_emoji_type ==
PANGO_EMOJI_TYPE_EMOJI_TEXT ||
_pango_Is_Emoji_Keycap_Base(ch)) &&
peek_char == kVariationSelector16Character) {
current_emoji_type = PANGO_EMOJI_TYPE_EMOJI_EMOJI;
}
/* Combining characters Keycap... */
if (_pango_Is_Emoji_Keycap_Base(ch) &&
peek_char == kCombiningEnclosingKeycapCharacter) {
current_emoji_type = PANGO_EMOJI_TYPE_EMOJI_EMOJI;
};
/* Regional indicators */
if (_pango_Is_Regional_Indicator(ch) &&
_pango_Is_Regional_Indicator(peek_char)) {
current_emoji_type = PANGO_EMOJI_TYPE_EMOJI_EMOJI;
}
/* Upgrade text presentation emoji to emoji presentation when followed by
* ZWJ, Example U+1F441 U+200D U+1F5E8, eye + ZWJ + left speech bubble. */
if ((ch == kEyeCharacter ||
ch == kWavingWhiteFlagCharacter) &&
peek_char == kZeroWidthJoinerCharacter) {
current_emoji_type = PANGO_EMOJI_TYPE_EMOJI_EMOJI;
}
}
if (iter->is_emoji == (gboolean) 2)
iter->is_emoji = !PANGO_EMOJI_TYPE_IS_EMOJI (current_emoji_type);
if (iter->is_emoji == PANGO_EMOJI_TYPE_IS_EMOJI (current_emoji_type))
{
iter->is_emoji = !PANGO_EMOJI_TYPE_IS_EMOJI (current_emoji_type);
return TRUE;
}
}
iter->is_emoji = PANGO_EMOJI_TYPE_IS_EMOJI (current_emoji_type);
return TRUE;
}
|
Vulnerable
|
[
"CWE-119",
"CWE-787"
] |
pango
|
71aaeaf020340412b8d012fe23a556c0420eda5f
|
7.793748913256423e+37
| 86 |
Prevent an assertion with invalid Unicode sequences
Invalid Unicode sequences, such as 0x2665 0xfe0e 0xfe0f,
can trick the Emoji iter code into returning an empty
segment, which then triggers an assertion in the itemizer.
Prevent this by ensuring that we make progress.
This issue was reported by Jeffrey M.
| 1 |
static void save_xattr_block(long long start, int offset)
{
struct hash_entry *hash_entry = malloc(sizeof(*hash_entry));
int hash = start & 0xffff;
TRACE("save_xattr_block: start %lld, offset %d\n", start, offset);
if(hash_entry == NULL)
MEM_ERROR();
hash_entry->start = start;
hash_entry->offset = offset;
hash_entry->next = hash_table[hash];
hash_table[hash] = hash_entry;
}
|
Safe
|
[
"CWE-20",
"CWE-190"
] |
squashfs-tools
|
f95864afe8833fe3ad782d714b41378e860977b1
|
1.9685061021772735e+38
| 15 |
unsquashfs-4: Add more sanity checks + fix CVE-2015-4645/6
Add more filesystem table sanity checks to Unsquashfs-4 and
also properly fix CVE-2015-4645 and CVE-2015-4646.
The CVEs were raised due to Unsquashfs having variable
overflow and stack overflow in a number of vulnerable
functions.
The suggested patch only "fixed" one such function and fixed
it badly, and so it was buggy and introduced extra bugs!
The suggested patch was not only buggy, but, it used the
essentially wrong approach too. It was "fixing" the
symptom but not the cause. The symptom is wrong values
causing overflow, the cause is filesystem corruption.
This corruption should be detected and the filesystem
rejected *before* trying to allocate memory.
This patch applies the following fixes:
1. The filesystem super-block tables are checked, and the values
must match across the filesystem.
This will trap corrupted filesystems created by Mksquashfs.
2. The maximum (theoretical) size the filesystem tables could grow
to was analysed, and some variables were increased from int to
long long.
This analysis has been added as comments.
3. Stack allocation was removed, and a shared buffer (which is
checked and increased as necessary) is used to read the
table indexes.
Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
| 0 |