func (string) | target (string) | cwe (list) | project (string) | commit_id (string) | hash (string) | size (int64) | message (string) | vul (int64)
---|---|---|---|---|---|---|---|---
bool operator()(Face_const_handle f) const { return D.mark(f); }
|
Safe
|
[
"CWE-269"
] |
cgal
|
618b409b0fbcef7cb536a4134ae3a424ef5aae45
|
8.270289542395504e+37
| 1 |
Fix Nef_2 and Nef_S2 IO
| 0 |
static long uptomult(long x, long y)
{
assert(x >= 0);
return ((x + y - 1) / y) * y;
}
|
Safe
|
[
"CWE-189"
] |
jasper
|
3c55b399c36ef46befcb21e4ebc4799367f89684
|
2.6259091865995187e+38
| 5 |
At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems.
| 0 |
virtual ~Rowid_seq_cursor()
{
if (ref_buffer)
my_free(ref_buffer);
if (io_cache)
{
end_slave_io_cache(io_cache);
my_free(io_cache);
io_cache= NULL;
}
}
|
Safe
|
[] |
server
|
ba4927e520190bbad763bb5260ae154f29a61231
|
2.5403199219504615e+38
| 11 |
MDEV-19398: Assertion `item1->type() == Item::FIELD_ITEM ...
Window Functions code tries to minimize the number of times it
needs to sort the select's resultset by finding "compatible"
OVER (PARTITION BY ... ORDER BY ...) clauses.
This employs compare_order_elements(). That function assumed that
the order expressions are Item_field-derived objects (that refer
to a temp.table). But this is not always the case: one can
construct queries where the order expressions are arbitrary item expressions.
Add handling for such expressions: sort them according to the window
specification they appeared in.
This means we cannot detect that two compatible PARTITION BY clauses
that use expressions can share the sorting step.
But at least we won't crash.
| 0 |
TPMA_SESSION_Marshal(TPMA_SESSION *source, BYTE **buffer, INT32 *size)
{
UINT16 written = 0;
written += UINT8_Marshal((UINT8 *)source, buffer, size); /* libtpms changed */
return written;
}
|
Safe
|
[
"CWE-787"
] |
libtpms
|
3ef9b26cb9f28bd64d738bff9505a20d4eb56acd
|
5.682659755253781e+37
| 6 |
tpm2: Add maxSize parameter to TPM2B_Marshal for sanity checks
Add maxSize parameter to TPM2B_Marshal and assert on it checking
the size of the data intended to be marshaled versus the maximum
buffer size.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
| 0 |
int ha_maria::end_bulk_insert()
{
int first_error, error;
my_bool abort= file->s->deleting;
DBUG_ENTER("ha_maria::end_bulk_insert");
if ((first_error= maria_end_bulk_insert(file, abort)))
abort= 1;
if ((error= maria_extra(file, HA_EXTRA_NO_CACHE, 0)))
{
first_error= first_error ? first_error : error;
abort= 1;
}
if (!abort && can_enable_indexes)
if ((error= enable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE)))
first_error= first_error ? first_error : error;
if (bulk_insert_single_undo != BULK_INSERT_NONE)
{
/*
Table was transactional just before start_bulk_insert().
No need to flush pages if we did a repair (which already flushed).
*/
if ((error= _ma_reenable_logging_for_table(file,
bulk_insert_single_undo ==
BULK_INSERT_SINGLE_UNDO_AND_NO_REPAIR)))
first_error= first_error ? first_error : error;
bulk_insert_single_undo= BULK_INSERT_NONE; // Safety
}
can_enable_indexes= 0;
DBUG_RETURN(first_error);
}
|
Safe
|
[
"CWE-400"
] |
server
|
9e39d0ae44595dbd1570805d97c9c874778a6be8
|
2.6765852883361507e+38
| 34 |
MDEV-25787 Bug report: crash on SELECT DISTINCT thousands_blob_fields
fix a debug assert to account for not opened temp tables
| 0 |
struct net_device *dev_get_by_index(int ifindex)
{
struct net_device *dev;
read_lock(&dev_base_lock);
dev = __dev_get_by_index(ifindex);
if (dev)
dev_hold(dev);
read_unlock(&dev_base_lock);
return dev;
}
|
Safe
|
[] |
linux
|
e89e9cf539a28df7d0eb1d0a545368e9920b34ac
|
2.751459904411228e+38
| 11 |
[IPv4/IPv6]: UFO Scatter-gather approach
Attached is kernel patch for UDP Fragmentation Offload (UFO) feature.
1. This patch incorporate the review comments by Jeff Garzik.
2. Renamed USO as UFO (UDP Fragmentation Offload)
3. udp sendfile support with UFO
This patches uses scatter-gather feature of skb to generate large UDP
datagram. Below is a "how-to" on changes required in network device
driver to use the UFO interface.
UDP Fragmentation Offload (UFO) Interface:
-------------------------------------------
UFO is a feature wherein the Linux kernel network stack will offload the
IP fragmentation functionality of large UDP datagram to hardware. This
will reduce the overhead of stack in fragmenting the large UDP datagram to
MTU sized packets
1) Drivers indicate their capability of UFO using
dev->features |= NETIF_F_UFO | NETIF_F_HW_CSUM | NETIF_F_SG
NETIF_F_HW_CSUM is required for UFO over ipv6.
2) UFO packet will be submitted for transmission using driver xmit routine.
UFO packet will have a non-zero value for
"skb_shinfo(skb)->ufo_size"
skb_shinfo(skb)->ufo_size will indicate the length of data part in each IP
fragment going out of the adapter after IP fragmentation by hardware.
skb->data will contain MAC/IP/UDP header and skb_shinfo(skb)->frags[]
contains the data payload. The skb->ip_summed will be set to CHECKSUM_HW
indicating that hardware has to do checksum calculation. Hardware should
compute the UDP checksum of complete datagram and also ip header checksum of
each fragmented IP packet.
For IPV6 the UFO provides the fragment identification-id in
skb_shinfo(skb)->ip6_frag_id. The adapter should use this ID for generating
IPv6 fragments.
Signed-off-by: Ananda Raju <ananda.raju@neterion.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (forwarded)
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
| 0 |
int compare(const string& str) const {
rgw_user u(str);
return compare(u);
}
|
Safe
|
[
"CWE-617"
] |
ceph
|
b3118cabb8060a8cc6a01c4e8264cb18e7b1745a
|
1.2663403944784189e+38
| 4 |
rgw: Remove assertions in IAM Policy
A couple of them could be triggered by user input.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
| 0 |
struct sk_buff *ip_make_skb(struct sock *sk,
struct flowi4 *fl4,
int getfrag(void *from, char *to, int offset,
int len, int odd, struct sk_buff *skb),
void *from, int length, int transhdrlen,
struct ipcm_cookie *ipc, struct rtable **rtp,
unsigned int flags)
{
struct inet_cork cork;
struct sk_buff_head queue;
int err;
if (flags & MSG_PROBE)
return NULL;
__skb_queue_head_init(&queue);
cork.flags = 0;
cork.addr = 0;
cork.opt = NULL;
err = ip_setup_cork(sk, &cork, ipc, rtp);
if (err)
return ERR_PTR(err);
err = __ip_append_data(sk, fl4, &queue, &cork,
&current->task_frag, getfrag,
from, length, transhdrlen, flags);
if (err) {
__ip_flush_pending_frames(sk, &queue, &cork);
return ERR_PTR(err);
}
return __ip_make_skb(sk, fl4, &queue, &cork);
}
|
Safe
|
[
"CWE-362"
] |
net
|
85f1bd9a7b5a79d5baa8bf44af19658f7bf77bfa
|
1.9691543507653362e+38
| 34 |
udp: consistently apply ufo or fragmentation
When iteratively building a UDP datagram with MSG_MORE and that
datagram exceeds MTU, consistently choose UFO or fragmentation.
Once skb_is_gso, always apply ufo. Conversely, once a datagram is
split across multiple skbs, do not consider ufo.
Sendpage already maintains the first invariant, only add the second.
IPv6 does not have a sendpage implementation to modify.
A gso skb must have a partial checksum, do not follow sk_no_check_tx
in udp_send_skb.
Found by syzkaller.
Fixes: e89e9cf539a2 ("[IPv4/IPv6]: UFO Scatter-gather approach")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
xfs_attr_shortform_getvalue(xfs_da_args_t *args)
{
xfs_attr_shortform_t *sf;
xfs_attr_sf_entry_t *sfe;
int i;
ASSERT(args->dp->i_afp->if_flags == XFS_IFINLINE);
sf = (xfs_attr_shortform_t *)args->dp->i_afp->if_u1.if_data;
sfe = &sf->list[0];
for (i = 0; i < sf->hdr.count;
sfe = XFS_ATTR_SF_NEXTENTRY(sfe), i++) {
if (sfe->namelen != args->namelen)
continue;
if (memcmp(args->name, sfe->nameval, args->namelen) != 0)
continue;
if (!xfs_attr_namesp_match(args->flags, sfe->flags))
continue;
if (args->flags & ATTR_KERNOVAL) {
args->valuelen = sfe->valuelen;
return -EEXIST;
}
if (args->valuelen < sfe->valuelen) {
args->valuelen = sfe->valuelen;
return -ERANGE;
}
args->valuelen = sfe->valuelen;
memcpy(args->value, &sfe->nameval[args->namelen],
args->valuelen);
return -EEXIST;
}
return -ENOATTR;
}
|
Safe
|
[
"CWE-476"
] |
linux
|
bb3d48dcf86a97dc25fe9fc2c11938e19cb4399a
|
2.5065222566420296e+38
| 32 |
xfs: don't call xfs_da_shrink_inode with NULL bp
xfs_attr3_leaf_create may have errored out before instantiating a buffer,
for example if the blkno is out of range. In that case there is no work
to do to remove it, and in fact xfs_da_shrink_inode will lead to an oops
if we try.
This also seems to fix a flaw where the original error from
xfs_attr3_leaf_create gets overwritten in the cleanup case, and it
removes a pointless assignment to bp which isn't used after this.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199969
Reported-by: Xu, Wen <wen.xu@gatech.edu>
Tested-by: Xu, Wen <wen.xu@gatech.edu>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
| 0 |
TIFFWriteDirectoryTagSlong(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, uint16 tag, int32 value)
{
if (dir==NULL)
{
(*ndir)++;
return(1);
}
return(TIFFWriteDirectoryTagCheckedSlong(tif,ndir,dir,tag,value));
}
|
Safe
|
[
"CWE-617"
] |
libtiff
|
de144fd228e4be8aa484c3caf3d814b6fa88c6d9
|
1.951144015160532e+38
| 9 |
TIFFWriteDirectorySec: avoid assertion. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2795. CVE-2018-10963
| 0 |
static inline Quantum GetPixelCompositeMask(const Image *magick_restrict image,
const Quantum *magick_restrict pixel)
{
if (image->channel_map[CompositeMaskPixelChannel].traits == UndefinedPixelTrait)
return((Quantum) QuantumRange);
return(pixel[image->channel_map[CompositeMaskPixelChannel].offset]);
}
|
Safe
|
[
"CWE-20",
"CWE-125"
] |
ImageMagick
|
8187d2d8fd010d2d6b1a3a8edd935beec404dddc
|
9.221097651158703e+37
| 7 |
https://github.com/ImageMagick/ImageMagick/issues/1610
| 0 |
xfs_inode_item_format_attr_fork(
struct xfs_inode_log_item *iip,
struct xfs_inode_log_format *ilf,
struct xfs_log_vec *lv,
struct xfs_log_iovec **vecp)
{
struct xfs_inode *ip = iip->ili_inode;
size_t data_bytes;
switch (ip->i_d.di_aformat) {
case XFS_DINODE_FMT_EXTENTS:
iip->ili_fields &=
~(XFS_ILOG_ADATA | XFS_ILOG_ABROOT);
if ((iip->ili_fields & XFS_ILOG_AEXT) &&
ip->i_d.di_anextents > 0 &&
ip->i_afp->if_bytes > 0) {
struct xfs_bmbt_rec *p;
ASSERT(ip->i_afp->if_bytes / sizeof(xfs_bmbt_rec_t) ==
ip->i_d.di_anextents);
ASSERT(ip->i_afp->if_u1.if_extents != NULL);
p = xlog_prepare_iovec(lv, vecp, XLOG_REG_TYPE_IATTR_EXT);
data_bytes = xfs_iextents_copy(ip, p, XFS_ATTR_FORK);
xlog_finish_iovec(lv, *vecp, data_bytes);
ilf->ilf_asize = data_bytes;
ilf->ilf_size++;
} else {
iip->ili_fields &= ~XFS_ILOG_AEXT;
}
break;
case XFS_DINODE_FMT_BTREE:
iip->ili_fields &=
~(XFS_ILOG_ADATA | XFS_ILOG_AEXT);
if ((iip->ili_fields & XFS_ILOG_ABROOT) &&
ip->i_afp->if_broot_bytes > 0) {
ASSERT(ip->i_afp->if_broot != NULL);
xlog_copy_iovec(lv, vecp, XLOG_REG_TYPE_IATTR_BROOT,
ip->i_afp->if_broot,
ip->i_afp->if_broot_bytes);
ilf->ilf_asize = ip->i_afp->if_broot_bytes;
ilf->ilf_size++;
} else {
iip->ili_fields &= ~XFS_ILOG_ABROOT;
}
break;
case XFS_DINODE_FMT_LOCAL:
iip->ili_fields &=
~(XFS_ILOG_AEXT | XFS_ILOG_ABROOT);
if ((iip->ili_fields & XFS_ILOG_ADATA) &&
ip->i_afp->if_bytes > 0) {
/*
* Round i_bytes up to a word boundary.
* The underlying memory is guaranteed
* to be there by xfs_idata_realloc().
*/
data_bytes = roundup(ip->i_afp->if_bytes, 4);
ASSERT(ip->i_afp->if_real_bytes == 0 ||
ip->i_afp->if_real_bytes == data_bytes);
ASSERT(ip->i_afp->if_u1.if_data != NULL);
xlog_copy_iovec(lv, vecp, XLOG_REG_TYPE_IATTR_LOCAL,
ip->i_afp->if_u1.if_data,
data_bytes);
ilf->ilf_asize = (unsigned)data_bytes;
ilf->ilf_size++;
} else {
iip->ili_fields &= ~XFS_ILOG_ADATA;
}
break;
default:
ASSERT(0);
break;
}
}
|
Safe
|
[
"CWE-19"
] |
linux
|
fc0561cefc04e7803c0f6501ca4f310a502f65b8
|
3.2230123482406838e+38
| 79 |
xfs: optimise away log forces on timestamp updates for fdatasync
xfs: timestamp updates cause excessive fdatasync log traffic
Sage Weil reported that a ceph test workload was writing to the
log on every fdatasync during an overwrite workload. Event tracing
showed that the only metadata modification being made was the
timestamp updates during the write(2) syscall, but fdatasync(2)
is supposed to ignore them. The key observation was that the
transactions in the log all looked like this:
INODE: #regs: 4 ino: 0x8b flags: 0x45 dsize: 32
And contained a flags field of 0x45 or 0x85, and had data and
attribute forks following the inode core. This means that the
timestamp updates were triggering dirty relogging of previously
logged parts of the inode that hadn't yet been flushed back to
disk.
There are two parts to this problem. The first is that XFS relogs
dirty regions in subsequent transactions, so it carries around the
fields that have been dirtied since the last time the inode was
written back to disk, not since the last time the inode was forced
into the log.
The second part is that on v5 filesystems, the inode change count
update during inode dirtying also sets the XFS_ILOG_CORE flag, so
on v5 filesystems this makes a timestamp update dirty the entire
inode.
As a result when fdatasync is run, it looks at the dirty fields in
the inode, and sees more than just the timestamp flag, even though
the only metadata change since the last fdatasync was just the
timestamps. Hence we force the log on every subsequent fdatasync
even though it is not needed.
To fix this, add a new field to the inode log item that tracks
changes since the last time fsync/fdatasync forced the log to flush
the changes to the journal. This flag is updated when we dirty the
inode, but we do it before updating the change count so it does not
carry the "core dirty" flag from timestamp updates. The fields are
zeroed when the inode is marked clean (due to writeback/freeing) or
when an fsync/datasync forces the log. Hence if we only dirty the
timestamps on the inode between fsync/fdatasync calls, the fdatasync
will not trigger another log force.
Over 100 runs of the test program:
Ext4 baseline:
runtime: 1.63s +/- 0.24s
avg lat: 1.59ms +/- 0.24ms
iops: ~2000
XFS, vanilla kernel:
runtime: 2.45s +/- 0.18s
avg lat: 2.39ms +/- 0.18ms
log forces: ~400/s
iops: ~1000
XFS, patched kernel:
runtime: 1.49s +/- 0.26s
avg lat: 1.46ms +/- 0.25ms
log forces: ~30/s
iops: ~1500
Reported-by: Sage Weil <sage@redhat.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
| 0 |
e_util_free_object_slist (GSList *objects)
{
g_slist_free_full (objects, (GDestroyNotify) g_object_unref);
}
|
Safe
|
[
"CWE-295"
] |
evolution-data-server
|
6672b8236139bd6ef41ecb915f4c72e2a052dba5
|
2.1500693304687523e+38
| 4 |
Let child source with 'none' authentication method use collection source authentication
That might be the same as having set NULL authentication method.
Related to https://gitlab.gnome.org/GNOME/evolution-ews/issues/27
| 0 |
eog_multipage_error_message_area_new(void)
{
static GOnce evince_is_available = G_ONCE_INIT;
EogErrorMessageAreaButtons buttons = EOG_ERROR_MESSAGE_AREA_NO_BUTTONS;
GtkWidget *message_area;
const gchar *info_message;
g_once (&evince_is_available, _check_evince_availability, NULL);
if (GPOINTER_TO_BOOLEAN (evince_is_available.retval))
{
buttons = EOG_ERROR_MESSAGE_AREA_OPEN_WITH_EVINCE_BUTTON;
info_message = N_("This image contains multiple pages. "
"Image Viewer displays only the first page.\n"
"Do you want to open the image with the Document Viewer to see all pages?");
} else {
buttons = EOG_ERROR_MESSAGE_AREA_NO_BUTTONS;
info_message = N_("This image contains multiple pages. "
"Image Viewer displays only the first page.\n"
"You may want to install the Document Viewer to see all pages.");
}
message_area = create_info_message_area (gettext (info_message),
NULL,
buttons);
gtk_info_bar_set_show_close_button (GTK_INFO_BAR (message_area),
TRUE);
return message_area;
}
|
Safe
|
[
"CWE-787"
] |
eog
|
e99a8c00f959652fe7c10e2fa5a3a7a5c25e6af4
|
3.0964776544728817e+38
| 30 |
EogErrorMessageArea: Make sure error messages are valid UTF8
GMarkup requires valid UTF8 input strings and would cause odd
looking messages if given invalid input. This could also trigger an
out-of-bounds write in glib before 2.44.1. Reported by kaslovdmitri.
https://bugzilla.gnome.org/show_bug.cgi?id=770143
| 0 |
lys_ext_dup(struct ly_ctx *ctx, struct lys_module *mod, struct lys_ext_instance **orig, uint8_t size, void *parent,
LYEXT_PAR parent_type, struct lys_ext_instance ***new, int shallow, struct unres_schema *unres)
{
int i;
uint8_t u = 0;
struct lys_ext_instance **result;
struct unres_ext *info, *info_orig;
size_t len;
assert(new);
if (!size) {
if (orig) {
LOGINT(ctx);
return EXIT_FAILURE;
}
(*new) = NULL;
return EXIT_SUCCESS;
}
(*new) = result = calloc(size, sizeof *result);
LY_CHECK_ERR_RETURN(!result, LOGMEM(ctx), EXIT_FAILURE);
for (u = 0; u < size; u++) {
if (orig[u]) {
/* resolved extension instance, just duplicate it */
switch(orig[u]->ext_type) {
case LYEXT_FLAG:
result[u] = malloc(sizeof(struct lys_ext_instance));
LY_CHECK_ERR_GOTO(!result[u], LOGMEM(ctx), error);
break;
case LYEXT_COMPLEX:
len = ((struct lyext_plugin_complex*)orig[u]->def->plugin)->instance_size;
result[u] = calloc(1, len);
LY_CHECK_ERR_GOTO(!result[u], LOGMEM(ctx), error);
((struct lys_ext_instance_complex*)result[u])->substmt = ((struct lyext_plugin_complex*)orig[u]->def->plugin)->substmt;
/* TODO duplicate data in extension instance content */
memcpy((char *)result[u] + sizeof(**orig), (char *)orig[u] + sizeof(**orig), len - sizeof(**orig));
break;
}
/* generic part */
result[u]->def = orig[u]->def;
result[u]->flags = LYEXT_OPT_CONTENT;
result[u]->arg_value = lydict_insert(ctx, orig[u]->arg_value, 0);
result[u]->parent = parent;
result[u]->parent_type = parent_type;
result[u]->insubstmt = orig[u]->insubstmt;
result[u]->insubstmt_index = orig[u]->insubstmt_index;
result[u]->ext_type = orig[u]->ext_type;
result[u]->priv = NULL;
result[u]->nodetype = LYS_EXT;
result[u]->module = mod;
/* extensions */
result[u]->ext_size = orig[u]->ext_size;
if (lys_ext_dup(ctx, mod, orig[u]->ext, orig[u]->ext_size, result[u],
LYEXT_PAR_EXTINST, &result[u]->ext, shallow, unres)) {
goto error;
}
/* in case of shallow copy (duplication for deviation), duplicate only the link to private data
* in a new copy, otherwise (grouping instantiation) do not duplicate the private data */
if (shallow) {
result[u]->priv = orig[u]->priv;
}
} else {
/* original extension is not yet resolved, so duplicate it in unres */
i = unres_schema_find(unres, -1, &orig, UNRES_EXT);
if (i == -1) {
/* extension not found in unres */
LOGINT(ctx);
goto error;
}
info_orig = unres->str_snode[i];
info = malloc(sizeof *info);
LY_CHECK_ERR_GOTO(!info, LOGMEM(ctx), error);
info->datatype = info_orig->datatype;
if (info->datatype == LYS_IN_YIN) {
info->data.yin = lyxml_dup_elem(ctx, info_orig->data.yin, NULL, 1, 0);
} /* else TODO YANG */
info->parent = parent;
info->mod = mod;
info->parent_type = parent_type;
info->ext_index = u;
if (unres_schema_add_node(info->mod, unres, new, UNRES_EXT, (struct lys_node *)info) == -1) {
goto error;
}
}
}
return EXIT_SUCCESS;
error:
(*new) = NULL;
lys_extension_instances_free(ctx, result, u, NULL);
return EXIT_FAILURE;
}
|
Safe
|
[
"CWE-617"
] |
libyang
|
5ce30801f9ccc372bbe9b7c98bb5324b15fb010a
|
1.6046230597278191e+38
| 97 |
schema tree BUGFIX freeing nodes with no module set
Context must be passed explicitly for these cases.
Fixes #1452
| 0 |
int __init ip_rt_init(void)
{
int cpu;
ip_idents = kmalloc_array(IP_IDENTS_SZ, sizeof(*ip_idents),
GFP_KERNEL);
if (!ip_idents)
panic("IP: failed to allocate ip_idents\n");
prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents));
ip_tstamps = kcalloc(IP_IDENTS_SZ, sizeof(*ip_tstamps), GFP_KERNEL);
if (!ip_tstamps)
panic("IP: failed to allocate ip_tstamps\n");
for_each_possible_cpu(cpu) {
struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);
INIT_LIST_HEAD(&ul->head);
spin_lock_init(&ul->lock);
}
#ifdef CONFIG_IP_ROUTE_CLASSID
ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct), __alignof__(struct ip_rt_acct));
if (!ip_rt_acct)
panic("IP: failed to allocate ip_rt_acct\n");
#endif
ipv4_dst_ops.kmem_cachep =
kmem_cache_create("ip_dst_cache", sizeof(struct rtable), 0,
SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
ipv4_dst_blackhole_ops.kmem_cachep = ipv4_dst_ops.kmem_cachep;
if (dst_entries_init(&ipv4_dst_ops) < 0)
panic("IP: failed to allocate ipv4_dst_ops counter\n");
if (dst_entries_init(&ipv4_dst_blackhole_ops) < 0)
panic("IP: failed to allocate ipv4_dst_blackhole_ops counter\n");
ipv4_dst_ops.gc_thresh = ~0;
ip_rt_max_size = INT_MAX;
devinet_init();
ip_fib_init();
if (ip_rt_proc_init())
pr_err("Unable to create route proc files\n");
#ifdef CONFIG_XFRM
xfrm_init();
xfrm4_init();
#endif
rtnl_register(PF_INET, RTM_GETROUTE, inet_rtm_getroute, NULL,
RTNL_FLAG_DOIT_UNLOCKED);
#ifdef CONFIG_SYSCTL
register_pernet_subsys(&sysctl_route_ops);
#endif
register_pernet_subsys(&rt_genid_ops);
register_pernet_subsys(&ipv4_inetpeer_ops);
return 0;
}
|
Vulnerable
|
[
"CWE-327"
] |
linux
|
aa6dd211e4b1dde9d5dc25d699d35f789ae7eeba
|
6.965264938574301e+37
| 61 |
inet: use bigger hash table for IP ID generation
In commit 73f156a6e8c1 ("inetpeer: get rid of ip_id_count")
I used a very small hash table that could be abused
by patient attackers to reveal sensitive information.
Switch to a dynamic sizing, depending on RAM size.
Typical big hosts will now use 128x more storage (2 MB)
to get a similar increase in security and reduction
of hash collisions.
As a bonus, use of alloc_large_system_hash() spreads
allocated memory among all NUMA nodes.
Fixes: 73f156a6e8c1 ("inetpeer: get rid of ip_id_count")
Reported-by: Amit Klein <aksecurity@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 1 |
iso9660_finish_entry(struct archive_write *a)
{
struct iso9660 *iso9660 = a->format_data;
if (iso9660->cur_file == NULL)
return (ARCHIVE_OK);
if (archive_entry_filetype(iso9660->cur_file->entry) != AE_IFREG)
return (ARCHIVE_OK);
if (iso9660->cur_file->content.size == 0)
return (ARCHIVE_OK);
/* If there are unwritten data, write null data instead. */
while (iso9660->bytes_remaining > 0) {
size_t s;
s = (iso9660->bytes_remaining > a->null_length)?
a->null_length: (size_t)iso9660->bytes_remaining;
if (write_iso9660_data(a, a->nulls, s) < 0)
return (ARCHIVE_FATAL);
iso9660->bytes_remaining -= s;
}
if (iso9660->zisofs.making && zisofs_finish_entry(a) != ARCHIVE_OK)
return (ARCHIVE_FATAL);
/* Write padding. */
if (wb_write_padding_to_temp(a, iso9660->cur_file->cur_content->size)
!= ARCHIVE_OK)
return (ARCHIVE_FATAL);
/* Compute the logical block number. */
iso9660->cur_file->cur_content->blocks = (int)
((iso9660->cur_file->cur_content->size
+ LOGICAL_BLOCK_SIZE -1) >> LOGICAL_BLOCK_BITS);
/* Add the current file to data file list. */
isofile_add_data_file(iso9660, iso9660->cur_file);
return (ARCHIVE_OK);
}
|
Safe
|
[
"CWE-190"
] |
libarchive
|
3014e19820ea53c15c90f9d447ca3e668a0b76c6
|
1.1664801080899152e+38
| 40 |
Issue 711: Be more careful about verifying filename lengths when writing ISO9660 archives
* Don't cast size_t to int, since this can lead to overflow
on machines where sizeof(int) < sizeof(size_t)
* Check a + b > limit by writing it as
a > limit || b > limit || a + b > limit
to avoid problems when a + b wraps around.
| 0 |
const double* Magick::Image::strokeDashArray(void) const
{
return(constOptions()->strokeDashArray());
}
|
Safe
|
[
"CWE-416"
] |
ImageMagick
|
8c35502217c1879cb8257c617007282eee3fe1cc
|
3.0904723184285353e+37
| 4 |
Added missing return to avoid use after free.
| 0 |
UnicodeString::doAppend(const UnicodeString& src, int32_t srcStart, int32_t srcLength) {
if(srcLength == 0) {
return *this;
}
// pin the indices to legal values
src.pinIndices(srcStart, srcLength);
return doAppend(src.getArrayStart(), srcStart, srcLength);
}
|
Safe
|
[
"CWE-190",
"CWE-787"
] |
icu
|
b7d08bc04a4296982fcef8b6b8a354a9e4e7afca
|
6.547634367282647e+37
| 9 |
ICU-20958 Prevent SEGV_MAPERR in append
See #971
| 0 |
dissect_kafka_delete_acls_response(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree, int offset,
kafka_api_version_t api_version)
{
proto_item *subti;
proto_tree *subtree;
offset = dissect_kafka_throttle_time(tvb, pinfo, tree, offset);
subtree = proto_tree_add_subtree(tree, tvb, offset, -1,
ett_kafka_acl_creations,
&subti, "Filters");
offset = dissect_kafka_array(subtree, tvb, pinfo, offset, api_version >= 0, api_version,
&dissect_kafka_delete_acls_response_filter, NULL);
proto_item_set_end(subti, tvb, offset);
if (api_version >= 2) {
offset = dissect_kafka_tagged_fields(tvb, pinfo, tree, offset, 0);
}
return offset;
}
|
Safe
|
[
"CWE-401"
] |
wireshark
|
f4374967bbf9c12746b8ec3cd54dddada9dd353e
|
1.2782897406664115e+38
| 21 |
Kafka: Limit our decompression size.
Don't assume that the Internet has our best interests at heart when it
gives us the size of our decompression buffer. Assign an arbitrary limit
of 50 MB.
This fixes #16739 in that it takes care of
** (process:17681): WARNING **: 20:03:07.440: Dissector bug, protocol Kafka, in packet 31: ../epan/proto.c:7043: failed assertion "end >= fi->start"
which is different from the original error output. It looks like *that*
might have taken care of in one of the other recent Kafka bug fixes.
The decompression routines return a success or failure status. Use
gbooleans instead of ints for that.
| 0 |
static void get_connections_complete(uint8_t status, uint16_t length,
const void *param, void *user_data)
{
struct btd_adapter *adapter = user_data;
const struct mgmt_rp_get_connections *rp = param;
uint16_t i, conn_count;
if (status != MGMT_STATUS_SUCCESS) {
btd_error(adapter->dev_id,
"Failed to get connections: %s (0x%02x)",
mgmt_errstr(status), status);
return;
}
if (length < sizeof(*rp)) {
btd_error(adapter->dev_id,
"Wrong size of get connections response");
return;
}
conn_count = btohs(rp->conn_count);
DBG("Connection count: %d", conn_count);
if (conn_count * sizeof(struct mgmt_addr_info) +
sizeof(*rp) != length) {
btd_error(adapter->dev_id,
"Incorrect packet size for get connections response");
return;
}
for (i = 0; i < conn_count; i++) {
const struct mgmt_addr_info *addr = &rp->addr[i];
struct btd_device *device;
char address[18];
ba2str(&addr->bdaddr, address);
DBG("Adding existing connection to %s", address);
device = btd_adapter_get_device(adapter, &addr->bdaddr,
addr->type);
if (device)
adapter_add_connection(adapter, device, addr->type);
}
}
|
Safe
|
[
"CWE-862",
"CWE-863"
] |
bluez
|
b497b5942a8beb8f89ca1c359c54ad67ec843055
|
3.103717938731637e+38
| 45 |
adapter: Fix storing discoverable setting
discoverable setting shall only be store when changed via Discoverable
property and not when discovery client set it as that be considered
temporary just for the lifetime of the discovery.
| 0 |
do_core_note(struct magic_set *ms, unsigned char *nbuf, uint32_t type,
int swap, uint32_t namesz, uint32_t descsz,
size_t noff, size_t doff, int *flags, size_t size, int clazz)
{
#ifdef ELFCORE
int os_style = -1;
/*
* Sigh. The 2.0.36 kernel in Debian 2.1, at
* least, doesn't correctly implement name
* sections, in core dumps, as specified by
* the "Program Linking" section of "UNIX(R) System
* V Release 4 Programmer's Guide: ANSI C and
* Programming Support Tools", because my copy
* clearly says "The first 'namesz' bytes in 'name'
* contain a *null-terminated* [emphasis mine]
* character representation of the entry's owner
* or originator", but the 2.0.36 kernel code
* doesn't include the terminating null in the
* name....
*/
if ((namesz == 4 && strncmp((char *)&nbuf[noff], "CORE", 4) == 0) ||
(namesz == 5 && strcmp((char *)&nbuf[noff], "CORE") == 0)) {
os_style = OS_STYLE_SVR4;
}
if ((namesz == 8 && strcmp((char *)&nbuf[noff], "FreeBSD") == 0)) {
os_style = OS_STYLE_FREEBSD;
}
if ((namesz >= 11 && strncmp((char *)&nbuf[noff], "NetBSD-CORE", 11)
== 0)) {
os_style = OS_STYLE_NETBSD;
}
if (os_style != -1 && (*flags & FLAGS_DID_CORE_STYLE) == 0) {
if (file_printf(ms, ", %s-style", os_style_names[os_style])
== -1)
return 1;
*flags |= FLAGS_DID_CORE_STYLE;
*flags |= os_style;
}
switch (os_style) {
case OS_STYLE_NETBSD:
if (type == NT_NETBSD_CORE_PROCINFO) {
char sbuf[512];
struct NetBSD_elfcore_procinfo pi;
memset(&pi, 0, sizeof(pi));
memcpy(&pi, nbuf + doff, descsz);
if (file_printf(ms, ", from '%.31s', pid=%u, uid=%u, "
"gid=%u, nlwps=%u, lwp=%u (signal %u/code %u)",
file_printable(sbuf, sizeof(sbuf),
CAST(char *, pi.cpi_name)),
elf_getu32(swap, pi.cpi_pid),
elf_getu32(swap, pi.cpi_euid),
elf_getu32(swap, pi.cpi_egid),
elf_getu32(swap, pi.cpi_nlwps),
elf_getu32(swap, pi.cpi_siglwp),
elf_getu32(swap, pi.cpi_signo),
elf_getu32(swap, pi.cpi_sigcode)) == -1)
return 1;
*flags |= FLAGS_DID_CORE;
return 1;
}
break;
default:
if (type == NT_PRPSINFO && *flags & FLAGS_IS_CORE) {
size_t i, j;
unsigned char c;
/*
* Extract the program name. We assume
* it to be 16 characters (that's what it
* is in SunOS 5.x and Linux).
*
* Unfortunately, it's at a different offset
* in various OSes, so try multiple offsets.
* If the characters aren't all printable,
* reject it.
*/
for (i = 0; i < NOFFSETS; i++) {
unsigned char *cname, *cp;
size_t reloffset = prpsoffsets(i);
size_t noffset = doff + reloffset;
size_t k;
for (j = 0; j < 16; j++, noffset++,
reloffset++) {
/*
* Make sure we're not past
* the end of the buffer; if
* we are, just give up.
*/
if (noffset >= size)
goto tryanother;
/*
* Make sure we're not past
* the end of the contents;
* if we are, this obviously
* isn't the right offset.
*/
if (reloffset >= descsz)
goto tryanother;
c = nbuf[noffset];
if (c == '\0') {
/*
* A '\0' at the
* beginning is
* obviously wrong.
* Any other '\0'
* means we're done.
*/
if (j == 0)
goto tryanother;
else
break;
} else {
/*
* A nonprintable
* character is also
* wrong.
*/
if (!isprint(c) || isquote(c))
goto tryanother;
}
}
/*
* Well, that worked.
*/
/*
* Try next offsets, in case this match is
* in the middle of a string.
*/
for (k = i + 1 ; k < NOFFSETS; k++) {
size_t no;
int adjust = 1;
if (prpsoffsets(k) >= prpsoffsets(i))
continue;
for (no = doff + prpsoffsets(k);
no < doff + prpsoffsets(i); no++)
adjust = adjust
&& isprint(nbuf[no]);
if (adjust)
i = k;
}
cname = (unsigned char *)
&nbuf[doff + prpsoffsets(i)];
for (cp = cname; *cp && isprint(*cp); cp++)
continue;
/*
* Linux apparently appends a space at the end
* of the command line: remove it.
*/
while (cp > cname && isspace(cp[-1]))
cp--;
if (file_printf(ms, ", from '%.*s'",
(int)(cp - cname), cname) == -1)
return 1;
*flags |= FLAGS_DID_CORE;
return 1;
tryanother:
;
}
}
break;
}
#endif
return 0;
}
|
Safe
|
[
"CWE-119"
] |
file
|
35c94dc6acc418f1ad7f6241a6680e5327495793
|
6.142085166159233e+37
| 175 |
Fix always true condition (Thomas Jarosch)
| 0 |
void netdev_state_change(struct net_device *dev)
{
if (dev->flags & IFF_UP) {
call_netdevice_notifiers(NETDEV_CHANGE, dev);
rtmsg_ifinfo(RTM_NEWLINK, dev, 0);
}
}
|
Safe
|
[
"CWE-399"
] |
linux
|
6ec82562ffc6f297d0de36d65776cff8e5704867
|
4.094468737076979e+36
| 7 |
veth: Dont kfree_skb() after dev_forward_skb()
In case of congestion, netif_rx() frees the skb, so we must assume
dev_forward_skb() also consume skb.
Bug introduced by commit 445409602c092
(veth: move loopback logic to common location)
We must change dev_forward_skb() to always consume skb, and veth to not
double free it.
Bug report : http://marc.info/?l=linux-netdev&m=127310770900442&w=3
Reported-by: Martín Ferrari <martin.ferrari@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
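The veth commit above hinges on ownership ("consume") semantics: after the fix, dev_forward_skb() always consumes the skb, so the caller must never free it again on any path. A toy sketch of that contract with a hypothetical consuming function (not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

struct buf { int len; };

/* Hypothetical stand-in for a consuming forward function: it frees
 * the buffer on every path, success or congestion, so the caller
 * must never free it again. */
static int forward_consuming(struct buf *b, int congested)
{
    int rc = congested ? -1 : 0;
    free(b);                 /* consumed on both paths */
    return rc;
}
```

The caller pattern is then `rc = forward_consuming(b, ...);` with no `free(b)` afterwards on any branch; freeing after a consuming call is exactly the double free the commit removes.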
static int nfs4_wait_clnt_recover(struct nfs_client *clp)
{
int res;
might_sleep();
res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
nfs_wait_bit_killable, TASK_KILLABLE);
return res;
}
|
Safe
|
[
"CWE-703",
"CWE-189"
] |
linux
|
20e0fa98b751facf9a1101edaefbc19c82616a68
|
2.8355156593570018e+38
| 10 |
Fix length of buffer copied in __nfs4_get_acl_uncached
_copy_from_pages() used to copy data from the temporary buffer to the
user passed buffer is passed the wrong size parameter when copying
data. res.acl_len contains both the bitmap and acl lengths while
acl_len contains the acl length after adjusting for the bitmap size.
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| 0 |
pdf_filter_Tz(fz_context *ctx, pdf_processor *proc, float scale)
{
/* scale is as written in the file. It is 100 times smaller
* in the gstate. */
pdf_filter_processor *p = (pdf_filter_processor*)proc;
filter_flush(ctx, p, 0);
p->gstate->pending.text.scale = scale / 100;
}
|
Safe
|
[
"CWE-125"
] |
mupdf
|
97096297d409ec6f206298444ba00719607e8ba8
|
1.0369951974129697e+38
| 8 |
Bug 701292: Fix test for missing/empty string.
| 0 |
void parseTest(fs::path testFilePath) {
_testFilePath = testFilePath.string();
LOGV2(4333503, "### Parsing Test ###", "testFilePath"_attr = testFilePath.string());
{
std::ifstream testFile(_testFilePath);
std::ostringstream json;
json << testFile.rdbuf();
_jsonTest = fromjson(json.str());
}
// Only create the initial server description if the original avg rtt is not "NULL". If it
// is, the test case is meant to mimic creating the first ServerDescription which we will do
// above.
std::string origRttAsString = _jsonTest.getStringField("avg_rtt_ms");
if (origRttAsString.compare("NULL") != 0) {
auto serverDescription = ServerDescriptionBuilder()
.withAddress(HostAndPort("dummy"))
.withType(ServerType::kRSPrimary)
.instance();
auto origAvgRtt = Milliseconds(_jsonTest["avg_rtt_ms"].numberInt());
_serverDescription = serverDescription->cloneWithRTT(origAvgRtt);
}
_newRtt = _jsonTest["new_rtt_ms"].numberInt();
_newAvgRtt = _jsonTest["new_avg_rtt"].numberInt();
}
|
Safe
|
[
"CWE-755"
] |
mongo
|
75f7184eafa78006a698cda4c4adfb57f1290047
|
2.9052356718200554e+38
| 27 |
SERVER-50170 fix max staleness read preference parameter for server selection
| 0 |
int smb_vfs_call_sys_acl_set_permset(struct vfs_handle_struct *handle,
SMB_ACL_ENTRY_T entry,
SMB_ACL_PERMSET_T permset)
{
VFS_FIND(sys_acl_set_permset);
return handle->fns->sys_acl_set_permset(handle, entry, permset);
}
|
Safe
|
[
"CWE-22"
] |
samba
|
bd269443e311d96ef495a9db47d1b95eb83bb8f4
|
1.811474711624111e+38
| 7 |
Fix bug 7104 - "wide links" and "unix extensions" are incompatible.
Change parameter "wide links" to default to "no".
Ensure "wide links = no" if "unix extensions = yes" on a share.
Fix man pages to refect this.
Remove "within share" checks for a UNIX symlink set - even if
widelinks = no. The server will not follow that link anyway.
Correct DEBUG message in check_reduced_name() to add missing "\n"
so it's really clear when a path is being denied as it's outside
the enclosing share path.
Jeremy.
| 0 |
static Image *ReadRLEImage(const ImageInfo *image_info,ExceptionInfo *exception)
{
#define SkipLinesOp 0x01
#define SetColorOp 0x02
#define SkipPixelsOp 0x03
#define ByteDataOp 0x05
#define RunDataOp 0x06
#define EOFOp 0x07
char
magick[12];
Image
*image;
int
opcode,
operand,
status;
MagickStatusType
flags;
MagickSizeType
number_pixels;
MemoryInfo
*pixel_info;
Quantum
index;
register ssize_t
x;
register Quantum
*q;
register ssize_t
i;
register unsigned char
*p;
size_t
bits_per_pixel,
map_length,
number_colormaps,
number_planes,
number_planes_filled,
one,
offset,
pixel_info_length;
ssize_t
count,
y;
unsigned char
background_color[256],
*colormap,
pixel,
plane,
*pixels;
/*
Open image file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickCoreSignature);
if (image_info->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",
image_info->filename);
assert(exception != (ExceptionInfo *) NULL);
assert(exception->signature == MagickCoreSignature);
image=AcquireImage(image_info,exception);
status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception);
if (status == MagickFalse)
return(DestroyImageList(image));
/*
Determine if this a RLE file.
*/
count=ReadBlob(image,2,(unsigned char *) magick);
if ((count != 2) || (memcmp(magick,"\122\314",2) != 0))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
do
{
/*
Read image header.
*/
image->page.x=ReadBlobLSBShort(image);
image->page.y=ReadBlobLSBShort(image);
image->columns=ReadBlobLSBShort(image);
image->rows=ReadBlobLSBShort(image);
flags=(MagickStatusType) ReadBlobByte(image);
image->alpha_trait=flags & 0x04 ? BlendPixelTrait : UndefinedPixelTrait;
number_planes=(size_t) ReadBlobByte(image);
bits_per_pixel=(size_t) ReadBlobByte(image);
number_colormaps=(size_t) ReadBlobByte(image);
map_length=(unsigned char) ReadBlobByte(image);
if (map_length >= 64)
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
one=1;
map_length=one << map_length;
if ((number_planes == 0) || (number_planes == 2) ||
((flags & 0x04) && (number_colormaps > 254)) || (bits_per_pixel != 8) ||
(image->columns == 0))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
if (flags & 0x02)
{
/*
No background color-- initialize to black.
*/
for (i=0; i < (ssize_t) number_planes; i++)
background_color[i]=0;
(void) ReadBlobByte(image);
}
else
{
/*
Initialize background color.
*/
p=background_color;
for (i=0; i < (ssize_t) number_planes; i++)
*p++=(unsigned char) ReadBlobByte(image);
}
if ((number_planes & 0x01) == 0)
(void) ReadBlobByte(image);
if (EOFBlob(image) != MagickFalse)
{
ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile",
image->filename);
break;
}
colormap=(unsigned char *) NULL;
if (number_colormaps != 0)
{
/*
Read image colormaps.
*/
colormap=(unsigned char *) AcquireQuantumMemory(number_colormaps,
3*map_length*sizeof(*colormap));
if (colormap == (unsigned char *) NULL)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
p=colormap;
for (i=0; i < (ssize_t) number_colormaps; i++)
for (x=0; x < (ssize_t) map_length; x++)
*p++=(unsigned char) ScaleShortToQuantum(ReadBlobLSBShort(image));
}
if ((flags & 0x08) != 0)
{
char
*comment;
size_t
length;
/*
Read image comment.
*/
length=ReadBlobLSBShort(image);
if (length != 0)
{
comment=(char *) AcquireQuantumMemory(length,sizeof(*comment));
if (comment == (char *) NULL)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
count=ReadBlob(image,length-1,(unsigned char *) comment);
comment[length-1]='\0';
(void) SetImageProperty(image,"comment",comment,exception);
comment=DestroyString(comment);
if ((length & 0x01) == 0)
(void) ReadBlobByte(image);
}
}
if ((image_info->ping != MagickFalse) && (image_info->number_scenes != 0))
if (image->scene >= (image_info->scene+image_info->number_scenes-1))
break;
status=SetImageExtent(image,image->columns,image->rows,exception);
if (status == MagickFalse)
return(DestroyImageList(image));
/*
Allocate RLE pixels.
*/
if (image->alpha_trait != UndefinedPixelTrait)
number_planes++;
number_pixels=(MagickSizeType) image->columns*image->rows;
number_planes_filled=(number_planes % 2 == 0) ? number_planes :
number_planes+1;
if ((number_pixels*number_planes_filled) != (size_t) (number_pixels*
number_planes_filled))
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
pixel_info=AcquireVirtualMemory(image->columns,image->rows*
number_planes_filled*sizeof(*pixels));
if (pixel_info == (MemoryInfo *) NULL)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
pixel_info_length=image->columns*image->rows*number_planes_filled;
pixels=(unsigned char *) GetVirtualMemoryBlob(pixel_info);
if ((flags & 0x01) && !(flags & 0x02))
{
ssize_t
j;
/*
Set background color.
*/
p=pixels;
for (i=0; i < (ssize_t) number_pixels; i++)
{
if (image->alpha_trait == UndefinedPixelTrait)
for (j=0; j < (ssize_t) number_planes; j++)
*p++=background_color[j];
else
{
for (j=0; j < (ssize_t) (number_planes-1); j++)
*p++=background_color[j];
*p++=0; /* initialize matte channel */
}
}
}
/*
Read runlength-encoded image.
*/
plane=0;
x=0;
y=0;
opcode=ReadBlobByte(image);
do
{
switch (opcode & 0x3f)
{
case SkipLinesOp:
{
operand=ReadBlobByte(image);
if (opcode & 0x40)
operand=ReadBlobLSBSignedShort(image);
x=0;
y+=operand;
break;
}
case SetColorOp:
{
operand=ReadBlobByte(image);
plane=(unsigned char) operand;
if (plane == 255)
plane=(unsigned char) (number_planes-1);
x=0;
break;
}
case SkipPixelsOp:
{
operand=ReadBlobByte(image);
if (opcode & 0x40)
operand=ReadBlobLSBSignedShort(image);
x+=operand;
break;
}
case ByteDataOp:
{
operand=ReadBlobByte(image);
if (opcode & 0x40)
operand=ReadBlobLSBSignedShort(image);
offset=((image->rows-y-1)*image->columns*number_planes)+x*
number_planes+plane;
operand++;
if (offset+((size_t) operand*number_planes) > pixel_info_length)
{
if (number_colormaps != 0)
colormap=(unsigned char *) RelinquishMagickMemory(colormap);
pixel_info=RelinquishVirtualMemory(pixel_info);
ThrowReaderException(CorruptImageError,"UnableToReadImageData");
}
p=pixels+offset;
for (i=0; i < (ssize_t) operand; i++)
{
pixel=(unsigned char) ReadBlobByte(image);
if ((y < (ssize_t) image->rows) &&
((x+i) < (ssize_t) image->columns))
*p=pixel;
p+=number_planes;
}
if (operand & 0x01)
(void) ReadBlobByte(image);
x+=operand;
break;
}
case RunDataOp:
{
operand=ReadBlobByte(image);
if (opcode & 0x40)
operand=ReadBlobLSBSignedShort(image);
pixel=(unsigned char) ReadBlobByte(image);
(void) ReadBlobByte(image);
offset=((image->rows-y-1)*image->columns*number_planes)+x*
number_planes+plane;
operand++;
if (offset+((size_t) operand*number_planes) > pixel_info_length)
{
if (number_colormaps != 0)
colormap=(unsigned char *) RelinquishMagickMemory(colormap);
pixel_info=RelinquishVirtualMemory(pixel_info);
ThrowReaderException(CorruptImageError,"UnableToReadImageData");
}
p=pixels+offset;
for (i=0; i < (ssize_t) operand; i++)
{
if ((y < (ssize_t) image->rows) &&
((x+i) < (ssize_t) image->columns))
*p=pixel;
p+=number_planes;
}
x+=operand;
break;
}
default:
break;
}
opcode=ReadBlobByte(image);
} while (((opcode & 0x3f) != EOFOp) && (opcode != EOF));
if (number_colormaps != 0)
{
MagickStatusType
mask;
/*
Apply colormap affineation to image.
*/
mask=(MagickStatusType) (map_length-1);
p=pixels;
x=(ssize_t) number_planes;
if (number_colormaps == 1)
for (i=0; i < (ssize_t) number_pixels; i++)
{
ValidateColormapValue(image,*p & mask,&index,exception);
*p=colormap[(ssize_t) index];
p++;
}
else
if ((number_planes >= 3) && (number_colormaps >= 3))
for (i=0; i < (ssize_t) number_pixels; i++)
for (x=0; x < (ssize_t) number_planes; x++)
{
ValidateColormapValue(image,(size_t) (x*map_length+
(*p & mask)),&index,exception);
*p=colormap[(ssize_t) index];
p++;
}
if ((i < (ssize_t) number_pixels) || (x < (ssize_t) number_planes))
{
colormap=(unsigned char *) RelinquishMagickMemory(colormap);
pixel_info=RelinquishVirtualMemory(pixel_info);
ThrowReaderException(CorruptImageError,"UnableToReadImageData");
}
}
/*
Initialize image structure.
*/
if (number_planes >= 3)
{
/*
Convert raster image to DirectClass pixel packets.
*/
p=pixels;
for (y=0; y < (ssize_t) image->rows; y++)
{
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
SetPixelRed(image,ScaleCharToQuantum(*p++),q);
SetPixelGreen(image,ScaleCharToQuantum(*p++),q);
SetPixelBlue(image,ScaleCharToQuantum(*p++),q);
if (image->alpha_trait != UndefinedPixelTrait)
SetPixelAlpha(image,ScaleCharToQuantum(*p++),q);
q+=GetPixelChannels(image);
}
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
}
else
{
/*
Create colormap.
*/
if (number_colormaps == 0)
map_length=256;
if (AcquireImageColormap(image,map_length,exception) == MagickFalse)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
p=colormap;
if (number_colormaps == 1)
for (i=0; i < (ssize_t) image->colors; i++)
{
/*
Pseudocolor.
*/
image->colormap[i].red=(MagickRealType)
ScaleCharToQuantum((unsigned char) i);
image->colormap[i].green=(MagickRealType)
ScaleCharToQuantum((unsigned char) i);
image->colormap[i].blue=(MagickRealType)
ScaleCharToQuantum((unsigned char) i);
}
else
if (number_colormaps > 1)
for (i=0; i < (ssize_t) image->colors; i++)
{
image->colormap[i].red=(MagickRealType)
ScaleCharToQuantum(*p);
image->colormap[i].green=(MagickRealType)
ScaleCharToQuantum(*(p+map_length));
image->colormap[i].blue=(MagickRealType)
ScaleCharToQuantum(*(p+map_length*2));
p++;
}
p=pixels;
if (image->alpha_trait == UndefinedPixelTrait)
{
/*
Convert raster image to PseudoClass pixel packets.
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
SetPixelIndex(image,*p++,q);
q+=GetPixelChannels(image);
}
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType)
y,image->rows);
if (status == MagickFalse)
break;
}
}
(void) SyncImage(image,exception);
}
else
{
/*
Image has a matte channel-- promote to DirectClass.
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
ValidateColormapValue(image,(ssize_t) *p++,&index,exception);
SetPixelRed(image,ClampToQuantum(image->colormap[(ssize_t)
index].red),q);
ValidateColormapValue(image,(ssize_t) *p++,&index,exception);
SetPixelGreen(image,ClampToQuantum(image->colormap[(ssize_t)
index].green),q);
ValidateColormapValue(image,(ssize_t) *p++,&index,exception);
SetPixelBlue(image,ClampToQuantum(image->colormap[(ssize_t)
index].blue),q);
SetPixelAlpha(image,ScaleCharToQuantum(*p++),q);
q+=GetPixelChannels(image);
}
if (x < (ssize_t) image->columns)
break;
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType)
y,image->rows);
if (status == MagickFalse)
break;
}
}
image->colormap=(PixelInfo *) RelinquishMagickMemory(
image->colormap);
image->storage_class=DirectClass;
image->colors=0;
}
}
if (number_colormaps != 0)
colormap=(unsigned char *) RelinquishMagickMemory(colormap);
pixel_info=RelinquishVirtualMemory(pixel_info);
if (EOFBlob(image) != MagickFalse)
{
ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile",
image->filename);
break;
}
/*
Proceed to next image.
*/
if (image_info->number_scenes != 0)
if (image->scene >= (image_info->scene+image_info->number_scenes-1))
break;
(void) ReadBlobByte(image);
count=ReadBlob(image,2,(unsigned char *) magick);
if ((count != 0) && (memcmp(magick,"\122\314",2) == 0))
{
/*
Allocate next image structure.
*/
AcquireNextImage(image_info,image,exception);
if (GetNextImageInList(image) == (Image *) NULL)
{
image=DestroyImageList(image);
return((Image *) NULL);
}
image=SyncNextImageInList(image);
status=SetImageProgress(image,LoadImagesTag,TellBlob(image),
GetBlobSize(image));
if (status == MagickFalse)
break;
}
} while ((count != 0) && (memcmp(magick,"\122\314",2) == 0));
(void) CloseBlob(image);
return(GetFirstImageInList(image));
}
|
Vulnerable
|
[
"CWE-119",
"CWE-787"
] |
ImageMagick
|
13db820f5e24cd993ee554e99377fea02a904e18
|
1.1554676845991447e+38
| 530 |
https://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=29710
| 1 |
pdf14_put_devn_params(gx_device * pdev, gs_devn_params * pdevn_params,
gs_param_list * plist)
{
int code;
code = put_param_pdf14_spot_names(pdev,
&pdevn_params->pdf14_separations, plist);
return code;
}
|
Safe
|
[
"CWE-416"
] |
ghostpdl
|
90fd0c7ca3efc1ddff64a86f4104b13b3ac969eb
|
2.2172821529158832e+38
| 8 |
Bug 697456. Don't create a new ctx when the pdf14 device is re-enabled
This bug had yet another weird case where the user created a
file that pushed the pdf14 device twice. In that case we were
creating a new ctx and blowing away the original one without
proper clean up. To avoid this, only create a new one when we need it.
| 0 |
TEE_Result syscall_cryp_obj_get_info(unsigned long obj, TEE_ObjectInfo *info)
{
TEE_Result res;
struct tee_ta_session *sess;
struct tee_obj *o;
res = tee_ta_get_current_session(&sess);
if (res != TEE_SUCCESS)
goto exit;
res = tee_obj_get(to_user_ta_ctx(sess->ctx),
tee_svc_uref_to_vaddr(obj), &o);
if (res != TEE_SUCCESS)
goto exit;
res = tee_svc_copy_to_user(info, &o->info, sizeof(o->info));
exit:
return res;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
optee_os
|
a637243270fc1faae16de059091795c32d86e65e
|
2.766648136780979e+36
| 20 |
svc: check for allocation overflow in crypto calls
Without checking for overflow there is a risk of allocating a buffer
smaller than anticipated, which might in turn lead to a heap-based
overflow with attacker-controlled data written outside the boundaries
of the buffer.
Fixes: OP-TEE-2018-0010: "Integer overflow in crypto system calls (x2)"
Signed-off-by: Joakim Bech <joakim.bech@linaro.org>
Tested-by: Joakim Bech <joakim.bech@linaro.org> (QEMU v7, v8)
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reported-by: Riscure <inforequest@riscure.com>
Reported-by: Alyssa Milburn <a.a.milburn@vu.nl>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
| 0 |
inline void getrs(char &TRANS, int &N, float *lapA, int *IPIV, float *lapB, int &INFO) {
int one = 1;
sgetrs_(&TRANS,&N,&one,lapA,&N,IPIV,lapB,&N,&INFO);
}
|
Safe
|
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
|
1.8423615974815046e+38
| 4 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that the dimensions encoded in the file do not exceed the file size.
| 0 |
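The CImg fix above boils down to one predicate: the pixel data implied by the header (`width * height * bytes-per-pixel`) must fit in the bytes actually remaining in the file. A hedged sketch of such a check, written to avoid the very overflow it guards against (the function name is illustrative, not CImg's API):

```c
#include <assert.h>
#include <stdint.h>

/* True iff w*h*bpp <= remaining, computed without overflow: for
 * unsigned integers, floor(floor(remaining/h)/bpp) equals
 * floor(remaining/(h*bpp)). */
static int dims_fit_in_file(uint64_t w, uint64_t h, uint64_t bpp,
                            uint64_t remaining)
{
    if (w == 0 || h == 0 || bpp == 0)
        return 0;
    return w <= remaining / h / bpp;
}
```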
int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait)
{
struct nfs_server *server = NFS_SERVER(state->inode);
struct nfs_seqid *(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
struct nfs4_closedata *calldata;
struct nfs4_state_owner *sp = state->owner;
struct rpc_task *task;
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_CLOSE],
.rpc_cred = state->owner->so_cred,
};
struct rpc_task_setup task_setup_data = {
.rpc_client = server->client,
.rpc_message = &msg,
.callback_ops = &nfs4_close_ops,
.workqueue = nfsiod_workqueue,
.flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF,
};
int status = -ENOMEM;
nfs4_state_protect(server->nfs_client, NFS_SP4_MACH_CRED_CLEANUP,
&task_setup_data.rpc_client, &msg);
calldata = kzalloc(sizeof(*calldata), gfp_mask);
if (calldata == NULL)
goto out;
nfs4_init_sequence(&calldata->arg.seq_args, &calldata->res.seq_res, 1, 0);
calldata->inode = state->inode;
calldata->state = state;
calldata->arg.fh = NFS_FH(state->inode);
if (!nfs4_copy_open_stateid(&calldata->arg.stateid, state))
goto out_free_calldata;
/* Serialization for the sequence id */
alloc_seqid = server->nfs_client->cl_mvops->alloc_seqid;
calldata->arg.seqid = alloc_seqid(&state->owner->so_seqid, gfp_mask);
if (IS_ERR(calldata->arg.seqid))
goto out_free_calldata;
nfs_fattr_init(&calldata->fattr);
calldata->arg.fmode = 0;
calldata->lr.arg.ld_private = &calldata->lr.ld_private;
calldata->res.fattr = &calldata->fattr;
calldata->res.seqid = calldata->arg.seqid;
calldata->res.server = server;
calldata->res.lr_ret = -NFS4ERR_NOMATCHING_LAYOUT;
calldata->lr.roc = pnfs_roc(state->inode,
&calldata->lr.arg, &calldata->lr.res, msg.rpc_cred);
if (calldata->lr.roc) {
calldata->arg.lr_args = &calldata->lr.arg;
calldata->res.lr_res = &calldata->lr.res;
}
nfs_sb_active(calldata->inode->i_sb);
msg.rpc_argp = &calldata->arg;
msg.rpc_resp = &calldata->res;
task_setup_data.callback_data = calldata;
task = rpc_run_task(&task_setup_data);
if (IS_ERR(task))
return PTR_ERR(task);
status = 0;
if (wait)
status = rpc_wait_for_completion_task(task);
rpc_put_task(task);
return status;
out_free_calldata:
kfree(calldata);
out:
nfs4_put_open_state(state);
nfs4_put_state_owner(sp);
return status;
}
|
Safe
|
[
"CWE-787"
] |
linux
|
b4487b93545214a9db8cbf32e86411677b0cca21
|
2.4614110487419558e+38
| 70 |
nfs: Fix getxattr kernel panic and memory overflow
Move the buffer size check to decode_attr_security_label() before memcpy()
Only call memcpy() if the buffer is large enough
Fixes: aa9c2669626c ("NFS: Client implementation of Labeled-NFS")
Signed-off-by: Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
[Trond: clean up duplicate test of label->len != 0]
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
| 0 |
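The getxattr fix above is a reusable pattern: validate the destination capacity *before* memcpy(), and make the too-small case an explicit error rather than a truncated or overflowing copy. A minimal sketch under those assumptions (`copy_bounded` is illustrative, not the NFS client API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy src into dst only if it fits; otherwise copy nothing and
 * report failure so the caller can retry with a larger buffer. */
static int copy_bounded(void *dst, size_t dst_len,
                        const void *src, size_t src_len)
{
    if (src_len > dst_len)
        return -1;               /* would overflow dst */
    memcpy(dst, src, src_len);
    return 0;
}
```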
gst_matroska_demux_move_to_entry (GstMatroskaDemux * demux,
GstMatroskaIndex * entry, gboolean reset, gboolean update)
{
gint i;
GST_OBJECT_LOCK (demux);
if (update) {
/* seek (relative to matroska segment) */
/* position might be invalid; will error when streaming resumes ... */
demux->common.offset = entry->pos + demux->common.ebml_segment_start;
demux->next_cluster_offset = 0;
GST_DEBUG_OBJECT (demux,
"Seeked to offset %" G_GUINT64_FORMAT ", block %d, " "time %"
GST_TIME_FORMAT, entry->pos + demux->common.ebml_segment_start,
entry->block, GST_TIME_ARGS (entry->time));
/* update the time */
gst_matroska_read_common_reset_streams (&demux->common, entry->time, TRUE);
gst_flow_combiner_reset (demux->flowcombiner);
demux->common.segment.position = entry->time;
demux->seek_block = entry->block;
demux->seek_first = TRUE;
demux->last_stop_end = GST_CLOCK_TIME_NONE;
}
for (i = 0; i < demux->common.src->len; i++) {
GstMatroskaTrackContext *stream = g_ptr_array_index (demux->common.src, i);
if (reset) {
stream->to_offset = G_MAXINT64;
} else {
if (stream->from_offset != -1)
stream->to_offset = stream->from_offset;
}
stream->from_offset = -1;
stream->from_time = GST_CLOCK_TIME_NONE;
}
GST_OBJECT_UNLOCK (demux);
return TRUE;
}
|
Safe
|
[] |
gst-plugins-good
|
9181191511f9c0be6a89c98b311f49d66bd46dc3
|
1.3809503404034064e+37
| 44 |
matroskademux: Fix extraction of multichannel WavPack
The old code had a couple of issues that all led to potential
memory-safety bugs.
- Use a constant for the Wavpack4Header size instead of using sizeof.
It's written out into the data and not from the struct and who knows
what special alignment/padding requirements some C compilers have.
- gst_buffer_set_size() does not realloc the buffer when setting a
bigger size than allocated, it only allows growing up to the maximum
allocated size. Instead use a GstAdapter to collect all the blocks
and take out everything at once in the end.
- Check that enough data is actually available in the input and
  otherwise handle it as an error in all cases instead of silently
  ignoring it.
Among other things this fixes out of bounds writes because the code
assumed gst_buffer_set_size() can grow the buffer and simply wrote after
the end of the buffer.
Thanks to Natalie Silvanovich for reporting.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/859
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/903>
| 0 |
static BOOL update_send_switch_surface_order(
rdpContext* context,
const SWITCH_SURFACE_ORDER* switch_surface)
{
wStream* s;
size_t bm, em, inf;
BYTE orderType;
BYTE controlFlags;
int headerLength;
rdpUpdate* update;
if (!context || !switch_surface || !context->update)
return FALSE;
update = context->update;
headerLength = 1;
orderType = ORDER_TYPE_SWITCH_SURFACE;
controlFlags = ORDER_SECONDARY | (orderType << 2);
inf = update_approximate_switch_surface_order(switch_surface);
update_check_flush(context, headerLength + inf);
s = update->us;
if (!s)
return FALSE;
bm = Stream_GetPosition(s);
if (!Stream_EnsureRemainingCapacity(s, headerLength))
return FALSE;
Stream_Seek(s, headerLength);
if (!update_write_switch_surface_order(s, switch_surface))
return FALSE;
em = Stream_GetPosition(s);
Stream_SetPosition(s, bm);
Stream_Write_UINT8(s, controlFlags); /* controlFlags (1 byte) */
Stream_SetPosition(s, em);
update->numberOrders++;
return TRUE;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
FreeRDP
|
445a5a42c500ceb80f8fa7f2c11f3682538033f3
|
1.2468605714514895e+38
| 42 |
Fixed CVE-2018-8786
Thanks to Eyal Itkin from Check Point Software Technologies.
| 0 |
static noinline int should_fail_bio(struct bio *bio)
{
if (should_fail_request(&bio->bi_disk->part0, bio->bi_iter.bi_size))
return -EIO;
return 0;
}
|
Safe
|
[
"CWE-416",
"CWE-703"
] |
linux
|
54648cf1ec2d7f4b6a71767799c45676a138ca24
|
3.0022667232820494e+38
| 6 |
block: blk_init_allocated_queue() set q->fq as NULL in the fail case
We found a memory use-after-free issue in __blk_drain_queue()
on kernel 4.14. After reading the latest kernel 4.18-rc6 we
think it has the same problem.
Memory is allocated for q->fq in the blk_init_allocated_queue().
If the elevator init function returns an error, it will
run into the fail case and free q->fq.
Then __blk_drain_queue() uses the same memory after the free
of q->fq, which leads to unpredictable behavior.
The patch is to set q->fq as NULL in the fail case of
blk_init_allocated_queue().
Fixes: commit 7c94e1c157a2 ("block: introduce blk_flush_queue to drive flush machinery")
Cc: <stable@vger.kernel.org>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: xiao jin <jin.xiao@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
| 0 |
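The use-after-free described above is avoided with a small discipline: when an error path frees a structure member, also clear the pointer, so later teardown (here, __blk_drain_queue()) observes NULL rather than a dangling reference. A toy sketch of the pattern (struct and function names are illustrative, not the block layer's):

```c
#include <assert.h>
#include <stdlib.h>

struct queue { void *fq; };

/* Error-path cleanup: free the member AND null it, so any later
 * drain/teardown code can safely test q->fq against NULL. */
static void init_fail_path(struct queue *q)
{
    free(q->fq);
    q->fq = NULL;        /* the actual fix: no stale pointer left */
}
```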
void Server::handleCommand_SrpBytesA(NetworkPacket* pkt)
{
session_t peer_id = pkt->getPeerId();
RemoteClient *client = getClient(peer_id, CS_Invalid);
ClientState cstate = client->getState();
bool wantSudo = (cstate == CS_Active);
if (!((cstate == CS_HelloSent) || (cstate == CS_Active))) {
actionstream << "Server: got SRP _A packet in wrong state " << cstate <<
" from " << getPeerAddress(peer_id).serializeString() <<
". Ignoring." << std::endl;
return;
}
if (client->chosen_mech != AUTH_MECHANISM_NONE) {
actionstream << "Server: got SRP _A packet, while auth is already "
"going on with mech " << client->chosen_mech << " from " <<
getPeerAddress(peer_id).serializeString() <<
" (wantSudo=" << wantSudo << "). Ignoring." << std::endl;
if (wantSudo) {
DenySudoAccess(peer_id);
return;
}
DenyAccess(peer_id, SERVER_ACCESSDENIED_UNEXPECTED_DATA);
return;
}
std::string bytes_A;
u8 based_on;
*pkt >> bytes_A >> based_on;
infostream << "Server: TOSERVER_SRP_BYTES_A received with "
<< "based_on=" << int(based_on) << " and len_A="
<< bytes_A.length() << "." << std::endl;
AuthMechanism chosen = (based_on == 0) ?
AUTH_MECHANISM_LEGACY_PASSWORD : AUTH_MECHANISM_SRP;
if (wantSudo) {
if (!client->isSudoMechAllowed(chosen)) {
actionstream << "Server: Player \"" << client->getName() <<
"\" at " << getPeerAddress(peer_id).serializeString() <<
" tried to change password using unallowed mech " << chosen <<
"." << std::endl;
DenySudoAccess(peer_id);
return;
}
} else {
if (!client->isMechAllowed(chosen)) {
actionstream << "Server: Client tried to authenticate from " <<
getPeerAddress(peer_id).serializeString() <<
" using unallowed mech " << chosen << "." << std::endl;
DenyAccess(peer_id, SERVER_ACCESSDENIED_UNEXPECTED_DATA);
return;
}
}
client->chosen_mech = chosen;
std::string salt;
std::string verifier;
if (based_on == 0) {
generate_srp_verifier_and_salt(client->getName(), client->enc_pwd,
&verifier, &salt);
} else if (!decode_srp_verifier_and_salt(client->enc_pwd, &verifier, &salt)) {
		// Non-base64 errors should have been caught in the init handler
actionstream << "Server: User " << client->getName() <<
" tried to log in, but srp verifier field was invalid (most likely "
"invalid base64)." << std::endl;
DenyAccess(peer_id, SERVER_ACCESSDENIED_SERVER_FAIL);
return;
}
char *bytes_B = 0;
size_t len_B = 0;
client->auth_data = srp_verifier_new(SRP_SHA256, SRP_NG_2048,
client->getName().c_str(),
(const unsigned char *) salt.c_str(), salt.size(),
(const unsigned char *) verifier.c_str(), verifier.size(),
(const unsigned char *) bytes_A.c_str(), bytes_A.size(),
NULL, 0,
(unsigned char **) &bytes_B, &len_B, NULL, NULL);
if (!bytes_B) {
actionstream << "Server: User " << client->getName()
<< " tried to log in, SRP-6a safety check violated in _A handler."
<< std::endl;
if (wantSudo) {
DenySudoAccess(peer_id);
return;
}
DenyAccess(peer_id, SERVER_ACCESSDENIED_UNEXPECTED_DATA);
return;
}
NetworkPacket resp_pkt(TOCLIENT_SRP_BYTES_S_B, 0, peer_id);
resp_pkt << salt << std::string(bytes_B, len_B);
Send(&resp_pkt);
}
|
Safe
|
[
"CWE-276"
] |
minetest
|
3693b6871eba268ecc79b3f52d00d3cefe761131
|
4.180676919276597e+37
| 105 |
Prevent players accessing inventories of other players (#10341)
| 0 |
_archive_write_disk_free(struct archive *_a)
{
struct archive_write_disk *a;
int ret;
if (_a == NULL)
return (ARCHIVE_OK);
archive_check_magic(_a, ARCHIVE_WRITE_DISK_MAGIC,
ARCHIVE_STATE_ANY | ARCHIVE_STATE_FATAL, "archive_write_disk_free");
a = (struct archive_write_disk *)_a;
ret = _archive_write_disk_close(&a->archive);
archive_write_disk_set_group_lookup(&a->archive, NULL, NULL, NULL);
archive_write_disk_set_user_lookup(&a->archive, NULL, NULL, NULL);
archive_entry_free(a->entry);
archive_string_free(&a->_name_data);
archive_string_free(&a->_tmpname_data);
archive_string_free(&a->archive.error_string);
archive_string_free(&a->path_safe);
a->archive.magic = 0;
__archive_clean(&a->archive);
free(a->decmpfs_header_p);
free(a->resource_fork);
free(a->compressed_buffer);
free(a->uncompressed_buffer);
#if defined(__APPLE__) && defined(UF_COMPRESSED) && defined(HAVE_SYS_XATTR_H)\
&& defined(HAVE_ZLIB_H)
if (a->stream_valid) {
switch (deflateEnd(&a->stream)) {
case Z_OK:
break;
default:
archive_set_error(&a->archive, ARCHIVE_ERRNO_MISC,
"Failed to clean up compressor");
ret = ARCHIVE_FATAL;
break;
}
}
#endif
free(a);
return (ret);
}
|
Safe
|
[
"CWE-59",
"CWE-269"
] |
libarchive
|
b41daecb5ccb4c8e3b2c53fd6147109fc12c3043
|
2.5804100490741978e+38
| 40 |
Do not follow symlinks when processing the fixup list
Use lchmod() instead of chmod() and tell the remaining functions that the
real file to be modified is a symbolic link.
Fixes #1566
| 0 |
int UseSNIContext(const SSLPointer& ssl, BaseObjectPtr<SecureContext> context) {
SSL_CTX* ctx = context->ctx_.get();
X509* x509 = SSL_CTX_get0_certificate(ctx);
EVP_PKEY* pkey = SSL_CTX_get0_privatekey(ctx);
STACK_OF(X509)* chain;
int err = SSL_CTX_get0_chain_certs(ctx, &chain);
if (err == 1) err = SSL_use_certificate(ssl.get(), x509);
if (err == 1) err = SSL_use_PrivateKey(ssl.get(), pkey);
if (err == 1 && chain != nullptr) err = SSL_set1_chain(ssl.get(), chain);
return err;
}
|
Safe
|
[
"CWE-295"
] |
node
|
466e5415a2b7b3574ab5403acb87e89a94a980d1
|
1.9167863409719616e+38
| 12 |
crypto,tls: implement safe x509 GeneralName format
This change introduces JSON-compatible escaping rules for strings that
include X.509 GeneralName components (see RFC 5280). This non-standard
format avoids ambiguities and prevents injection attacks that could
previously lead to X.509 certificates being accepted even though they
were not valid for the target hostname.
These changes affect the format of subject alternative names and the
format of authority information access. The checkServerIdentity function
has been modified to safely handle the new format, eliminating the
possibility of injecting subject alternative names into the verification
logic.
Because each subject alternative name is only encoded as a JSON string
literal if necessary for security purposes, this change will only be
visible in rare cases.
This addresses CVE-2021-44532.
CVE-ID: CVE-2021-44532
PR-URL: https://github.com/nodejs-private/node-private/pull/300
Reviewed-By: Michael Dawson <midawson@redhat.com>
Reviewed-By: Rich Trott <rtrott@gmail.com>
| 0 |
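The escaping scheme described in the commit message above can be illustrated with a small sketch: a subject alternative name is rendered as a JSON string literal only when its raw form would be ambiguous, so one entry cannot masquerade as several. This is not Node's actual implementation; `san_needs_escaping`, `san_format`, and the trigger-character set are assumptions for illustration.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: escape a GeneralName component as a JSON string
 * literal only if the raw value could be confused with the ", "
 * separator or a quoted literal. NOT Node's real implementation. */
static int san_needs_escaping(const char *s) {
    return strpbrk(s, "\",\\") != NULL;
}

/* Writes either the raw value or a JSON-escaped literal into out
 * (out_len includes room for the NUL). Returns 0 on success. */
static int san_format(const char *s, char *out, size_t out_len) {
    size_t o = 0;
    if (!san_needs_escaping(s)) {
        if (strlen(s) + 1 > out_len) return -1;
        strcpy(out, s);
        return 0;
    }
    if (o + 1 >= out_len) return -1;
    out[o++] = '"';                      /* opening quote */
    for (const char *p = s; *p; p++) {
        if (*p == '"' || *p == '\\') {   /* backslash-escape specials */
            if (o + 2 >= out_len) return -1;
            out[o++] = '\\';
        }
        if (o + 1 >= out_len) return -1;
        out[o++] = *p;
    }
    if (o + 2 >= out_len) return -1;
    out[o++] = '"';                      /* closing quote */
    out[o] = '\0';
    return 0;
}
```

A benign name like `example.com` passes through unchanged, while an injected `evil, DNS:good` is quoted and can no longer be parsed as two entries.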
njs_generate_inc_dec_operation_prop(njs_vm_t *vm, njs_generator_t *generator,
njs_parser_node_t *node)
{
njs_int_t ret;
njs_bool_t post;
njs_index_t index, dest_index;
njs_parser_node_t *lvalue;
njs_vmcode_3addr_t *code;
njs_vmcode_prop_get_t *prop_get;
njs_vmcode_prop_set_t *prop_set;
lvalue = node->left;
if (node->dest != NULL) {
dest_index = node->dest->index;
if (dest_index != NJS_INDEX_NONE
&& dest_index != lvalue->left->index
&& dest_index != lvalue->right->index)
{
node->index = dest_index;
goto found;
}
}
dest_index = njs_generate_node_temp_index_get(vm, generator, node);
found:
post = *((njs_bool_t *) generator->context);
index = post ? njs_generate_temp_index_get(vm, generator, node)
: dest_index;
if (njs_slow_path(index == NJS_INDEX_ERROR)) {
return NJS_ERROR;
}
njs_generate_code(generator, njs_vmcode_prop_get_t, prop_get,
NJS_VMCODE_PROPERTY_GET, 3, node);
prop_get->value = index;
prop_get->object = lvalue->left->index;
prop_get->property = lvalue->right->index;
njs_generate_code(generator, njs_vmcode_3addr_t, code,
node->u.operation, 3, node);
code->dst = dest_index;
code->src1 = index;
code->src2 = index;
njs_generate_code(generator, njs_vmcode_prop_set_t, prop_set,
NJS_VMCODE_PROPERTY_SET, 3, node);
prop_set->value = index;
prop_set->object = lvalue->left->index;
prop_set->property = lvalue->right->index;
if (post) {
ret = njs_generate_index_release(vm, generator, index);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
}
njs_mp_free(vm->mem_pool, generator->context);
ret = njs_generate_children_indexes_release(vm, generator, lvalue);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
return njs_generator_stack_pop(vm, generator, NULL);
}
|
Safe
|
[
"CWE-703",
"CWE-754"
] |
njs
|
404553896792b8f5f429dc8852d15784a59d8d3e
|
4.826779926211281e+37
| 72 |
Fixed break instruction in a try-catch block.
Previously, JUMP offset for a break instruction inside a try-catch
block was not set to a correct offset during code generation
when a return instruction was present in inner try-catch block.
The fix is to update the JUMP offset appropriately.
This closes #553 issue on Github.
| 0 |
static void *u32_get(struct tcf_proto *tp, u32 handle)
{
struct tc_u_hnode *ht;
struct tc_u_common *tp_c = tp->data;
if (TC_U32_HTID(handle) == TC_U32_ROOT)
ht = rtnl_dereference(tp->root);
else
ht = u32_lookup_ht(tp_c, TC_U32_HTID(handle));
if (!ht)
return NULL;
if (TC_U32_KEY(handle) == 0)
return ht;
return u32_lookup_key(ht, handle);
}
|
Safe
|
[
"CWE-416"
] |
linux
|
3db09e762dc79584a69c10d74a6b98f89a9979f8
|
3.0113859395254025e+38
| 18 |
net/sched: cls_u32: fix netns refcount changes in u32_change()
We are now able to detect extra put_net() at the moment
they happen, instead of much later in correct code paths.
u32_init_knode() / tcf_exts_init() populates the ->exts.net
pointer, but as mentioned in tcf_exts_init(),
the refcount on netns has not been elevated yet.
The refcount is taken only once tcf_exts_get_net()
is called.
So the two u32_destroy_key() calls from u32_change()
are attempting to release an invalid reference on the netns.
syzbot report:
refcount_t: decrement hit 0; leaking memory.
WARNING: CPU: 0 PID: 21708 at lib/refcount.c:31 refcount_warn_saturate+0xbf/0x1e0 lib/refcount.c:31
Modules linked in:
CPU: 0 PID: 21708 Comm: syz-executor.5 Not tainted 5.18.0-rc2-next-20220412-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:refcount_warn_saturate+0xbf/0x1e0 lib/refcount.c:31
Code: 1d 14 b6 b2 09 31 ff 89 de e8 6d e9 89 fd 84 db 75 e0 e8 84 e5 89 fd 48 c7 c7 40 aa 26 8a c6 05 f4 b5 b2 09 01 e8 e5 81 2e 05 <0f> 0b eb c4 e8 68 e5 89 fd 0f b6 1d e3 b5 b2 09 31 ff 89 de e8 38
RSP: 0018:ffffc900051af1b0 EFLAGS: 00010286
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000040000 RSI: ffffffff8160a0c8 RDI: fffff52000a35e28
RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff81604a9e R11: 0000000000000000 R12: 1ffff92000a35e3b
R13: 00000000ffffffef R14: ffff8880211a0194 R15: ffff8880577d0a00
FS: 00007f25d183e700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f19c859c028 CR3: 0000000051009000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
__refcount_dec include/linux/refcount.h:344 [inline]
refcount_dec include/linux/refcount.h:359 [inline]
ref_tracker_free+0x535/0x6b0 lib/ref_tracker.c:118
netns_tracker_free include/net/net_namespace.h:327 [inline]
put_net_track include/net/net_namespace.h:341 [inline]
tcf_exts_put_net include/net/pkt_cls.h:255 [inline]
u32_destroy_key.isra.0+0xa7/0x2b0 net/sched/cls_u32.c:394
u32_change+0xe01/0x3140 net/sched/cls_u32.c:909
tc_new_tfilter+0x98d/0x2200 net/sched/cls_api.c:2148
rtnetlink_rcv_msg+0x80d/0xb80 net/core/rtnetlink.c:6016
netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2495
netlink_unicast_kernel net/netlink/af_netlink.c:1319 [inline]
netlink_unicast+0x543/0x7f0 net/netlink/af_netlink.c:1345
netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1921
sock_sendmsg_nosec net/socket.c:705 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:725
____sys_sendmsg+0x6e2/0x800 net/socket.c:2413
___sys_sendmsg+0xf3/0x170 net/socket.c:2467
__sys_sendmsg+0xe5/0x1b0 net/socket.c:2496
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f25d0689049
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f25d183e168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f25d079c030 RCX: 00007f25d0689049
RDX: 0000000000000000 RSI: 0000000020000340 RDI: 0000000000000005
RBP: 00007f25d06e308d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffd0b752e3f R14: 00007f25d183e300 R15: 0000000000022000
</TASK>
Fixes: 35c55fc156d8 ("cls_u32: use tcf_exts_get_net() before call_rcu()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
| 0 |
static void sd_read_block_characteristics(struct scsi_disk *sdkp)
{
unsigned char *buffer;
u16 rot;
const int vpd_len = 64;
buffer = kmalloc(vpd_len, GFP_KERNEL);
if (!buffer ||
/* Block Device Characteristics VPD */
scsi_get_vpd_page(sdkp->device, 0xb1, buffer, vpd_len))
goto out;
rot = get_unaligned_be16(&buffer[4]);
if (rot == 1)
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, sdkp->disk->queue);
out:
kfree(buffer);
}
|
Safe
|
[
"CWE-284",
"CWE-264"
] |
linux
|
0bfc96cb77224736dfa35c3c555d37b3646ef35e
|
5.480893571112751e+37
| 21 |
block: fail SCSI passthrough ioctls on partition devices
Linux allows executing the SG_IO ioctl on a partition or LVM volume, and
will pass the command to the underlying block device. This is
well-known, but it is also a large security problem when (via Unix
permissions, ACLs, SELinux or a combination thereof) a program or user
needs to be granted access only to part of the disk.
This patch lets partitions forward a small set of harmless ioctls;
others are logged with printk so that we can see which ioctls are
actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred.
Of course it was being sent to a (partition on a) hard disk, so it would
have failed with ENOTTY and the patch isn't changing anything in
practice. Still, I'm treating it specially to avoid spamming the logs.
In principle, this restriction should include programs running with
CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and
/dev/sdb, it still should not be able to read/write outside the
boundaries of /dev/sda2 independent of the capabilities. However, for
now programs with CAP_SYS_RAWIO will still be allowed to send the
ioctls. Their actions will still be logged.
This patch does not affect the non-libata IDE driver. That driver
however already tests for bd != bd->bd_contains before issuing some
ioctl; it could be restricted further to forbid these ioctls even for
programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO.
Cc: linux-scsi@vger.kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Cc: James Bottomley <JBottomley@parallels.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[ Make it also print the command name when warning - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
GF_Err gf_isom_rtp_set_time_offset(GF_ISOFile *the_file, u32 trackNumber, u32 HintDescriptionIndex, u32 TimeOffset)
{
GF_TrackBox *trak;
GF_HintSampleEntryBox *hdesc;
u32 i, count;
GF_TimeOffHintEntryBox *ent;
trak = gf_isom_get_track_from_file(the_file, trackNumber);
if (!trak || !CheckHintFormat(trak, GF_ISOM_HINT_RTP)) return GF_BAD_PARAM;
//OK, create a new HintSampleDesc
hdesc = (GF_HintSampleEntryBox *) gf_list_get(trak->Media->information->sampleTable->SampleDescription->child_boxes, HintDescriptionIndex - 1);
if (!hdesc) return GF_BAD_PARAM;
count = gf_list_count(hdesc->child_boxes);
for (i=0; i< count; i++) {
ent = (GF_TimeOffHintEntryBox *)gf_list_get(hdesc->child_boxes, i);
if (ent->type == GF_ISOM_BOX_TYPE_TSRO) {
ent->TimeOffset = TimeOffset;
return GF_OK;
}
}
//we have to create a new entry...
ent = (GF_TimeOffHintEntryBox *) gf_isom_box_new_parent(&hdesc->child_boxes, GF_ISOM_BOX_TYPE_TSRO);
if (!ent) return GF_OUT_OF_MEM;
ent->TimeOffset = TimeOffset;
return GF_OK;
}
|
Safe
|
[
"CWE-787"
] |
gpac
|
86c1566f040b2b84c72afcb6cbd444c5aff56cfe
|
2.993865989071205e+38
| 29 |
fixed #1894
| 0 |
TPMS_KDF_SCHEME_KDF2_Unmarshal(TPMS_KDF_SCHEME_KDF2 *target, BYTE **buffer, INT32 *size)
{
TPM_RC rc = TPM_RC_SUCCESS;
if (rc == TPM_RC_SUCCESS) {
rc = TPMS_SCHEME_HASH_Unmarshal(target, buffer, size);
}
return rc;
}
|
Safe
|
[
"CWE-787"
] |
libtpms
|
5cc98a62dc6f204dcf5b87c2ee83ac742a6a319b
|
2.3308516856136517e+38
| 9 |
tpm2: Restore original value if unmarshalled value was illegal
Restore the original value of the memory location where data from
a stream was unmarshalled and the unmarshalled value was found to
be illegal. The goal is to not keep illegal values in memory.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
| 0 |
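The restore-on-failure pattern described in the commit message above can be sketched as follows: keep a copy of the destination, and if the unmarshalled value turns out to be illegal, put the old bytes back so no illegal value lingers in memory. All names here are hypothetical; libtpms applies the same idea inside its real unmarshalling routines.

```c
#include <stdint.h>

typedef int rc_t;
#define RC_SUCCESS 0
#define RC_VALUE   1

/* Pretend unmarshaller: reads one byte from the stream; values >= 0x80
 * are treated as "illegal" for this sketch. On an illegal value, the
 * target is restored to what it held before the call. */
static rc_t u8_checked_unmarshal(uint8_t *target, const uint8_t **buffer,
                                 int32_t *size) {
    if (*size < 1)
        return RC_VALUE;
    uint8_t orig = *target;            /* save the original value     */
    *target = **buffer;
    (*buffer)++;
    (*size)--;
    if (*target >= 0x80) {             /* illegal: restore and report */
        *target = orig;
        return RC_VALUE;
    }
    return RC_SUCCESS;
}
```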
static inline int ip_finish_output2(struct sk_buff *skb)
{
struct dst_entry *dst = skb->dst;
struct hh_cache *hh = dst->hh;
struct net_device *dev = dst->dev;
int hh_len = LL_RESERVED_SPACE(dev);
/* Be paranoid, rather than too clever. */
if (unlikely(skb_headroom(skb) < hh_len && dev->hard_header)) {
struct sk_buff *skb2;
skb2 = skb_realloc_headroom(skb, LL_RESERVED_SPACE(dev));
if (skb2 == NULL) {
kfree_skb(skb);
return -ENOMEM;
}
if (skb->sk)
skb_set_owner_w(skb2, skb->sk);
kfree_skb(skb);
skb = skb2;
}
if (hh) {
int hh_alen;
read_lock_bh(&hh->hh_lock);
hh_alen = HH_DATA_ALIGN(hh->hh_len);
memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
read_unlock_bh(&hh->hh_lock);
skb_push(skb, hh->hh_len);
return hh->hh_output(skb);
} else if (dst->neighbour)
return dst->neighbour->output(skb);
if (net_ratelimit())
printk(KERN_DEBUG "ip_finish_output2: No header cache and no neighbour!\n");
kfree_skb(skb);
return -EINVAL;
}
|
Safe
|
[] |
linux
|
e89e9cf539a28df7d0eb1d0a545368e9920b34ac
|
1.1858201076102777e+38
| 39 |
[IPv4/IPv6]: UFO Scatter-gather approach
Attached is kernel patch for UDP Fragmentation Offload (UFO) feature.
1. This patch incorporate the review comments by Jeff Garzik.
2. Renamed USO as UFO (UDP Fragmentation Offload)
3. udp sendfile support with UFO
This patches uses scatter-gather feature of skb to generate large UDP
datagram. Below is a "how-to" on changes required in network device
driver to use the UFO interface.
UDP Fragmentation Offload (UFO) Interface:
-------------------------------------------
UFO is a feature wherein the Linux kernel network stack will offload the
IP fragmentation functionality of large UDP datagram to hardware. This
will reduce the overhead of stack in fragmenting the large UDP datagram to
MTU sized packets
1) Drivers indicate their capability of UFO using
dev->features |= NETIF_F_UFO | NETIF_F_HW_CSUM | NETIF_F_SG
NETIF_F_HW_CSUM is required for UFO over ipv6.
2) UFO packet will be submitted for transmission using driver xmit routine.
UFO packet will have a non-zero value for
"skb_shinfo(skb)->ufo_size"
skb_shinfo(skb)->ufo_size will indicate the length of data part in each IP
fragment going out of the adapter after IP fragmentation by hardware.
skb->data will contain MAC/IP/UDP header and skb_shinfo(skb)->frags[]
contains the data payload. The skb->ip_summed will be set to CHECKSUM_HW
indicating that hardware has to do checksum calculation. Hardware should
compute the UDP checksum of complete datagram and also ip header checksum of
each fragmented IP packet.
For IPV6 the UFO provides the fragment identification-id in
skb_shinfo(skb)->ip6_frag_id. The adapter should use this ID for generating
IPv6 fragments.
Signed-off-by: Ananda Raju <ananda.raju@neterion.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (forwarded)
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
| 0 |
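The feature-advertisement steps in the how-to above can be sketched with stand-in types. The `NETIF_F_*` values and `fake_net_device` below are placeholders for illustration, not the kernel's definitions.

```c
/* Illustrative feature bits mirroring the how-to in the commit message;
 * the real NETIF_F_* constants live in the kernel headers. */
#define NETIF_F_SG       (1u << 0)
#define NETIF_F_HW_CSUM  (1u << 1)
#define NETIF_F_UFO      (1u << 2)

struct fake_net_device { unsigned features; };

/* Step 1 of the how-to: a driver advertises UFO capability. */
static void driver_enable_ufo(struct fake_net_device *dev) {
    dev->features |= NETIF_F_UFO | NETIF_F_HW_CSUM | NETIF_F_SG;
}

/* Step 2: the stack hands a UFO packet (nonzero ufo_size) only to
 * drivers that opted in to both UFO and scatter-gather. */
static int can_offload_ufo(const struct fake_net_device *dev,
                           unsigned ufo_size) {
    return ufo_size != 0 &&
           (dev->features & (NETIF_F_UFO | NETIF_F_SG)) ==
               (NETIF_F_UFO | NETIF_F_SG);
}
```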
static void print_xml_row(FILE *xml_file, const char *row_name,
MYSQL_RES *tableRes, MYSQL_ROW *row,
const char *str_create)
{
uint i;
my_bool body_found MY_ATTRIBUTE((unused)) = 0;
char *create_stmt_ptr= NULL;
ulong create_stmt_len= 0;
MYSQL_FIELD *field;
ulong *lengths= mysql_fetch_lengths(tableRes);
fprintf(xml_file, "\t\t<%s", row_name);
check_io(xml_file);
mysql_field_seek(tableRes, 0);
for (i= 0; (field= mysql_fetch_field(tableRes)); i++)
{
if ((*row)[i])
{
/* For 'create' statements, dump using CDATA. */
if ((str_create) && (strcmp(str_create, field->name) == 0))
{
create_stmt_ptr= (*row)[i];
create_stmt_len= lengths[i];
body_found= 1;
}
else
{
fputc(' ', xml_file);
print_quoted_xml(xml_file, field->name, field->name_length, 1);
fputs("=\"", xml_file);
print_quoted_xml(xml_file, (*row)[i], lengths[i], 0);
fputc('"', xml_file);
check_io(xml_file);
}
}
}
if (create_stmt_len)
{
DBUG_ASSERT(body_found);
fputs(">\n", xml_file);
print_xml_cdata(xml_file, create_stmt_ptr, create_stmt_len);
fprintf(xml_file, "\t\t</%s>\n", row_name);
}
else
fputs(" />\n", xml_file);
check_io(xml_file);
}
|
Safe
|
[
"CWE-319"
] |
mysql-server
|
0002e1380d5f8c113b6bce91f2cf3f75136fd7c7
|
1.54717735935424e+38
| 49 |
BUG#25575605: SETTING --SSL-MODE=REQUIRED SENDS CREDENTIALS BEFORE VERIFYING SSL CONNECTION
MYSQL_OPT_SSL_MODE option introduced.
It is set in case of --ssl-mode=REQUIRED and permits only SSL connection.
(cherry picked from commit f91b941842d240b8a62645e507f5554e8be76aec)
| 0 |
static size_t account(struct entropy_store *r, size_t nbytes, int min,
int reserved)
{
int entropy_count, orig, have_bytes;
size_t ibytes, nfrac;
BUG_ON(r->entropy_count > r->poolinfo->poolfracbits);
/* Can we pull enough? */
retry:
entropy_count = orig = READ_ONCE(r->entropy_count);
ibytes = nbytes;
/* never pull more than available */
have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
if ((have_bytes -= reserved) < 0)
have_bytes = 0;
ibytes = min_t(size_t, ibytes, have_bytes);
if (ibytes < min)
ibytes = 0;
if (WARN_ON(entropy_count < 0)) {
pr_warn("negative entropy count: pool %s count %d\n",
r->name, entropy_count);
entropy_count = 0;
}
nfrac = ibytes << (ENTROPY_SHIFT + 3);
if ((size_t) entropy_count > nfrac)
entropy_count -= nfrac;
else
entropy_count = 0;
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;
trace_debit_entropy(r->name, 8 * ibytes);
if (ibytes && ENTROPY_BITS(r) < random_write_wakeup_bits) {
wake_up_interruptible(&random_write_wait);
kill_fasync(&fasync, SIGIO, POLL_OUT);
}
return ibytes;
}
|
Safe
|
[
"CWE-200",
"CWE-330"
] |
linux
|
f227e3ec3b5cad859ad15666874405e8c1bbc1d4
|
1.5810276221148618e+38
| 43 |
random32: update the net random state on interrupt and activity
This modifies the first 32 bits out of the 128 bits of a random CPU's
net_rand_state on interrupt or CPU activity to complicate remote
observations that could lead to guessing the network RNG's internal
state.
Note that depending on some network devices' interrupt rate moderation
or binding, this re-seeding might happen on every packet or even almost
never.
In addition, with NOHZ some CPUs might not even get timer interrupts,
leaving their local state rarely updated, while they are running
networked processes making use of the random state. For this reason, we
also perform this update in update_process_times() in order to at least
update the state when there is user or system activity, since it's the
only case we care about.
Reported-by: Amit Klein <aksecurity@gmail.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
GF_Err rvcc_box_read(GF_Box *s,GF_BitStream *bs)
{
GF_RVCConfigurationBox *ptr = (GF_RVCConfigurationBox*)s;
ISOM_DECREASE_SIZE(ptr, 2);
ptr->predefined_rvc_config = gf_bs_read_u16(bs);
if (!ptr->predefined_rvc_config) {
ISOM_DECREASE_SIZE(ptr, 2);
ptr->rvc_meta_idx = gf_bs_read_u16(bs);
}
	return GF_OK;
}
|
Safe
|
[
"CWE-787"
] |
gpac
|
388ecce75d05e11fc8496aa4857b91245007d26e
|
1.976434179317262e+38
| 11 |
fixed #1587
| 0 |
inline void SparseMatMul<TL, TR>::ComputeBlockSizes(
const typename SparseMatMul<TL, TR>::ConstMatrixMapL& left,
const typename SparseMatMul<TL, TR>::ConstMatrixMapR& right,
bool transpose_left, int num_threads, int* KR, int* NR, int* KL, int* JB,
int* IB) {
// Heuristics for calculating block sizes
// Assume two hyperthreads per core.
const int est_num_cores = std::max(1, (num_threads + 1) / 2);
// Use block of rhs with at most 128K floats per core.
const int mem = est_num_cores * 128 * 1024;
*KR = std::min(static_cast<int>(right.dimension(0)), mem / 256);
*NR = right.dimension(1);
if (*KR * *NR > mem) {
// 4096 may be enough to amortize the cost of writes.
*KR = std::min<int>(*KR, 4096);
}
// Use sizes that are multiples of K and 256.
*KR = std::max(1, *KR / K) * K;
*NR = std::max(1, *NR / 256) * 256;
if (*KR * *NR > mem) {
*NR = mem / *KR;
}
*NR = std::max(1, *NR / 256) * 256;
const int left_dim0 = transpose_left ? left.dimension(1) : left.dimension(0);
const int left_dim1 = transpose_left ? left.dimension(0) : left.dimension(1);
for (*KL = 1024; *KL > K; *KL /= 2) {
if (*KR % *KL == 0 &&
std::max<int>(1, left_dim0 / 64) * (left_dim1 / *KL) > est_num_cores) {
break;
}
}
DCHECK_EQ(*KL % K, 0);
DCHECK_GE(*KR, *KL);
if (*KR < right.dimension(0)) {
CHECK_EQ(*KR % *KL, 0);
}
*JB = std::max(1, static_cast<int>(sqrt(num_threads) / 2.0));
*IB = 8 * *JB;
DCHECK_EQ(N * sizeof(float) % 64, size_t{0});
}
|
Safe
|
[
"CWE-125"
] |
tensorflow
|
e6cf28c72ba2eb949ca950d834dd6d66bb01cfae
|
1.6796611890310026e+38
| 42 |
Validate that matrix dimension sizes in SparseMatMul are positive.
PiperOrigin-RevId: 401149683
Change-Id: Ib33eafc561a39c8741ece80b2edce6d4aae9a57d
| 0 |
static int cbs_av1_write_increment(CodedBitstreamContext *ctx, PutBitContext *pbc,
uint32_t range_min, uint32_t range_max,
const char *name, uint32_t value)
{
int len;
av_assert0(range_min <= range_max && range_max - range_min < 32);
if (value < range_min || value > range_max) {
av_log(ctx->log_ctx, AV_LOG_ERROR, "%s out of range: "
"%"PRIu32", but must be in [%"PRIu32",%"PRIu32"].\n",
name, value, range_min, range_max);
return AVERROR_INVALIDDATA;
}
if (value == range_max)
len = range_max - range_min;
else
len = value - range_min + 1;
if (put_bits_left(pbc) < len)
return AVERROR(ENOSPC);
if (ctx->trace_enable) {
char bits[33];
int i;
for (i = 0; i < len; i++) {
if (range_min + i == value)
bits[i] = '0';
else
bits[i] = '1';
}
bits[i] = 0;
ff_cbs_trace_syntax_element(ctx, put_bits_count(pbc),
name, NULL, bits, value);
}
if (len > 0)
put_bits(pbc, len, (1 << len) - 1 - (value != range_max));
return 0;
}
|
Safe
|
[
"CWE-20",
"CWE-129"
] |
FFmpeg
|
b97a4b658814b2de8b9f2a3bce491c002d34de31
|
3.3656890816514022e+38
| 40 |
cbs_av1: Fix reading of overlong uvlc codes
The specification allows 2^32-1 to be encoded as any number of zeroes
greater than 31, followed by a one. This previously failed because the
trace code would overflow the array containing the string representation
of the bits if there were more than 63 zeroes. Fix that by splitting the
trace output into batches, and at the same time move it out of the default
path.
(While this seems likely to be a specification error, libaom does support
it so we probably should as well.)
From a test case by keval shah <skeval65@gmail.com>.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
| 0 |
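The overlong-uvlc behavior the commit message above describes can be sketched outside FFmpeg. This is a minimal reader, not cbs_av1's implementation; it follows the AV1 rule that a run of more than 31 leading zero bits followed by a one decodes to 2^32-1, however long the run is.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal MSB-first bit reader over a byte buffer (illustrative only). */
typedef struct { const uint8_t *buf; size_t nbits; size_t pos; } BitReader;

static int read_bit(BitReader *br) {
    if (br->pos >= br->nbits) return -1;           /* out of data */
    int bit = (br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1;
    br->pos++;
    return bit;
}

/* Decode an AV1-style uvlc value, tolerating "overlong" encodings of
 * 2^32-1 (any number of zero bits >= 32 followed by a one). */
static int decode_uvlc(BitReader *br, uint32_t *out) {
    uint32_t leading_zeros = 0;
    for (;;) {
        int bit = read_bit(br);
        if (bit < 0) return -1;                    /* truncated stream */
        if (bit) break;
        leading_zeros++;
    }
    if (leading_zeros >= 32) {                     /* overlong => UINT32_MAX */
        *out = UINT32_MAX;
        return 0;
    }
    uint32_t value = 0;
    for (uint32_t i = 0; i < leading_zeros; i++) { /* read the suffix bits */
        int bit = read_bit(br);
        if (bit < 0) return -1;
        value = (value << 1) | (uint32_t)bit;
    }
    *out = value + (1u << leading_zeros) - 1;
    return 0;
}
```

With this shape, 40 zeroes followed by a one decodes cleanly instead of overflowing a fixed-size trace buffer.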
void saa7164_bus_dump(struct saa7164_dev *dev)
{
struct tmComResBusInfo *b = &dev->bus;
dprintk(DBGLVL_BUS, "Dumping the bus structure:\n");
dprintk(DBGLVL_BUS, " .type = %d\n", b->Type);
dprintk(DBGLVL_BUS, " .dev->bmmio = 0x%p\n", dev->bmmio);
dprintk(DBGLVL_BUS, " .m_wMaxReqSize = 0x%x\n", b->m_wMaxReqSize);
dprintk(DBGLVL_BUS, " .m_pdwSetRing = 0x%p\n", b->m_pdwSetRing);
dprintk(DBGLVL_BUS, " .m_dwSizeSetRing = 0x%x\n", b->m_dwSizeSetRing);
dprintk(DBGLVL_BUS, " .m_pdwGetRing = 0x%p\n", b->m_pdwGetRing);
dprintk(DBGLVL_BUS, " .m_dwSizeGetRing = 0x%x\n", b->m_dwSizeGetRing);
dprintk(DBGLVL_BUS, " .m_dwSetReadPos = 0x%x (0x%08x)\n",
b->m_dwSetReadPos, saa7164_readl(b->m_dwSetReadPos));
dprintk(DBGLVL_BUS, " .m_dwSetWritePos = 0x%x (0x%08x)\n",
b->m_dwSetWritePos, saa7164_readl(b->m_dwSetWritePos));
dprintk(DBGLVL_BUS, " .m_dwGetReadPos = 0x%x (0x%08x)\n",
b->m_dwGetReadPos, saa7164_readl(b->m_dwGetReadPos));
dprintk(DBGLVL_BUS, " .m_dwGetWritePos = 0x%x (0x%08x)\n",
b->m_dwGetWritePos, saa7164_readl(b->m_dwGetWritePos));
}
|
Safe
|
[
"CWE-125"
] |
media-tree
|
354dd3924a2e43806774953de536257548b5002c
|
1.1559932293728534e+38
| 26 |
[PATCH] saa7164: Bug - Double fetch PCIe access condition
Avoid a double fetch by reusing the values from the prior transfer.
Originally reported via https://bugzilla.kernel.org/show_bug.cgi?id=195559
Thanks to Pengfei Wang <wpengfeinudt@gmail.com> for reporting.
Signed-off-by: Steven Toth <stoth@kernellabs.com>
| 0 |
void OSD::dispatch_op(OpRequestRef op)
{
switch (op->get_req()->get_type()) {
case MSG_OSD_PG_CREATE:
handle_pg_create(op);
break;
case MSG_OSD_PG_NOTIFY:
handle_pg_notify(op);
break;
case MSG_OSD_PG_QUERY:
handle_pg_query(op);
break;
case MSG_OSD_PG_LOG:
handle_pg_log(op);
break;
case MSG_OSD_PG_REMOVE:
handle_pg_remove(op);
break;
case MSG_OSD_PG_INFO:
handle_pg_info(op);
break;
case MSG_OSD_PG_TRIM:
handle_pg_trim(op);
break;
case MSG_OSD_BACKFILL_RESERVE:
handle_pg_backfill_reserve(op);
break;
case MSG_OSD_RECOVERY_RESERVE:
handle_pg_recovery_reserve(op);
break;
}
}
|
Safe
|
[
"CWE-287",
"CWE-284"
] |
ceph
|
5ead97120e07054d80623dada90a5cc764c28468
|
2.1905869700669583e+38
| 33 |
auth/cephx: add authorizer challenge
Allow the accepting side of a connection to reject an initial authorizer
with a random challenge. The connecting side then has to respond with an
updated authorizer proving they are able to decrypt the service's challenge
and that the new authorizer was produced for this specific connection
instance.
The accepting side requires this challenge and response unconditionally
if the client side advertises they have the feature bit. Servers wishing
to require this improved level of authentication simply have to require
the appropriate feature.
Signed-off-by: Sage Weil <sage@redhat.com>
(cherry picked from commit f80b848d3f830eb6dba50123e04385173fa4540b)
# Conflicts:
# src/auth/Auth.h
# src/auth/cephx/CephxProtocol.cc
# src/auth/cephx/CephxProtocol.h
# src/auth/none/AuthNoneProtocol.h
# src/msg/Dispatcher.h
# src/msg/async/AsyncConnection.cc
- const_iterator
- ::decode vs decode
- AsyncConnection ctor arg noise
- get_random_bytes(), not cct->random()
| 0 |
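The challenge/response handshake described in the commit message above can be illustrated with a toy scheme: the server issues a random challenge, and the client must return a value derived from both challenges under the shared key, tying the authorizer to this connection instance. Real cephx uses proper symmetric encryption; the XOR "cipher" and all names below are stand-ins.

```c
#include <stdint.h>

/* Toy stand-in for symmetric encryption (NOT real crypto). */
static uint64_t toy_encrypt(uint64_t key, uint64_t v) { return key ^ v; }

/* Client side: prove possession of the key for this connection by
 * answering the server's challenge combined with its own. */
static uint64_t answer_challenge(uint64_t key, uint64_t server_challenge,
                                 uint64_t client_challenge) {
    return toy_encrypt(key, server_challenge + client_challenge);
}

/* Server side: accept only if the response matches what the shared key
 * predicts for this specific pair of challenges. */
static int verify_response(uint64_t key, uint64_t server_challenge,
                           uint64_t client_challenge, uint64_t response) {
    return response == toy_encrypt(key, server_challenge + client_challenge);
}
```

A replayed response fails as soon as either the key or the per-connection challenge differs.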
txBoolean fxTypedArrayGetPropertyValue(txMachine* the, txSlot* instance, txID id, txIndex index, txSlot* receiver, txSlot* value)
{
if ((!id) || fxIsCanonicalIndex(the, id)) {
txSlot* dispatch = instance->next;
txSlot* view = dispatch->next;
txSlot* buffer = view->next;
txU2 shift = dispatch->value.typedArray.dispatch->shift;
txIndex length = fxGetDataViewSize(the, view, buffer) >> shift;
if ((!id) && (index < length)) {
(*dispatch->value.typedArray.dispatch->getter)(the, buffer->value.reference->next, view->value.dataView.offset + (index << shift), value, EndianNative);
return 1;
}
value->kind = XS_UNDEFINED_KIND;
return 0;
}
return fxOrdinaryGetPropertyValue(the, instance, id, index, receiver, value);
}
|
Safe
|
[
"CWE-125"
] |
moddable
|
135aa9a4a6a9b49b60aa730ebc3bcc6247d75c45
|
4.001974552279201e+36
| 17 |
XS: #896
| 0 |
data_available(ftpbuf_t *ftp, php_socket_t s)
{
int n;
n = php_pollfd_for_ms(s, PHP_POLLREADABLE, 1000);
if (n < 1) {
#if !defined(PHP_WIN32) && !(defined(NETWARE) && defined(USE_WINSOCK))
if (n == 0) {
errno = ETIMEDOUT;
}
#endif
return 0;
}
return 1;
}
|
Safe
|
[
"CWE-189"
] |
php-src
|
ac2832935435556dc593784cd0087b5e576bbe4d
|
1.4657878367559269e+37
| 16 |
Fix bug #69545 - avoid overflow when reading list
| 0 |
static int rose_parse_ccitt(unsigned char *p, struct rose_facilities_struct *facilities, int len)
{
unsigned char l, n = 0;
char callsign[11];
do {
switch (*p & 0xC0) {
case 0x00:
if (len < 2)
return -1;
p += 2;
n += 2;
len -= 2;
break;
case 0x40:
if (len < 3)
return -1;
p += 3;
n += 3;
len -= 3;
break;
case 0x80:
if (len < 4)
return -1;
p += 4;
n += 4;
len -= 4;
break;
case 0xC0:
if (len < 2)
return -1;
l = p[1];
/* Prevent overflows*/
if (l < 10 || l > 20)
return -1;
if (*p == FAC_CCITT_DEST_NSAP) {
memcpy(&facilities->source_addr, p + 7, ROSE_ADDR_LEN);
memcpy(callsign, p + 12, l - 10);
callsign[l - 10] = '\0';
asc2ax(&facilities->source_call, callsign);
}
if (*p == FAC_CCITT_SRC_NSAP) {
memcpy(&facilities->dest_addr, p + 7, ROSE_ADDR_LEN);
memcpy(callsign, p + 12, l - 10);
callsign[l - 10] = '\0';
asc2ax(&facilities->dest_call, callsign);
}
p += l + 2;
n += l + 2;
len -= l + 2;
break;
}
} while (*p != 0x00 && len > 0);
return n;
}
|
Safe
|
[
"CWE-20"
] |
linux
|
e0bccd315db0c2f919e7fcf9cb60db21d9986f52
|
3.2385376197071438e+38
| 61 |
rose: Add length checks to CALL_REQUEST parsing
Define some constant offsets for CALL_REQUEST based on the description
at <http://www.techfest.com/networking/wan/x25plp.htm> and the
definition of ROSE as using 10-digit (5-byte) addresses. Use them
consistently. Validate all implicit and explicit facilities lengths.
Validate the address length byte rather than either trusting or
assuming its value.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
int agpopdisc(Agraph_t * g, Agcbdisc_t * cbd)
{
Agcbstack_t *stack_ent;
stack_ent = g->clos->cb;
if (stack_ent) {
if (stack_ent->f == cbd)
g->clos->cb = stack_ent->prev;
else {
while (stack_ent && (stack_ent->prev->f != cbd))
stack_ent = stack_ent->prev;
if (stack_ent && stack_ent->prev)
stack_ent->prev = stack_ent->prev->prev;
}
if (stack_ent) {
agfree(g, stack_ent);
return SUCCESS;
}
}
return FAILURE;
}
|
Safe
|
[
"CWE-476"
] |
graphviz
|
839085f8026afd6f6920a0c31ad2a9d880d97932
|
1.5913830056500603e+38
| 21 |
attempted fix for null pointer dereference on malformed input
| 0 |
ews_client_create_autodiscover_xml (const gchar *email)
{
xmlDoc *doc;
xmlNode *node;
xmlNs *ns;
doc = xmlNewDoc ((xmlChar *) "1.0");
node = xmlNewDocNode (doc, NULL, (xmlChar *) "Autodiscover", NULL);
xmlDocSetRootElement (doc, node);
ns = xmlNewNs (node,
(xmlChar *) "http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006",
NULL);
node = xmlNewChild (node, ns, (xmlChar *) "Request", NULL);
xmlNewChild (node, ns, (xmlChar *) "EMailAddress", (xmlChar *) email);
xmlNewChild (node,
ns,
(xmlChar *) "AcceptableResponseSchema",
(xmlChar *) "http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a");
return doc;
}
|
Safe
|
[
"CWE-310"
] |
gnome-online-accounts
|
edde7c63326242a60a075341d3fea0be0bc4d80e
|
2.6725483855197756e+38
| 23 |
Guard against invalid SSL certificates
None of the branded providers (eg., Google, Facebook and Windows Live)
should ever have an invalid certificate. So set "ssl-strict" on the
SoupSession object being used by GoaWebView.
Providers like ownCloud and Exchange might have to deal with
certificates that are not up to the mark. eg., self-signed
certificates. For those, show a warning when the account is being
created, and only proceed if the user decides to ignore it. In any
case, save the status of the certificate that was used to create the
account. So an account created with a valid certificate will never
work with an invalid one, and one created with an invalid certificate
will not throw any further warnings.
Fixes: CVE-2013-0240
| 0 |
char *GetPostScriptFontName(char *dir, int mult) {
unichar_t *ret;
char *u_dir;
char *temp;
u_dir = def2utf8_copy(dir);
ret = FVOpenFont(_("Open Font"), u_dir,mult);
temp = u2def_copy(ret);
free(ret);
return( temp );
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
fontforge
|
626f751752875a0ddd74b9e217b6f4828713573c
|
9.534738889075292e+37
| 12 |
Warn users before discarding their unsaved scripts (#3852)
* Warn users before discarding their unsaved scripts
This closes #3846.
| 0 |
win_vert_neighbor(tabpage_T *tp, win_T *wp, int up, long count)
{
frame_T *fr;
frame_T *nfr;
frame_T *foundfr;
foundfr = wp->w_frame;
while (count--)
{
/*
* First go upwards in the tree of frames until we find a upwards or
* downwards neighbor.
*/
fr = foundfr;
for (;;)
{
if (fr == tp->tp_topframe)
goto end;
if (up)
nfr = fr->fr_prev;
else
nfr = fr->fr_next;
if (fr->fr_parent->fr_layout == FR_COL && nfr != NULL)
break;
fr = fr->fr_parent;
}
/*
* Now go downwards to find the bottom or top frame in it.
*/
for (;;)
{
if (nfr->fr_layout == FR_LEAF)
{
foundfr = nfr;
break;
}
fr = nfr->fr_child;
if (nfr->fr_layout == FR_ROW)
{
/* Find the frame at the cursor row. */
while (fr->fr_next != NULL
&& frame2win(fr)->w_wincol + fr->fr_width
<= wp->w_wincol + wp->w_wcol)
fr = fr->fr_next;
}
if (nfr->fr_layout == FR_COL && up)
while (fr->fr_next != NULL)
fr = fr->fr_next;
nfr = fr;
}
}
end:
return foundfr != NULL ? foundfr->fr_win : NULL;
}
|
Safe
|
[
"CWE-416"
] |
vim
|
ec66c41d84e574baf8009dbc0bd088d2bc5b2421
|
6.623710653183147e+37
| 55 |
patch 8.1.2136: using freed memory with autocmd from fuzzer
Problem: using freed memory with autocmd from fuzzer. (Dhiraj Mishra,
Dominique Pelle)
Solution: Avoid using "wp" after autocommands. (closes #5041)
| 0 |
njs_string_prototype_split(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs,
njs_index_t unused)
{
size_t size;
uint32_t limit;
njs_int_t ret;
njs_utf8_t utf8;
njs_bool_t undefined;
njs_value_t *this, *separator, *value;
njs_value_t separator_lvalue, limit_lvalue, splitter;
njs_array_t *array;
const u_char *p, *start, *next, *last, *end;
njs_string_prop_t string, split;
njs_value_t arguments[3];
static const njs_value_t split_key =
njs_wellknown_symbol(NJS_SYMBOL_SPLIT);
this = njs_argument(args, 0);
if (njs_slow_path(njs_is_null_or_undefined(this))) {
njs_type_error(vm, "cannot convert \"%s\"to object",
njs_type_string(this->type));
return NJS_ERROR;
}
separator = njs_lvalue_arg(&separator_lvalue, args, nargs, 1);
value = njs_lvalue_arg(&limit_lvalue, args, nargs, 2);
if (!njs_is_null_or_undefined(separator)) {
ret = njs_value_method(vm, separator, njs_value_arg(&split_key),
&splitter);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
if (njs_is_defined(&splitter)) {
arguments[0] = *this;
arguments[1] = *value;
return njs_function_call(vm, njs_function(&splitter), separator,
arguments, 2, &vm->retval);
}
}
ret = njs_value_to_string(vm, this, this);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
array = njs_array_alloc(vm, 0, 0, NJS_ARRAY_SPARE);
if (njs_slow_path(array == NULL)) {
return NJS_ERROR;
}
limit = UINT32_MAX;
if (njs_is_defined(value)) {
ret = njs_value_to_uint32(vm, value, &limit);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
}
undefined = njs_is_undefined(separator);
ret = njs_value_to_string(vm, separator, separator);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
if (njs_slow_path(limit == 0)) {
goto done;
}
if (njs_slow_path(undefined)) {
goto single;
}
(void) njs_string_prop(&string, this);
(void) njs_string_prop(&split, separator);
if (njs_slow_path(string.size == 0)) {
if (split.size != 0) {
goto single;
}
goto done;
}
utf8 = NJS_STRING_BYTE;
if (string.length != 0) {
utf8 = NJS_STRING_ASCII;
if (string.length != string.size) {
utf8 = NJS_STRING_UTF8;
}
}
start = string.start;
end = string.start + string.size;
last = end - split.size;
do {
for (p = start; p <= last; p++) {
if (memcmp(p, split.start, split.size) == 0) {
goto found;
}
}
p = end;
found:
next = p + split.size;
/* Empty split string. */
if (p == next) {
p = (utf8 != NJS_STRING_BYTE) ? njs_utf8_next(p, end)
: p + 1;
next = p;
}
size = p - start;
ret = njs_string_split_part_add(vm, array, utf8, start, size);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
start = next;
limit--;
} while (limit != 0 && p < end);
goto done;
single:
value = njs_array_push(vm, array);
if (njs_slow_path(value == NULL)) {
return NJS_ERROR;
}
*value = *this;
done:
njs_set_array(&vm->retval, array);
return NJS_OK;
}
|
Safe
|
[] |
njs
|
36f04a3178fcb6da8513cc3dbf35215c2a581b3f
|
3.39004142693243e+38
| 155 |
Fixed String.prototype.replace() with byte strings.
This closes #522 issue on Github.
| 0 |
R_API R_DEPRECATE RAnalVar *r_anal_get_used_function_var(RAnal *anal, ut64 addr) {
RList *fcns = r_anal_get_functions_in (anal, addr);
if (!fcns) {
return NULL;
}
RAnalVar *var = NULL;
RListIter *it;
RAnalFunction *fcn;
r_list_foreach (fcns, it, fcn) {
RPVector *used_vars = r_anal_function_get_vars_used_at (fcn, addr);
if (used_vars && !r_pvector_empty (used_vars)) {
var = r_pvector_at (used_vars, 0);
break;
}
}
r_list_free (fcns);
return var;
}
|
Safe
|
[
"CWE-416"
] |
radare2
|
a7ce29647fcb38386d7439696375e16e093d6acb
|
2.5335503871853982e+38
| 18 |
Fix UAF in aaaa on arm/thumb switching ##crash
* Reported by @peacock-doris via huntr.dev
* Reproducer tests_65185
* This is a logic fix, but not the fully safe as changes in the code
can result on UAF again, to properly protect r2 from crashing we
need to break the ABI and add refcounting to RRegItem, which can't
happen in 5.6.x because of abi-compat rules
| 0 |
static void jas_icccurv_dump(jas_iccattrval_t *attrval, FILE *out)
{
int i;
jas_icccurv_t *curv = &attrval->data.curv;
	fprintf(out, "number of entries = %d\n", curv->numents);
if (curv->numents == 1) {
fprintf(out, "gamma = %f\n", curv->ents[0] / 256.0);
} else {
for (i = 0; i < JAS_CAST(int, curv->numents); ++i) {
if (i < 3 || i >= JAS_CAST(int, curv->numents) - 3) {
fprintf(out, "entry[%d] = %f\n", i, curv->ents[i] / 65535.0);
}
}
}
}
|
Safe
|
[
"CWE-189"
] |
jasper
|
3c55b399c36ef46befcb21e4ebc4799367f89684
|
1.5558442598559434e+36
| 15 |
At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems.
| 0 |
lyp_check_import(struct lys_module *module, const char *value, struct lys_import *imp)
{
int i;
struct lys_module *dup = NULL;
struct ly_ctx *ctx = module->ctx;
/* check for importing a single module in multiple revisions */
for (i = 0; i < module->imp_size; i++) {
if (!module->imp[i].module) {
/* skip the not yet filled records */
continue;
}
if (ly_strequal(module->imp[i].module->name, value, 1)) {
/* check revisions, including multiple revisions of a single module is error */
if (imp->rev[0] && (!module->imp[i].module->rev_size || strcmp(module->imp[i].module->rev[0].date, imp->rev))) {
/* the already imported module has
* - no revision, but here we require some
* - different revision than the one required here */
LOGVAL(ctx, LYE_INARG, LY_VLOG_NONE, NULL, value, "import");
LOGVAL(ctx, LYE_SPEC, LY_VLOG_NONE, NULL, "Importing multiple revisions of module \"%s\".", value);
return -1;
} else if (!imp->rev[0]) {
/* no revision, remember the duplication, but check revisions after loading the module
* because the current revision can be the same (then it is ok) or it can differ (then it
             * is an error) */
dup = module->imp[i].module;
break;
}
            /* there is duplication, but since prefixes differ (checked in caller of this function),
* it is ok */
imp->module = module->imp[i].module;
return 0;
}
}
/* circular import check */
if (lyp_check_circmod(module, value, 1)) {
return -1;
}
/* load module - in specific situations it tries to get the module from the context */
imp->module = (struct lys_module *)ly_ctx_load_sub_module(module->ctx, NULL, value, imp->rev[0] ? imp->rev : NULL,
module->ctx->models.flags & LY_CTX_ALLIMPLEMENTED ? 1 : 0,
NULL);
/* check the result */
if (!imp->module) {
LOGERR(ctx, LY_EVALID, "Importing \"%s\" module into \"%s\" failed.", value, module->name);
return -1;
}
if (imp->rev[0] && imp->module->rev_size && strcmp(imp->rev, imp->module->rev[0].date)) {
LOGERR(ctx, LY_EVALID, "\"%s\" import of module \"%s\" in revision \"%s\" not found.",
module->name, value, imp->rev);
return -1;
}
if (dup) {
/* check the revisions */
if ((dup != imp->module) ||
(dup->rev_size != imp->module->rev_size && (!dup->rev_size || imp->module->rev_size)) ||
(dup->rev_size && strcmp(dup->rev[0].date, imp->module->rev[0].date))) {
/* - modules are not the same
* - one of modules has no revision (except they both has no revision)
* - revisions of the modules are not the same */
LOGVAL(ctx, LYE_INARG, LY_VLOG_NONE, NULL, value, "import");
LOGVAL(ctx, LYE_SPEC, LY_VLOG_NONE, NULL, "Importing multiple revisions of module \"%s\".", value);
return -1;
} else {
LOGWRN(ctx, "Module \"%s\" is imported by \"%s\" multiple times with different prefixes.", dup->name, module->name);
}
}
return 0;
}
|
Safe
|
[
"CWE-787"
] |
libyang
|
f6d684ade99dd37b21babaa8a856f64faa1e2e0d
|
1.3640022948275861e+38
| 76 |
parser BUGFIX long identity name buffer overflow
STRING_OVERFLOW (CWE-120)
| 0 |
void X509V3_conf_free(CONF_VALUE *conf)
{
if (!conf)
return;
OPENSSL_free(conf->name);
OPENSSL_free(conf->value);
OPENSSL_free(conf->section);
OPENSSL_free(conf);
}
|
Safe
|
[
"CWE-125"
] |
openssl
|
bb4d2ed4091408404e18b3326e3df67848ef63d0
|
1.223885888610276e+38
| 9 |
Fix append_ia5 function to not assume NUL terminated strings
ASN.1 strings may not be NUL terminated. Don't assume they are.
CVE-2021-3712
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
Reviewed-by: Paul Dale <pauli@openssl.org>
| 0 |
void WebContents::InitZoomController(content::WebContents* web_contents,
const gin_helper::Dictionary& options) {
WebContentsZoomController::CreateForWebContents(web_contents);
zoom_controller_ = WebContentsZoomController::FromWebContents(web_contents);
double zoom_factor;
if (options.Get(options::kZoomFactor, &zoom_factor))
zoom_controller_->SetDefaultZoomFactor(zoom_factor);
}
|
Safe
|
[] |
electron
|
e9fa834757f41c0b9fe44a4dffe3d7d437f52d34
|
1.6328794935639432e+38
| 8 |
fix: ensure ElectronBrowser mojo service is only bound to appropriate render frames (#33344)
* fix: ensure ElectronBrowser mojo service is only bound to authorized render frames
Notes: no-notes
* refactor: extract electron API IPC to its own mojo interface
* fix: just check main frame not primary main frame
Co-authored-by: Samuel Attard <samuel.r.attard@gmail.com>
Co-authored-by: Samuel Attard <sattard@salesforce.com>
| 0 |
perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
{
struct perf_event_context *ctx;
retry:
/*
* One of the few rules of preemptible RCU is that one cannot do
* rcu_read_unlock() while holding a scheduler (or nested) lock when
* part of the read side critical section was irqs-enabled -- see
* rcu_read_unlock_special().
*
* Since ctx->lock nests under rq->lock we must ensure the entire read
* side critical section has interrupts disabled.
*/
local_irq_save(*flags);
rcu_read_lock();
ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
if (ctx) {
/*
* If this context is a clone of another, it might
* get swapped for another underneath us by
* perf_event_task_sched_out, though the
* rcu_read_lock() protects us from any context
* getting freed. Lock the context and check if it
* got swapped before we could get the lock, and retry
* if so. If we locked the right context, then it
* can't get swapped on us any more.
*/
raw_spin_lock(&ctx->lock);
if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
raw_spin_unlock(&ctx->lock);
rcu_read_unlock();
local_irq_restore(*flags);
goto retry;
}
if (!atomic_inc_not_zero(&ctx->refcount)) {
raw_spin_unlock(&ctx->lock);
ctx = NULL;
}
}
rcu_read_unlock();
if (!ctx)
local_irq_restore(*flags);
return ctx;
}
|
Safe
|
[
"CWE-416",
"CWE-362"
] |
linux
|
12ca6ad2e3a896256f086497a7c7406a547ee373
|
3.348261583503143e+37
| 46 |
perf: Fix race in swevent hash
There's a race on CPU unplug where we free the swevent hash array
while it can still have events on. This will result in a
use-after-free which is BAD.
Simply do not free the hash array on unplug. This leaves the thing
around and no use-after-free takes place.
When the last swevent dies, we do a for_each_possible_cpu() iteration
anyway to clean these up, at which time we'll free it, so no leakage
will occur.
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
| 0 |
**/
CImg<T>& load_other(const char *const filename) {
if (!filename)
throw CImgArgumentException(_cimg_instance
"load_other(): Specified filename is (null).",
cimg_instance);
const unsigned int omode = cimg::exception_mode();
cimg::exception_mode(0);
try { load_magick(filename); }
catch (CImgException&) {
try { load_imagemagick_external(filename); }
catch (CImgException&) {
try { load_graphicsmagick_external(filename); }
catch (CImgException&) {
try { load_cimg(filename); }
catch (CImgException&) {
try {
std::fclose(cimg::fopen(filename,"rb"));
} catch (CImgException&) {
cimg::exception_mode(omode);
throw CImgIOException(_cimg_instance
"load_other(): Failed to open file '%s'.",
cimg_instance,
filename);
}
cimg::exception_mode(omode);
throw CImgIOException(_cimg_instance
"load_other(): Failed to recognize format of file '%s'.",
cimg_instance,
filename);
}
}
}
}
cimg::exception_mode(omode);
return *this;
|
Safe
|
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
|
3.29308025916989e+38
| 37 |
Fix other issues in 'CImg<T>::load_bmp()'.
| 0 |
int ctrl_x_mode_thesaurus(void) { return ctrl_x_mode == CTRL_X_THESAURUS; }
|
Safe
|
[
"CWE-125"
] |
vim
|
f12129f1714f7d2301935bb21d896609bdac221c
|
3.1014730999093988e+38
| 1 |
patch 9.0.0020: with some completion reading past end of string
Problem: With some completion reading past end of string.
Solution: Check the length of the string.
| 0 |
asmlinkage void do_notify_resume(struct pt_regs *regs, unsigned int save_r0,
unsigned long thread_info_flags)
{
/* deal with pending signal delivery */
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(regs, save_r0);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
}
}
|
Vulnerable
|
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
|
3.3653004754566857e+38
| 12 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
| 1 |
ExceptHandler(expr_ty type, identifier name, asdl_seq * body, int lineno, int
col_offset, int end_lineno, int end_col_offset, PyArena *arena)
{
excepthandler_ty p;
p = (excepthandler_ty)PyArena_Malloc(arena, sizeof(*p));
if (!p)
return NULL;
p->kind = ExceptHandler_kind;
p->v.ExceptHandler.type = type;
p->v.ExceptHandler.name = name;
p->v.ExceptHandler.body = body;
p->lineno = lineno;
p->col_offset = col_offset;
p->end_lineno = end_lineno;
p->end_col_offset = end_col_offset;
return p;
}
|
Safe
|
[
"CWE-125"
] |
cpython
|
dcfcd146f8e6fc5c2fc16a4c192a0c5f5ca8c53c
|
3.3498173904380433e+38
| 17 |
bpo-35766: Merge typed_ast back into CPython (GH-11645)
| 0 |
void perf_event_update_userpage(struct perf_event *event)
{
struct perf_event_mmap_page *userpg;
struct perf_buffer *rb;
u64 enabled, running, now;
rcu_read_lock();
rb = rcu_dereference(event->rb);
if (!rb)
goto unlock;
/*
* compute total_time_enabled, total_time_running
* based on snapshot values taken when the event
* was last scheduled in.
*
* we cannot simply called update_context_time()
* because of locking issue as we can be called in
* NMI context
*/
calc_timer_values(event, &now, &enabled, &running);
userpg = rb->user_page;
/*
* Disable preemption to guarantee consistent time stamps are stored to
* the user page.
*/
preempt_disable();
++userpg->lock;
barrier();
userpg->index = perf_event_index(event);
userpg->offset = perf_event_count(event);
if (userpg->index)
userpg->offset -= local64_read(&event->hw.prev_count);
userpg->time_enabled = enabled +
atomic64_read(&event->child_total_time_enabled);
userpg->time_running = running +
atomic64_read(&event->child_total_time_running);
arch_perf_update_userpage(event, userpg, now);
barrier();
++userpg->lock;
preempt_enable();
unlock:
rcu_read_unlock();
}
|
Safe
|
[
"CWE-401"
] |
tip
|
7bdb157cdebbf95a1cd94ed2e01b338714075d00
|
1.5321513694825396e+38
| 49 |
perf/core: Fix a memory leak in perf_event_parse_addr_filter()
As shown through runtime testing, the "filename" allocation is not
always freed in perf_event_parse_addr_filter().
There are three possible ways that this could happen:
- It could be allocated twice on subsequent iterations through the loop,
- or leaked on the success path,
- or on the failure path.
Clean up the code flow to make it obvious that 'filename' is always
freed in the reallocation path and in the two return paths as well.
We rely on the fact that kfree(NULL) is NOP and filename is initialized
with NULL.
This fixes the leak. No other side effects expected.
[ Dan Carpenter: cleaned up the code flow & added a changelog. ]
[ Ingo Molnar: updated the changelog some more. ]
Fixes: 375637bc5249 ("perf/core: Introduce address range filtering")
Signed-off-by: "kiyin(尹亮)" <kiyin@tencent.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Cc: Anthony Liguori <aliguori@amazon.com>
--
kernel/events/core.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
| 0 |
kadm5_create_principal_3(void *server_handle,
kadm5_principal_ent_t entry, long mask,
int n_ks_tuple, krb5_key_salt_tuple *ks_tuple,
char *password)
{
krb5_db_entry *kdb;
osa_princ_ent_rec adb;
kadm5_policy_ent_rec polent;
krb5_boolean have_polent = FALSE;
krb5_int32 now;
krb5_tl_data *tl_data_tail;
unsigned int ret;
kadm5_server_handle_t handle = server_handle;
krb5_keyblock *act_mkey;
krb5_kvno act_kvno;
int new_n_ks_tuple = 0;
krb5_key_salt_tuple *new_ks_tuple = NULL;
CHECK_HANDLE(server_handle);
krb5_clear_error_message(handle->context);
check_1_6_dummy(entry, mask, n_ks_tuple, ks_tuple, &password);
/*
* Argument sanity checking, and opening up the DB
*/
if (entry == NULL)
return EINVAL;
if(!(mask & KADM5_PRINCIPAL) || (mask & KADM5_MOD_NAME) ||
(mask & KADM5_MOD_TIME) || (mask & KADM5_LAST_PWD_CHANGE) ||
(mask & KADM5_MKVNO) || (mask & KADM5_AUX_ATTRIBUTES) ||
(mask & KADM5_LAST_SUCCESS) || (mask & KADM5_LAST_FAILED) ||
(mask & KADM5_FAIL_AUTH_COUNT))
return KADM5_BAD_MASK;
if ((mask & KADM5_KEY_DATA) && entry->n_key_data != 0)
return KADM5_BAD_MASK;
if((mask & KADM5_POLICY) && entry->policy == NULL)
return KADM5_BAD_MASK;
if((mask & KADM5_POLICY) && (mask & KADM5_POLICY_CLR))
return KADM5_BAD_MASK;
if((mask & ~ALL_PRINC_MASK))
return KADM5_BAD_MASK;
/*
* Check to see if the principal exists
*/
ret = kdb_get_entry(handle, entry->principal, &kdb, &adb);
switch(ret) {
case KADM5_UNK_PRINC:
break;
case 0:
kdb_free_entry(handle, kdb, &adb);
return KADM5_DUP;
default:
return ret;
}
kdb = krb5_db_alloc(handle->context, NULL, sizeof(*kdb));
if (kdb == NULL)
return ENOMEM;
memset(kdb, 0, sizeof(*kdb));
memset(&adb, 0, sizeof(osa_princ_ent_rec));
/*
* If a policy was specified, load it.
* If we can not find the one specified return an error
*/
if ((mask & KADM5_POLICY)) {
ret = get_policy(handle, entry->policy, &polent, &have_polent);
if (ret)
goto cleanup;
}
if (password) {
ret = passwd_check(handle, password, have_polent ? &polent : NULL,
entry->principal);
if (ret)
goto cleanup;
}
/*
* Start populating the various DB fields, using the
* "defaults" for fields that were not specified by the
* mask.
*/
if ((ret = krb5_timeofday(handle->context, &now)))
goto cleanup;
kdb->magic = KRB5_KDB_MAGIC_NUMBER;
kdb->len = KRB5_KDB_V1_BASE_LENGTH; /* gag me with a chainsaw */
if ((mask & KADM5_ATTRIBUTES))
kdb->attributes = entry->attributes;
else
kdb->attributes = handle->params.flags;
if ((mask & KADM5_MAX_LIFE))
kdb->max_life = entry->max_life;
else
kdb->max_life = handle->params.max_life;
if (mask & KADM5_MAX_RLIFE)
kdb->max_renewable_life = entry->max_renewable_life;
else
kdb->max_renewable_life = handle->params.max_rlife;
if ((mask & KADM5_PRINC_EXPIRE_TIME))
kdb->expiration = entry->princ_expire_time;
else
kdb->expiration = handle->params.expiration;
kdb->pw_expiration = 0;
if (have_polent) {
if(polent.pw_max_life)
kdb->pw_expiration = now + polent.pw_max_life;
else
kdb->pw_expiration = 0;
}
if ((mask & KADM5_PW_EXPIRATION))
kdb->pw_expiration = entry->pw_expiration;
kdb->last_success = 0;
kdb->last_failed = 0;
kdb->fail_auth_count = 0;
/* this is kind of gross, but in order to free the tl data, I need
to free the entire kdb entry, and that will try to free the
principal. */
if ((ret = kadm5_copy_principal(handle->context,
entry->principal, &(kdb->princ))))
goto cleanup;
if ((ret = krb5_dbe_update_last_pwd_change(handle->context, kdb, now)))
goto cleanup;
if (mask & KADM5_TL_DATA) {
/* splice entry->tl_data onto the front of kdb->tl_data */
for (tl_data_tail = entry->tl_data; tl_data_tail;
tl_data_tail = tl_data_tail->tl_data_next)
{
ret = krb5_dbe_update_tl_data(handle->context, kdb, tl_data_tail);
if( ret )
goto cleanup;
}
}
/*
     * We need to have set up the TL data, so we have strings, so we can
* check enctype policy, which is why we check/initialize ks_tuple
* this late.
*/
ret = apply_keysalt_policy(handle, entry->policy, n_ks_tuple, ks_tuple,
&new_n_ks_tuple, &new_ks_tuple);
if (ret)
goto cleanup;
/* initialize the keys */
ret = kdb_get_active_mkey(handle, &act_kvno, &act_mkey);
if (ret)
goto cleanup;
if (mask & KADM5_KEY_DATA) {
/* The client requested no keys for this principal. */
assert(entry->n_key_data == 0);
} else if (password) {
ret = krb5_dbe_cpw(handle->context, act_mkey, new_ks_tuple,
new_n_ks_tuple, password,
(mask & KADM5_KVNO)?entry->kvno:1,
FALSE, kdb);
} else {
/* Null password means create with random key (new in 1.8). */
ret = krb5_dbe_crk(handle->context, &master_keyblock,
new_ks_tuple, new_n_ks_tuple, FALSE, kdb);
}
if (ret)
goto cleanup;
/* Record the master key VNO used to encrypt this entry's keys */
ret = krb5_dbe_update_mkvno(handle->context, kdb, act_kvno);
if (ret)
goto cleanup;
ret = k5_kadm5_hook_create(handle->context, handle->hook_handles,
KADM5_HOOK_STAGE_PRECOMMIT, entry, mask,
new_n_ks_tuple, new_ks_tuple, password);
if (ret)
goto cleanup;
/* populate the admin-server-specific fields. In the OV server,
this used to be in a separate database. Since there's already
marshalling code for the admin fields, to keep things simple,
I'm going to keep it, and make all the admin stuff occupy a
single tl_data record, */
adb.admin_history_kvno = INITIAL_HIST_KVNO;
if (mask & KADM5_POLICY) {
adb.aux_attributes = KADM5_POLICY;
/* this does *not* need to be strdup'ed, because adb is xdr */
/* encoded in osa_adb_create_princ, and not ever freed */
adb.policy = entry->policy;
}
    /* In all cases the key and principal data are set; let the database provider know. */
kdb->mask = mask | KADM5_KEY_DATA | KADM5_PRINCIPAL ;
/* store the new db entry */
ret = kdb_put_entry(handle, kdb, &adb);
(void) k5_kadm5_hook_create(handle->context, handle->hook_handles,
KADM5_HOOK_STAGE_POSTCOMMIT, entry, mask,
new_n_ks_tuple, new_ks_tuple, password);
cleanup:
free(new_ks_tuple);
krb5_db_free_principal(handle->context, kdb);
if (have_polent)
(void) kadm5_free_policy_ent(handle->lhandle, &polent);
return ret;
}
|
Safe
|
[
"CWE-703"
] |
krb5
|
b863de7fbf080b15e347a736fdda0a82d42f4f6b
|
1.882684669349457e+38
| 223 |
Check for null kadm5 policy name [CVE-2015-8630]
In kadm5_create_principal_3() and kadm5_modify_principal(), check for
entry->policy being null when KADM5_POLICY is included in the mask.
CVE-2015-8630:
In MIT krb5 1.12 and later, an authenticated attacker with permission
to modify a principal entry can cause kadmind to dereference a null
pointer by supplying a null policy value but including KADM5_POLICY in
the mask.
CVSSv2 Vector: AV:N/AC:H/Au:S/C:N/I:N/A:C/E:POC/RL:OF/RC:C
ticket: 8342 (new)
target_version: 1.14-next
target_version: 1.13-next
tags: pullup
| 0 |
irc_server_gnutls_callback (const void *pointer, void *data,
gnutls_session_t tls_session,
const gnutls_datum_t *req_ca, int nreq,
const gnutls_pk_algorithm_t *pk_algos,
int pk_algos_len,
#if LIBGNUTLS_VERSION_NUMBER >= 0x020b00 /* 2.11.0 */
gnutls_retr2_st *answer,
#else
gnutls_retr_st *answer,
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x020b00 */
int action)
{
struct t_irc_server *server;
#if LIBGNUTLS_VERSION_NUMBER >= 0x020b00 /* 2.11.0 */
gnutls_retr2_st tls_struct;
#else
gnutls_retr_st tls_struct;
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x020b00 */
gnutls_x509_crt_t cert_temp;
const gnutls_datum_t *cert_list;
gnutls_datum_t filedatum;
unsigned int i, cert_list_len, status;
time_t cert_time;
char *cert_path0, *cert_path1, *cert_path2, *cert_str, *fingerprint_eval;
char *weechat_dir, *ssl_password;
const char *ptr_fingerprint;
int rc, ret, fingerprint_match, hostname_match, cert_temp_init;
#if LIBGNUTLS_VERSION_NUMBER >= 0x010706 /* 1.7.6 */
gnutls_datum_t cinfo;
int rinfo;
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x010706 */
/* make C compiler happy */
(void) data;
(void) req_ca;
(void) nreq;
(void) pk_algos;
(void) pk_algos_len;
rc = 0;
if (!pointer)
return -1;
server = (struct t_irc_server *) pointer;
cert_temp_init = 0;
cert_list = NULL;
cert_list_len = 0;
fingerprint_eval = NULL;
weechat_dir = NULL;
if (action == WEECHAT_HOOK_CONNECT_GNUTLS_CB_VERIFY_CERT)
{
weechat_printf (
server->buffer,
_("%sgnutls: connected using %d-bit Diffie-Hellman shared secret "
"exchange"),
weechat_prefix ("network"),
IRC_SERVER_OPTION_INTEGER (server,
IRC_SERVER_OPTION_SSL_DHKEY_SIZE));
/* initialize the certificate structure */
if (gnutls_x509_crt_init (&cert_temp) != GNUTLS_E_SUCCESS)
{
weechat_printf (
server->buffer,
_("%sgnutls: failed to initialize certificate structure"),
weechat_prefix ("error"));
rc = -1;
goto end;
}
/* flag to do the "deinit" (at the end of function) */
cert_temp_init = 1;
/* get fingerprint option in server */
ptr_fingerprint = IRC_SERVER_OPTION_STRING(server,
IRC_SERVER_OPTION_SSL_FINGERPRINT);
fingerprint_eval = irc_server_eval_fingerprint (server);
if (!fingerprint_eval)
{
rc = -1;
goto end;
}
/* set match options */
fingerprint_match = (ptr_fingerprint && ptr_fingerprint[0]) ? 0 : 1;
hostname_match = 0;
/* get the peer's raw certificate (chain) as sent by the peer */
cert_list = gnutls_certificate_get_peers (tls_session, &cert_list_len);
if (cert_list)
{
weechat_printf (
server->buffer,
NG_("%sgnutls: receiving %d certificate",
"%sgnutls: receiving %d certificates",
cert_list_len),
weechat_prefix ("network"),
cert_list_len);
for (i = 0; i < cert_list_len; i++)
{
if (gnutls_x509_crt_import (cert_temp,
&cert_list[i],
GNUTLS_X509_FMT_DER) != GNUTLS_E_SUCCESS)
{
weechat_printf (
server->buffer,
_("%sgnutls: failed to import certificate[%d]"),
weechat_prefix ("error"), i + 1);
rc = -1;
goto end;
}
/* checks on first certificate received */
if (i == 0)
{
/* check if fingerprint matches the first certificate */
if (fingerprint_eval && fingerprint_eval[0])
{
fingerprint_match = irc_server_check_certificate_fingerprint (
server, cert_temp, fingerprint_eval);
}
/* check if hostname matches in the first certificate */
if (gnutls_x509_crt_check_hostname (cert_temp,
server->current_address) != 0)
{
hostname_match = 1;
}
}
#if LIBGNUTLS_VERSION_NUMBER >= 0x010706 /* 1.7.6 */
/* display infos about certificate */
#if LIBGNUTLS_VERSION_NUMBER < 0x020400 /* 2.4.0 */
rinfo = gnutls_x509_crt_print (cert_temp,
GNUTLS_X509_CRT_ONELINE, &cinfo);
#else
rinfo = gnutls_x509_crt_print (cert_temp,
GNUTLS_CRT_PRINT_ONELINE, &cinfo);
#endif /* LIBGNUTLS_VERSION_NUMBER < 0x020400 */
if (rinfo == 0)
{
weechat_printf (
server->buffer,
_("%s - certificate[%d] info:"),
weechat_prefix ("network"), i + 1);
weechat_printf (
server->buffer,
"%s - %s",
weechat_prefix ("network"), cinfo.data);
gnutls_free (cinfo.data);
}
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x010706 */
/* check dates, only if fingerprint is not set */
if (!ptr_fingerprint || !ptr_fingerprint[0])
{
/* check expiration date */
cert_time = gnutls_x509_crt_get_expiration_time (cert_temp);
if (cert_time < time (NULL))
{
weechat_printf (
server->buffer,
_("%sgnutls: certificate has expired"),
weechat_prefix ("error"));
rc = -1;
}
/* check activation date */
cert_time = gnutls_x509_crt_get_activation_time (cert_temp);
if (cert_time > time (NULL))
{
weechat_printf (
server->buffer,
_("%sgnutls: certificate is not yet activated"),
weechat_prefix ("error"));
rc = -1;
}
}
}
/*
* if fingerprint is set, display if matches, and don't check
* anything else
*/
if (ptr_fingerprint && ptr_fingerprint[0])
{
if (fingerprint_match)
{
weechat_printf (
server->buffer,
_("%sgnutls: certificate fingerprint matches"),
weechat_prefix ("network"));
}
else
{
weechat_printf (
server->buffer,
_("%sgnutls: certificate fingerprint does NOT match "
"(check value of option "
"irc.server.%s.ssl_fingerprint)"),
weechat_prefix ("error"), server->name);
rc = -1;
}
goto end;
}
if (!hostname_match)
{
weechat_printf (
server->buffer,
_("%sgnutls: the hostname in the certificate does NOT "
"match \"%s\""),
weechat_prefix ("error"), server->current_address);
rc = -1;
}
}
        /* verify the peer's certificate */
if (gnutls_certificate_verify_peers2 (tls_session, &status) < 0)
{
weechat_printf (
server->buffer,
_("%sgnutls: error while checking peer's certificate"),
weechat_prefix ("error"));
rc = -1;
goto end;
}
/* check if certificate is trusted */
if (status & GNUTLS_CERT_INVALID)
{
weechat_printf (
server->buffer,
_("%sgnutls: peer's certificate is NOT trusted"),
weechat_prefix ("error"));
rc = -1;
}
else
{
weechat_printf (
server->buffer,
_("%sgnutls: peer's certificate is trusted"),
weechat_prefix ("network"));
}
/* check if certificate issuer is known */
if (status & GNUTLS_CERT_SIGNER_NOT_FOUND)
{
weechat_printf (
server->buffer,
_("%sgnutls: peer's certificate issuer is unknown"),
weechat_prefix ("error"));
rc = -1;
}
/* check that certificate is not revoked */
if (status & GNUTLS_CERT_REVOKED)
{
weechat_printf (
server->buffer,
_("%sgnutls: the certificate has been revoked"),
weechat_prefix ("error"));
rc = -1;
}
}
else if (action == WEECHAT_HOOK_CONNECT_GNUTLS_CB_SET_CERT)
{
/* using client certificate if it exists */
cert_path0 = (char *) IRC_SERVER_OPTION_STRING(
server, IRC_SERVER_OPTION_SSL_CERT);
if (cert_path0 && cert_path0[0])
{
weechat_dir = weechat_info_get ("weechat_dir", "");
cert_path1 = weechat_string_replace (cert_path0, "%h", weechat_dir);
cert_path2 = (cert_path1) ?
weechat_string_expand_home (cert_path1) : NULL;
if (cert_path2)
{
cert_str = weechat_file_get_content (cert_path2);
if (cert_str)
{
weechat_printf (
server->buffer,
_("%sgnutls: sending one certificate"),
weechat_prefix ("network"));
filedatum.data = (unsigned char *) cert_str;
filedatum.size = strlen (cert_str);
/* certificate */
gnutls_x509_crt_init (&server->tls_cert);
gnutls_x509_crt_import (server->tls_cert, &filedatum,
GNUTLS_X509_FMT_PEM);
/* key password */
ssl_password = irc_server_eval_expression (
server,
IRC_SERVER_OPTION_STRING(server,
IRC_SERVER_OPTION_SSL_PASSWORD));
/* key */
gnutls_x509_privkey_init (&server->tls_cert_key);
/*
* gnutls_x509_privkey_import2 has no "Since: ..." in GnuTLS manual but
* GnuTLS NEWS file lists it being added in 3.1.0:
* https://gitlab.com/gnutls/gnutls/blob/2b715b9564681acb3008a5574dcf25464de8b038/NEWS#L2552
*/
#if LIBGNUTLS_VERSION_NUMBER >= 0x030100 /* 3.1.0 */
ret = gnutls_x509_privkey_import2 (server->tls_cert_key,
&filedatum,
GNUTLS_X509_FMT_PEM,
ssl_password,
0);
#else
ret = gnutls_x509_privkey_import (server->tls_cert_key,
&filedatum,
GNUTLS_X509_FMT_PEM);
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x030100 */
if (ret < 0)
{
ret = gnutls_x509_privkey_import_pkcs8 (
server->tls_cert_key,
&filedatum,
GNUTLS_X509_FMT_PEM,
ssl_password,
GNUTLS_PKCS_PLAIN);
}
if (ret < 0)
{
weechat_printf (
server->buffer,
_("%sgnutls: invalid certificate \"%s\", error: "
"%s"),
weechat_prefix ("error"), cert_path2,
gnutls_strerror (ret));
rc = -1;
}
else
{
#if LIBGNUTLS_VERSION_NUMBER >= 0x020b00 /* 2.11.0 */
tls_struct.cert_type = GNUTLS_CRT_X509;
tls_struct.key_type = GNUTLS_PRIVKEY_X509;
#else
tls_struct.type = GNUTLS_CRT_X509;
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x020b00 */
tls_struct.ncerts = 1;
tls_struct.deinit_all = 0;
tls_struct.cert.x509 = &server->tls_cert;
tls_struct.key.x509 = server->tls_cert_key;
#if LIBGNUTLS_VERSION_NUMBER >= 0x010706 /* 1.7.6 */
/* client certificate info */
#if LIBGNUTLS_VERSION_NUMBER < 0x020400 /* 2.4.0 */
rinfo = gnutls_x509_crt_print (server->tls_cert,
GNUTLS_X509_CRT_ONELINE,
&cinfo);
#else
rinfo = gnutls_x509_crt_print (server->tls_cert,
GNUTLS_CRT_PRINT_ONELINE,
&cinfo);
#endif /* LIBGNUTLS_VERSION_NUMBER < 0x020400 */
if (rinfo == 0)
{
weechat_printf (
server->buffer,
_("%s - client certificate info (%s):"),
weechat_prefix ("network"), cert_path2);
weechat_printf (
server->buffer, "%s - %s",
weechat_prefix ("network"), cinfo.data);
gnutls_free (cinfo.data);
}
#endif /* LIBGNUTLS_VERSION_NUMBER >= 0x010706 */
memcpy (answer, &tls_struct, sizeof (tls_struct));
free (cert_str);
}
if (ssl_password)
free (ssl_password);
}
else
{
weechat_printf (
server->buffer,
_("%sgnutls: unable to read certificate \"%s\""),
weechat_prefix ("error"), cert_path2);
}
}
if (cert_path1)
free (cert_path1);
if (cert_path2)
free (cert_path2);
}
}
end:
/* an error should stop the handshake unless the user doesn't care */
if ((rc == -1)
&& (IRC_SERVER_OPTION_BOOLEAN(server, IRC_SERVER_OPTION_SSL_VERIFY) == 0))
{
rc = 0;
}
if (cert_temp_init)
gnutls_x509_crt_deinit (cert_temp);
if (weechat_dir)
free (weechat_dir);
if (fingerprint_eval)
free (fingerprint_eval);
return rc;
}
|
Safe
|
[
"CWE-120",
"CWE-787"
] |
weechat
|
40ccacb4330a64802b1f1e28ed9a6b6d3ca9197f
|
1.6366549527022147e+38
| 415 |
irc: fix crash when a new message 005 is received with longer nick prefixes
Thanks to Stuart Nevans Locke for reporting the issue.
| 0 |
ImagingResampleHorizontal(Imaging imIn, int xsize, int filter)
{
ImagingSectionCookie cookie;
Imaging imOut;
struct filter *filterp;
float support, scale, filterscale;
float center, ww, ss, ss0, ss1, ss2, ss3;
int xx, yy, x, kmax, xmin, xmax;
int *xbounds;
float *k, *kk;
/* check filter */
switch (filter) {
case IMAGING_TRANSFORM_LANCZOS:
filterp = &LANCZOS;
break;
case IMAGING_TRANSFORM_BILINEAR:
filterp = &BILINEAR;
break;
case IMAGING_TRANSFORM_BICUBIC:
filterp = &BICUBIC;
break;
default:
return (Imaging) ImagingError_ValueError(
"unsupported resampling filter"
);
}
/* prepare for horizontal stretch */
filterscale = scale = (float) imIn->xsize / xsize;
/* determine support size (length of resampling filter) */
support = filterp->support;
if (filterscale < 1.0) {
filterscale = 1.0;
}
support = support * filterscale;
    /* maximum number of coefficients */
kmax = (int) ceil(support) * 2 + 1;
// check for overflow
if (kmax > 0 && xsize > SIZE_MAX / kmax)
return (Imaging) ImagingError_MemoryError();
// sizeof(float) should be greater than 0
if (xsize * kmax > SIZE_MAX / sizeof(float))
return (Imaging) ImagingError_MemoryError();
/* coefficient buffer */
kk = malloc(xsize * kmax * sizeof(float));
if ( ! kk)
return (Imaging) ImagingError_MemoryError();
// sizeof(int) should be greater than 0 as well
    if (xsize > SIZE_MAX / (2 * sizeof(int))) {
        free(kk);
        return (Imaging) ImagingError_MemoryError();
    }
xbounds = malloc(xsize * 2 * sizeof(int));
if ( ! xbounds) {
free(kk);
return (Imaging) ImagingError_MemoryError();
}
for (xx = 0; xx < xsize; xx++) {
k = &kk[xx * kmax];
center = (xx + 0.5) * scale;
ww = 0.0;
ss = 1.0 / filterscale;
xmin = (int) floor(center - support);
if (xmin < 0)
xmin = 0;
xmax = (int) ceil(center + support);
if (xmax > imIn->xsize)
xmax = imIn->xsize;
for (x = xmin; x < xmax; x++) {
float w = filterp->filter((x - center + 0.5) * ss) * ss;
k[x - xmin] = w;
ww += w;
}
for (x = 0; x < xmax - xmin; x++) {
if (ww != 0.0)
k[x] /= ww;
}
xbounds[xx * 2 + 0] = xmin;
xbounds[xx * 2 + 1] = xmax;
}
imOut = ImagingNew(imIn->mode, xsize, imIn->ysize);
if ( ! imOut) {
free(kk);
free(xbounds);
return NULL;
}
ImagingSectionEnter(&cookie);
/* horizontal stretch */
for (yy = 0; yy < imOut->ysize; yy++) {
if (imIn->image8) {
/* 8-bit grayscale */
for (xx = 0; xx < xsize; xx++) {
xmin = xbounds[xx * 2 + 0];
xmax = xbounds[xx * 2 + 1];
k = &kk[xx * kmax];
ss = 0.5;
for (x = xmin; x < xmax; x++)
ss += i2f(imIn->image8[yy][x]) * k[x - xmin];
imOut->image8[yy][xx] = clip8(ss);
}
} else {
switch(imIn->type) {
case IMAGING_TYPE_UINT8:
/* n-bit grayscale */
if (imIn->bands == 2) {
for (xx = 0; xx < xsize; xx++) {
xmin = xbounds[xx * 2 + 0];
xmax = xbounds[xx * 2 + 1];
k = &kk[xx * kmax];
ss0 = ss1 = 0.5;
for (x = xmin; x < xmax; x++) {
ss0 += i2f((UINT8) imIn->image[yy][x*4 + 0]) * k[x - xmin];
ss1 += i2f((UINT8) imIn->image[yy][x*4 + 3]) * k[x - xmin];
}
imOut->image[yy][xx*4 + 0] = clip8(ss0);
imOut->image[yy][xx*4 + 3] = clip8(ss1);
}
} else if (imIn->bands == 3) {
for (xx = 0; xx < xsize; xx++) {
xmin = xbounds[xx * 2 + 0];
xmax = xbounds[xx * 2 + 1];
k = &kk[xx * kmax];
ss0 = ss1 = ss2 = 0.5;
for (x = xmin; x < xmax; x++) {
ss0 += i2f((UINT8) imIn->image[yy][x*4 + 0]) * k[x - xmin];
ss1 += i2f((UINT8) imIn->image[yy][x*4 + 1]) * k[x - xmin];
ss2 += i2f((UINT8) imIn->image[yy][x*4 + 2]) * k[x - xmin];
}
imOut->image[yy][xx*4 + 0] = clip8(ss0);
imOut->image[yy][xx*4 + 1] = clip8(ss1);
imOut->image[yy][xx*4 + 2] = clip8(ss2);
}
} else {
for (xx = 0; xx < xsize; xx++) {
xmin = xbounds[xx * 2 + 0];
xmax = xbounds[xx * 2 + 1];
k = &kk[xx * kmax];
ss0 = ss1 = ss2 = ss3 = 0.5;
for (x = xmin; x < xmax; x++) {
ss0 += i2f((UINT8) imIn->image[yy][x*4 + 0]) * k[x - xmin];
ss1 += i2f((UINT8) imIn->image[yy][x*4 + 1]) * k[x - xmin];
ss2 += i2f((UINT8) imIn->image[yy][x*4 + 2]) * k[x - xmin];
ss3 += i2f((UINT8) imIn->image[yy][x*4 + 3]) * k[x - xmin];
}
imOut->image[yy][xx*4 + 0] = clip8(ss0);
imOut->image[yy][xx*4 + 1] = clip8(ss1);
imOut->image[yy][xx*4 + 2] = clip8(ss2);
imOut->image[yy][xx*4 + 3] = clip8(ss3);
}
}
break;
case IMAGING_TYPE_INT32:
/* 32-bit integer */
for (xx = 0; xx < xsize; xx++) {
xmin = xbounds[xx * 2 + 0];
xmax = xbounds[xx * 2 + 1];
k = &kk[xx * kmax];
ss = 0.0;
for (x = xmin; x < xmax; x++)
ss += i2f(IMAGING_PIXEL_I(imIn, x, yy)) * k[x - xmin];
IMAGING_PIXEL_I(imOut, xx, yy) = (int) ss;
}
break;
case IMAGING_TYPE_FLOAT32:
/* 32-bit float */
for (xx = 0; xx < xsize; xx++) {
xmin = xbounds[xx * 2 + 0];
xmax = xbounds[xx * 2 + 1];
k = &kk[xx * kmax];
ss = 0.0;
for (x = xmin; x < xmax; x++)
ss += IMAGING_PIXEL_F(imIn, x, yy) * k[x - xmin];
IMAGING_PIXEL_F(imOut, xx, yy) = ss;
}
break;
}
}
}
ImagingSectionLeave(&cookie);
free(kk);
free(xbounds);
return imOut;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
Pillow
|
4e0d9b0b9740d258ade40cce248c93777362ac1e
|
2.78368102239443e+38
| 194 |
fix integer overflow in Resample.c
| 0 |
static int stream_destroy_iter(void *ctx, void *val)
{
h2_mplx *m = ctx;
h2_stream *stream = val;
h2_ihash_remove(m->spurge, stream->id);
ap_assert(stream->state == H2_SS_CLEANUP);
if (stream->input) {
/* Process outstanding events before destruction */
input_consumed_signal(m, stream);
h2_beam_log(stream->input, m->c, APLOG_TRACE2, "stream_destroy");
h2_beam_destroy(stream->input);
stream->input = NULL;
}
if (stream->task) {
h2_task *task = stream->task;
conn_rec *slave;
int reuse_slave = 0;
stream->task = NULL;
slave = task->c;
if (slave) {
            /* On non-serialized requests, the IO logging has not accounted for any
             * meta data sent over the network: response headers and h2 frame headers.
             * We counted this on the stream and need to add it now.
             * This is supposed to happen before the EOR bucket triggers the
             * logging of the transaction. *fingers crossed* */
if (task->request && !task->request->serialize && h2_task_logio_add_bytes_out) {
apr_off_t unaccounted = stream->out_frame_octets - stream->out_data_octets;
if (unaccounted > 0) {
h2_task_logio_add_bytes_out(slave, unaccounted);
}
}
if (m->s->keep_alive_max == 0 || slave->keepalives < m->s->keep_alive_max) {
reuse_slave = ((m->spare_slaves->nelts < (m->limit_active * 3 / 2))
&& !task->rst_error);
}
if (reuse_slave && slave->keepalive == AP_CONN_KEEPALIVE) {
h2_beam_log(task->output.beam, m->c, APLOG_DEBUG,
APLOGNO(03385) "h2_task_destroy, reuse slave");
h2_task_destroy(task);
APR_ARRAY_PUSH(m->spare_slaves, conn_rec*) = slave;
}
else {
h2_beam_log(task->output.beam, m->c, APLOG_TRACE1,
"h2_task_destroy, destroy slave");
h2_slave_destroy(slave);
}
}
}
h2_stream_destroy(stream);
return 0;
}
|
Safe
|
[
"CWE-444"
] |
mod_h2
|
825de6a46027b2f4c30d7ff5a0c8b852d639c207
|
7.767777010768395e+37
| 57 |
* Fixed keepalives counter on slave connections.
| 0 |
zfs_zaccess_aces_check(znode_t *zp, uint32_t *working_mode,
boolean_t anyaccess, cred_t *cr)
{
zfsvfs_t *zfsvfs = zp->z_zfsvfs;
zfs_acl_t *aclp;
int error;
uid_t uid = crgetuid(cr);
uint64_t who;
uint16_t type, iflags;
uint16_t entry_type;
uint32_t access_mask;
uint32_t deny_mask = 0;
zfs_ace_hdr_t *acep = NULL;
boolean_t checkit;
uid_t gowner;
uid_t fowner;
zfs_fuid_map_ids(zp, cr, &fowner, &gowner);
mutex_enter(&zp->z_acl_lock);
if (zp->z_zfsvfs->z_replay == B_FALSE)
ASSERT_VOP_LOCKED(ZTOV(zp), __func__);
error = zfs_acl_node_read(zp, B_TRUE, &aclp, B_FALSE);
if (error != 0) {
mutex_exit(&zp->z_acl_lock);
return (error);
}
ASSERT(zp->z_acl_cached);
while ((acep = zfs_acl_next_ace(aclp, acep, &who, &access_mask,
&iflags, &type))) {
uint32_t mask_matched;
if (!zfs_acl_valid_ace_type(type, iflags))
continue;
if (ZTOV(zp)->v_type == VDIR && (iflags & ACE_INHERIT_ONLY_ACE))
continue;
/* Skip ACE if it does not affect any AoI */
mask_matched = (access_mask & *working_mode);
if (!mask_matched)
continue;
entry_type = (iflags & ACE_TYPE_FLAGS);
checkit = B_FALSE;
switch (entry_type) {
case ACE_OWNER:
if (uid == fowner)
checkit = B_TRUE;
break;
case OWNING_GROUP:
who = gowner;
/*FALLTHROUGH*/
case ACE_IDENTIFIER_GROUP:
checkit = zfs_groupmember(zfsvfs, who, cr);
break;
case ACE_EVERYONE:
checkit = B_TRUE;
break;
/* USER Entry */
default:
if (entry_type == 0) {
uid_t newid;
newid = zfs_fuid_map_id(zfsvfs, who, cr,
ZFS_ACE_USER);
if (newid != UID_NOBODY &&
uid == newid)
checkit = B_TRUE;
break;
} else {
mutex_exit(&zp->z_acl_lock);
return (SET_ERROR(EIO));
}
}
if (checkit) {
if (type == DENY) {
DTRACE_PROBE3(zfs__ace__denies,
znode_t *, zp,
zfs_ace_hdr_t *, acep,
uint32_t, mask_matched);
deny_mask |= mask_matched;
} else {
DTRACE_PROBE3(zfs__ace__allows,
znode_t *, zp,
zfs_ace_hdr_t *, acep,
uint32_t, mask_matched);
if (anyaccess) {
mutex_exit(&zp->z_acl_lock);
return (0);
}
}
*working_mode &= ~mask_matched;
}
/* Are we done? */
if (*working_mode == 0)
break;
}
mutex_exit(&zp->z_acl_lock);
/* Put the found 'denies' back on the working mode */
if (deny_mask) {
*working_mode |= deny_mask;
return (SET_ERROR(EACCES));
} else if (*working_mode) {
return (-1);
}
return (0);
}
|
Safe
|
[
"CWE-200",
"CWE-732"
] |
zfs
|
716b53d0a14c72bda16c0872565dd1909757e73f
|
6.884406879080577e+37
| 119 |
FreeBSD: Fix UNIX permissions checking
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matt Macy <mmacy@FreeBSD.org>
Closes #10727
| 0 |
static char *phar_get_link_location(phar_entry_info *entry) /* {{{ */
{
char *p, *ret = NULL;
if (!entry->link) {
return NULL;
}
if (entry->link[0] == '/') {
return estrdup(entry->link + 1);
}
p = strrchr(entry->filename, '/');
if (p) {
*p = '\0';
spprintf(&ret, 0, "%s/%s", entry->filename, entry->link);
return ret;
}
return entry->link;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
php-src
|
0bfb970f43acd1e81d11be1154805f86655f15d5
|
3.0267486518109204e+38
| 17 |
Fix bug #72928 - Out of bound when verify signature of zip phar in phar_parse_zipfile
(cherry picked from commit 19484ab77466f99c78fc0e677f7e03da0584d6a2)
| 0 |
struct ion_buffer *ion_handle_buffer(struct ion_handle *handle)
{
return handle->buffer;
}
|
Safe
|
[
"CWE-416",
"CWE-284"
] |
linux
|
9590232bb4f4cc824f3425a6e1349afbe6d6d2b7
|
7.544292891969821e+37
| 4 |
staging/android/ion : fix a race condition in the ion driver
There is a use-after-free problem in the ion driver.
This is caused by a race condition in the ion_ioctl()
function.
A handle has ref count of 1 and two tasks on different
cpus calls ION_IOC_FREE simultaneously.
cpu 0 cpu 1
-------------------------------------------------------
ion_handle_get_by_id()
(ref == 2)
ion_handle_get_by_id()
(ref == 3)
ion_free()
(ref == 2)
ion_handle_put()
(ref == 1)
ion_free()
(ref == 0 so ion_handle_destroy() is
called
and the handle is freed.)
ion_handle_put() is called and it
decreases the slub's next free pointer
The problem is detected as an unaligned access in the
spin lock functions since it uses load exclusive
instruction. In some cases it corrupts the slub's
free pointer which causes a mis-aligned access to the
next free pointer.(kmalloc returns a pointer like
ffffc0745b4580aa). And it causes lots of other
hard-to-debug problems.
This symptom is caused since the first member in the
ion_handle structure is the reference count and the
ion driver decrements the reference after it has been
freed.
To fix this problem client->lock mutex is extended
to protect all the codes that uses the handle.
Signed-off-by: Eun Taik Lee <eun.taik.lee@samsung.com>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
static int offsetCmp(const void * avp, const void * bvp)
{
indexEntry ap = (indexEntry) avp, bp = (indexEntry) bvp;
int rc = (ap->info.offset - bp->info.offset);
if (rc == 0) {
/* Within a region, entries sort by address. Added drips sort by tag. */
if (ap->info.offset < 0)
rc = (((char *)ap->data) - ((char *)bp->data));
else
rc = (ap->info.tag - bp->info.tag);
}
return rc;
}
|
Safe
|
[
"CWE-125"
] |
rpm
|
8f4b3c3cab8922a2022b9e47c71f1ecf906077ef
|
2.724866371173349e+38
| 14 |
hdrblobInit() needs bounds checks too
Users can pass untrusted data to hdrblobInit() and it must be robust
against this.
| 0 |
MATCHER_P(ApplicationProtocolListEq, expected, "") {
const Network::TransportSocketOptionsConstSharedPtr& options = arg;
EXPECT_EQ(options->applicationProtocolListOverride(), std::vector<std::string>{expected});
return true;
}
|
Safe
|
[
"CWE-476"
] |
envoy
|
9b1c3962172a972bc0359398af6daa3790bb59db
|
3.2080772838231007e+38
| 5 |
healthcheck: fix grpc inline removal crashes (#749)
Signed-off-by: Matt Klein <mklein@lyft.com>
Signed-off-by: Pradeep Rao <pcrao@google.com>
| 0 |
void reset() {
tType = DRW::LAYER;
lineType = "CONTINUOUS";
color = 7; // default BYLAYER (256)
plotF = true; // default TRUE (plot yes)
lWeight = DRW_LW_Conv::widthDefault; // default BYDEFAULT (dxf -3, dwg 31)
color24 = -1; //default -1 not set
DRW_TableEntry::reset();
}
|
Safe
|
[
"CWE-191"
] |
libdxfrw
|
fcd977cc7f8f6cc7f012e5b72d33cf7d77b3fa69
|
3.0597329593155363e+38
| 9 |
fixed heap use after free vulnerability CVE-2021-21900
as reported in TALOS-2021-1351 / CVE-2021-21900,
DRW_TableEntry::parseCode had the potential to trigger an use after free exception with a malformed DXF file.
| 0 |
int ar6000_target_config_wlan_params(struct ar6_softc *ar)
{
int status = 0;
#ifdef CONFIG_HOST_TCMD_SUPPORT
if (ar->arTargetMode != AR6000_WLAN_MODE) {
return 0;
}
#endif /* CONFIG_HOST_TCMD_SUPPORT */
    /*
     * Configure the device for rx dot11 header rules. 0,0 are the default
     * values, therefore this command can be skipped if the inputs are
     * 0,FALSE,FALSE. Required if checksum offload is needed. Set
     * RxMetaVersion to 2.
     */
if ((wmi_set_rx_frame_format_cmd(ar->arWmi,ar->rxMetaVersion, processDot11Hdr, processDot11Hdr)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set the rx frame format.\n"));
status = A_ERROR;
}
status = ath6kl_config_btcoex_params(ar);
if (status)
return status;
#if WLAN_CONFIG_IGNORE_POWER_SAVE_FAIL_EVENT_DURING_SCAN
if ((wmi_pmparams_cmd(ar->arWmi, 0, 1, 0, 0, 1, IGNORE_POWER_SAVE_FAIL_EVENT_DURING_SCAN)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set power save fail event policy\n"));
status = A_ERROR;
}
#endif
#if WLAN_CONFIG_DONOT_IGNORE_BARKER_IN_ERP
if ((wmi_set_lpreamble_cmd(ar->arWmi, 0, WMI_DONOT_IGNORE_BARKER_IN_ERP)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set barker preamble policy\n"));
status = A_ERROR;
}
#endif
if ((wmi_set_keepalive_cmd(ar->arWmi, WLAN_CONFIG_KEEP_ALIVE_INTERVAL)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set keep alive interval\n"));
status = A_ERROR;
}
#if WLAN_CONFIG_DISABLE_11N
{
WMI_SET_HT_CAP_CMD htCap;
memset(&htCap, 0, sizeof(WMI_SET_HT_CAP_CMD));
htCap.band = 0;
if ((wmi_set_ht_cap_cmd(ar->arWmi, &htCap)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set ht capabilities \n"));
status = A_ERROR;
}
htCap.band = 1;
if ((wmi_set_ht_cap_cmd(ar->arWmi, &htCap)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set ht capabilities \n"));
status = A_ERROR;
}
}
#endif /* WLAN_CONFIG_DISABLE_11N */
#ifdef ATH6K_CONFIG_OTA_MODE
if ((wmi_powermode_cmd(ar->arWmi, MAX_PERF_POWER)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set power mode \n"));
status = A_ERROR;
}
#endif
if ((wmi_disctimeout_cmd(ar->arWmi, WLAN_CONFIG_DISCONNECT_TIMEOUT)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set disconnect timeout \n"));
status = A_ERROR;
}
#if WLAN_CONFIG_DISABLE_TX_BURSTING
if ((wmi_set_wmm_txop(ar->arWmi, WMI_TXOP_DISABLED)) != 0) {
AR_DEBUG_PRINTF(ATH_DEBUG_ERR,("Unable to set txop bursting \n"));
status = A_ERROR;
}
#endif
return status;
}
|
Safe
|
[
"CWE-703",
"CWE-264"
] |
linux
|
550fd08c2cebad61c548def135f67aba284c6162
|
2.930819150088068e+37
| 83 |
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared
After the last patch, We are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs. There are a handful of drivers that violate this assumption of
course, and need to be fixed up. This patch identifies those drivers, and marks
them as not being able to support the safe transmission of skbs by clearning the
IFF_TX_SKB_SHARING flag in priv_flags
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Karsten Keil <isdn@linux-pingi.de>
CC: "David S. Miller" <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Patrick McHardy <kaber@trash.net>
CC: Krzysztof Halasa <khc@pm.waw.pl>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: Marcel Holtmann <marcel@holtmann.org>
CC: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
select_send_analyze(THD *thd_arg): select_send(thd_arg) {}
|
Safe
|
[
"CWE-416"
] |
server
|
4681b6f2d8c82b4ec5cf115e83698251963d80d5
|
8.548375904356982e+37
| 1 |
MDEV-26281 ASAN use-after-poison when complex conversion is involved in blob
the bug was that in_vector array in Item_func_in was allocated in the
statement arena, not in the table->expr_arena.
revert part of the 5acd391e8b2d. Instead, change the arena correctly
in fix_all_session_vcol_exprs().
Remove TABLE_ARENA, that was introduced in 5acd391e8b2d to force
item tree changes to be rolled back (because they were allocated in the
wrong arena and didn't persist. now they do)
| 0 |
cmd_loadfile(char *fn)
{
Buffer *buf;
buf = loadGeneralFile(file_to_url(fn), NULL, NO_REFERER, 0, NULL);
if (buf == NULL) {
/* FIXME: gettextize? */
char *emsg = Sprintf("%s not found", conv_from_system(fn))->ptr;
disp_err_message(emsg, FALSE);
}
else if (buf != NO_BUFFER) {
pushBuffer(buf);
if (RenderFrame && Currentbuf->frameset != NULL)
rFrame();
}
displayBuffer(Currentbuf, B_NORMAL);
}
|
Safe
|
[
"CWE-59",
"CWE-241"
] |
w3m
|
18dcbadf2771cdb0c18509b14e4e73505b242753
|
4.632499223237737e+37
| 17 |
Make temporary directory safely when ~/.w3m is unwritable
| 0 |
mm_answer_authserv(int sock, Buffer *m)
{
monitor_permit_authentications(1);
authctxt->service = buffer_get_string(m, NULL);
authctxt->style = buffer_get_string(m, NULL);
debug3("%s: service=%s, style=%s",
__func__, authctxt->service, authctxt->style);
if (strlen(authctxt->style) == 0) {
free(authctxt->style);
authctxt->style = NULL;
}
return (0);
}
|
Safe
|
[
"CWE-20",
"CWE-200"
] |
openssh-portable
|
d4697fe9a28dab7255c60433e4dd23cf7fce8a8b
|
2.209702364914724e+38
| 16 |
Don't resend username to PAM; it already has it.
Pointed out by Moritz Jodeit; ok dtucker@
| 0 |
sctp_disposition_t sctp_sf_do_8_5_1_E_sa(const struct sctp_endpoint *ep,
const struct sctp_association *asoc,
const sctp_subtype_t type,
void *arg,
sctp_cmd_seq_t *commands)
{
/* Although we do have an association in this case, it corresponds
* to a restarted association. So the packet is treated as an OOTB
* packet and the state function that handles OOTB SHUTDOWN_ACK is
* called with a NULL association.
*/
return sctp_sf_shut_8_4_5(ep, NULL, type, arg, commands);
}
|
Safe
|
[] |
linux-2.6
|
7c3ceb4fb9667f34f1599a062efecf4cdc4a4ce5
|
2.0094805051975307e+37
| 13 |
[SCTP]: Allow spillover of receive buffer to avoid deadlock.
This patch fixes a deadlock situation in the receive path by allowing
temporary spillover of the receive buffer.
- If the chunk we receive has a tsn that immediately follows the ctsn,
accept it even if we run out of receive buffer space and renege data with
higher TSNs.
- Once we accept one chunk in a packet, accept all the remaining chunks
even if we run out of receive buffer space.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Mark Butler <butlerm@middle.net>
Acked-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index)
{
struct extent_info ei;
struct inode *inode = dn->inode;
if (f2fs_lookup_extent_cache(inode, index, &ei)) {
dn->data_blkaddr = ei.blk + index - ei.fofs;
return 0;
}
return f2fs_reserve_block(dn, index);
}
|
Safe
|
[
"CWE-190"
] |
linux
|
b86e33075ed1909d8002745b56ecf73b833db143
|
2.314121643295011e+38
| 12 |
f2fs: fix a dead loop in f2fs_fiemap()
A dead loop can be triggered in f2fs_fiemap() using the test case
as below:
...
fd = open();
fallocate(fd, 0, 0, 4294967296);
ioctl(fd, FS_IOC_FIEMAP, fiemap_buf);
...
It's caused by an overflow in __get_data_block():
...
bh->b_size = map.m_len << inode->i_blkbits;
...
map.m_len is an unsigned int, and bh->b_size is a size_t which is 64 bits
on 64 bits archtecture, type conversion from an unsigned int to a size_t
will result in an overflow.
In the above-mentioned case, bh->b_size will be zero, and f2fs_fiemap()
will call get_data_block() at block 0 again an again.
Fix this by adding a force conversion before left shift.
Signed-off-by: Wei Fang <fangwei1@huawei.com>
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
| 0 |
check_options(
dle_t *dle)
{
if (GPOINTER_TO_INT(dle->estimatelist->data) == ES_CALCSIZE) {
need_calcsize=1;
}
if (strcmp(dle->program,"GNUTAR") == 0) {
need_gnutar=1;
if(dle->device && dle->device[0] == '/' && dle->device[1] == '/') {
if(dle->exclude_file && dle->exclude_file->nb_element > 1) {
g_printf(_("ERROR [samba support only one exclude file]\n"));
}
if (dle->exclude_list && dle->exclude_list->nb_element > 0 &&
dle->exclude_optional==0) {
g_printf(_("ERROR [samba does not support exclude list]\n"));
}
if (dle->include_file && dle->include_file->nb_element > 0) {
g_printf(_("ERROR [samba does not support include file]\n"));
}
if (dle->include_list && dle->include_list->nb_element > 0 &&
dle->include_optional==0) {
g_printf(_("ERROR [samba does not support include list]\n"));
}
need_samba=1;
} else {
int nb_exclude = 0;
int nb_include = 0;
char *file_exclude = NULL;
char *file_include = NULL;
if (dle->exclude_file) nb_exclude += dle->exclude_file->nb_element;
if (dle->exclude_list) nb_exclude += dle->exclude_list->nb_element;
if (dle->include_file) nb_include += dle->include_file->nb_element;
if (dle->include_list) nb_include += dle->include_list->nb_element;
if (nb_exclude > 0) file_exclude = build_exclude(dle, 1);
if (nb_include > 0) file_include = build_include(dle, 1);
amfree(file_exclude);
amfree(file_include);
need_runtar=1;
}
}
if (strcmp(dle->program,"DUMP") == 0) {
if (dle->exclude_file && dle->exclude_file->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support exclude file]\n"));
}
if (dle->exclude_list && dle->exclude_list->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support exclude list]\n"));
}
if (dle->include_file && dle->include_file->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support include file]\n"));
}
if (dle->include_list && dle->include_list->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support include list]\n"));
}
#ifdef USE_RUNDUMP
need_rundump=1;
#endif
#ifndef AIX_BACKUP
#ifdef VDUMP
#ifdef DUMP
if (dle->device && strcmp(amname_to_fstype(dle->device), "advfs") == 0)
#else
if (1)
#endif
{
need_vdump=1;
need_rundump=1;
if (dle->create_index)
need_vrestore=1;
}
else
#endif /* VDUMP */
#ifdef XFSDUMP
#ifdef DUMP
if (dle->device && strcmp(amname_to_fstype(dle->device), "xfs") == 0)
#else
if (1)
#endif
{
need_xfsdump=1;
need_rundump=1;
if (dle->create_index)
need_xfsrestore=1;
}
else
#endif /* XFSDUMP */
#ifdef VXDUMP
#ifdef DUMP
if (dle->device && strcmp(amname_to_fstype(dle->device), "vxfs") == 0)
#else
if (1)
#endif
{
need_vxdump=1;
if (dle->create_index)
need_vxrestore=1;
}
else
#endif /* VXDUMP */
{
need_dump=1;
if (dle->create_index)
need_restore=1;
}
#else
/* AIX backup program */
need_dump=1;
if (dle->create_index)
need_restore=1;
#endif
}
if ((dle->compress == COMP_BEST) || (dle->compress == COMP_FAST)
|| (dle->compress == COMP_CUST)) {
need_compress_path=1;
}
if (dle->auth && amandad_auth) {
if (strcasecmp(dle->auth, amandad_auth) != 0) {
g_fprintf(stdout,_("ERROR [client configured for auth=%s while server requested '%s']\n"),
amandad_auth, dle->auth);
if (strcmp(dle->auth, "ssh") == 0) {
g_fprintf(stderr, _("ERROR [The auth in ~/.ssh/authorized_keys "
"should be \"--auth=ssh\", or use another auth "
" for the DLE]\n"));
}
else {
g_fprintf(stderr, _("ERROR [The auth in the inetd/xinetd configuration "
" must be the same as the DLE]\n"));
}
}
}
}
|
Safe
|
[
"CWE-264"
] |
amanda
|
4bf5b9b356848da98560ffbb3a07a9cb5c4ea6d7
|
9.170119737122664e+37
| 136 |
* Add a /etc/amanda-security.conf file
git-svn-id: https://svn.code.sf.net/p/amanda/code/amanda/branches/3_3@6486 a8d146d6-cc15-0410-8900-af154a0219e0
| 0 |
void Compute(OpKernelContext* context) override {
const Tensor& input = context->input(0);
const auto& input_min_tensor = context->input(1);
OP_REQUIRES(context, input_min_tensor.NumElements() == 1,
errors::InvalidArgument("input_min must have 1 element"));
const float input_min = input_min_tensor.flat<float>()(0);
const auto& input_max_tensor = context->input(2);
OP_REQUIRES(context, input_max_tensor.NumElements() == 1,
errors::InvalidArgument("input_max must have 1 element"));
const float input_max = input_max_tensor.flat<float>()(0);
const Tensor& mean = context->input(3);
const auto& mean_min_tensor = context->input(4);
OP_REQUIRES(context, mean_min_tensor.NumElements() == 1,
errors::InvalidArgument("mean_min must have 1 element"));
const float mean_min = mean_min_tensor.flat<float>()(0);
const auto& mean_max_tensor = context->input(5);
OP_REQUIRES(context, mean_max_tensor.NumElements() == 1,
errors::InvalidArgument("mean_max must have 1 element"));
const float mean_max = mean_max_tensor.flat<float>()(0);
const Tensor& var = context->input(6);
const auto& var_min_tensor = context->input(7);
OP_REQUIRES(context, var_min_tensor.NumElements() == 1,
errors::InvalidArgument("var_min must have 1 element"));
const float var_min = var_min_tensor.flat<float>()(0);
const auto& var_max_tensor = context->input(8);
OP_REQUIRES(context, var_max_tensor.NumElements() == 1,
errors::InvalidArgument("var_max must have 1 element"));
const float var_max = var_max_tensor.flat<float>()(0);
const Tensor& beta = context->input(9);
const auto& beta_min_tensor = context->input(10);
OP_REQUIRES(context, beta_min_tensor.NumElements() == 1,
errors::InvalidArgument("beta_min must have 1 element"));
const float beta_min = beta_min_tensor.flat<float>()(0);
const auto& beta_max_tensor = context->input(11);
OP_REQUIRES(context, beta_max_tensor.NumElements() == 1,
errors::InvalidArgument("beta_max must have 1 element"));
const float beta_max = beta_max_tensor.flat<float>()(0);
const Tensor& gamma = context->input(12);
const auto& gamma_min_tensor = context->input(13);
OP_REQUIRES(context, gamma_min_tensor.NumElements() == 1,
errors::InvalidArgument("gamma_min must have 1 element"));
const float gamma_min = gamma_min_tensor.flat<float>()(0);
const auto& gamma_max_tensor = context->input(14);
OP_REQUIRES(context, gamma_max_tensor.NumElements() == 1,
errors::InvalidArgument("gamma_max must have 1 element"));
const float gamma_max = gamma_max_tensor.flat<float>()(0);
OP_REQUIRES(context, input.dims() == 4,
errors::InvalidArgument("input must be 4-dimensional",
input.shape().DebugString()));
OP_REQUIRES(context, mean.dims() == 1,
errors::InvalidArgument("mean must be 1-dimensional",
mean.shape().DebugString()));
OP_REQUIRES(context, var.dims() == 1,
errors::InvalidArgument("var must be 1-dimensional",
var.shape().DebugString()));
OP_REQUIRES(context, beta.dims() == 1,
errors::InvalidArgument("beta must be 1-dimensional",
beta.shape().DebugString()));
OP_REQUIRES(context, gamma.dims() == 1,
errors::InvalidArgument("gamma must be 1-dimensional",
gamma.shape().DebugString()));
    OP_REQUIRES(context, mean.NumElements() > 1,
                errors::InvalidArgument("Must have at least a mean value",
                                        mean.shape().DebugString()));
const auto last_dim = input.shape().dims() - 1;
OP_REQUIRES(context,
mean.shape().dim_size(0) == input.shape().dim_size(last_dim),
errors::InvalidArgument("Must provide as many means as the "
"last dimension of the input tensor: ",
mean.shape().DebugString(), " vs. ",
input.shape().DebugString()));
OP_REQUIRES(
context, mean.shape().dim_size(0) == var.shape().dim_size(0),
errors::InvalidArgument(
"Mean and variance tensors must have the same shape: ",
mean.shape().DebugString(), " vs. ", var.shape().DebugString()));
OP_REQUIRES(
context, mean.shape().dim_size(0) == beta.shape().dim_size(0),
errors::InvalidArgument(
"Mean and beta tensors must have the same shape: ",
mean.shape().DebugString(), " vs. ", beta.shape().DebugString()));
OP_REQUIRES(
context, mean.shape().dim_size(0) == gamma.shape().dim_size(0),
errors::InvalidArgument(
"Mean and gamma tensors must have the same shape: ",
mean.shape().DebugString(), " vs. ", gamma.shape().DebugString()));
Tensor* output = nullptr;
OP_REQUIRES_OK(context,
context->allocate_output(0, input.shape(), &output));
float output_min;
float output_max;
FixedPointBatchNorm<T1, T2>(input, input_min, input_max, mean, mean_min,
mean_max, var, var_min, var_max, beta, beta_min,
beta_max, gamma, gamma_min, gamma_max,
variance_epsilon_, scale_after_normalization_,
output, &output_min, &output_max);
Tensor* output_min_tensor = nullptr;
OP_REQUIRES_OK(context,
context->allocate_output(1, {}, &output_min_tensor));
output_min_tensor->flat<float>()(0) = output_min;
Tensor* output_max_tensor = nullptr;
OP_REQUIRES_OK(context,
context->allocate_output(2, {}, &output_max_tensor));
output_max_tensor->flat<float>()(0) = output_max;
}
|
Safe
|
[
"CWE-369"
] |
tensorflow
|
d6ed5bcfe1dcab9e85a4d39931bd18d99018e75b
|
1.1591315942927755e+38
| 111 |
Add missing validation in `QuantizedBatchNormWithGlobalNormalization`
PiperOrigin-RevId: 370123451
Change-Id: Id234d6dab1ec21230bb8e503dba30f899af87f33
| 0 |
static void loongarch_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
LoongArchCPU *cpu = LOONGARCH_CPU(cs);
CPULoongArchState *env = &cpu->env;
env->pc = tb->pc;
}
|
Safe
|
[] |
qemu
|
3517fb726741c109cae7995f9ea46f0cab6187d6
|
5.819876663416372e+37
| 8 |
target/loongarch: Clean up tlb when cpu reset
We should make sure that the tlb is clean when the cpu resets.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220705070950.2364243-1-gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
| 0 |
void sctp_assoc_sync_pmtu(struct sctp_association *asoc)
{
struct sctp_transport *t;
struct list_head *pos;
__u32 pmtu = 0;
if (!asoc)
return;
/* Get the lowest pmtu of all the transports. */
list_for_each(pos, &asoc->peer.transport_addr_list) {
t = list_entry(pos, struct sctp_transport, transports);
if (t->pmtu_pending && t->dst) {
sctp_transport_update_pmtu(t, dst_mtu(t->dst));
t->pmtu_pending = 0;
}
if (!pmtu || (t->pathmtu < pmtu))
pmtu = t->pathmtu;
}
if (pmtu) {
struct sctp_sock *sp = sctp_sk(asoc->base.sk);
asoc->pathmtu = pmtu;
asoc->frag_point = sctp_frag_point(sp, pmtu);
}
SCTP_DEBUG_PRINTK("%s: asoc:%p, pmtu:%d, frag_point:%d\n",
__FUNCTION__, asoc, asoc->pathmtu, asoc->frag_point);
}
|
Safe
|
[] |
linux
|
bbd0d59809f923ea2b540cbd781b32110e249f6e
|
2.7632804677737838e+38
| 29 |
[SCTP]: Implement the receive and verification of AUTH chunk
This patch implements the receive path needed to process authenticated
chunks. Add ability to process the AUTH chunk and handle edge cases
for authenticated COOKIE-ECHO as well.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
struct kvm_sregs *sregs)
{
struct desc_ptr dt;
kvm_get_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
kvm_get_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
kvm_get_segment(vcpu, &sregs->es, VCPU_SREG_ES);
kvm_get_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
kvm_get_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
kvm_get_segment(vcpu, &sregs->ss, VCPU_SREG_SS);
kvm_get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
kvm_get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
kvm_x86_ops->get_idt(vcpu, &dt);
sregs->idt.limit = dt.size;
sregs->idt.base = dt.address;
kvm_x86_ops->get_gdt(vcpu, &dt);
sregs->gdt.limit = dt.size;
sregs->gdt.base = dt.address;
sregs->cr0 = kvm_read_cr0(vcpu);
sregs->cr2 = vcpu->arch.cr2;
sregs->cr3 = vcpu->arch.cr3;
sregs->cr4 = kvm_read_cr4(vcpu);
sregs->cr8 = kvm_get_cr8(vcpu);
sregs->efer = vcpu->arch.efer;
sregs->apic_base = kvm_get_apic_base(vcpu);
memset(sregs->interrupt_bitmap, 0, sizeof sregs->interrupt_bitmap);
if (vcpu->arch.interrupt.pending && !vcpu->arch.interrupt.soft)
set_bit(vcpu->arch.interrupt.nr,
(unsigned long *)sregs->interrupt_bitmap);
return 0;
}
|
Safe
|
[
"CWE-200"
] |
kvm
|
831d9d02f9522e739825a51a11e3bc5aa531a905
|
1.4551822881507596e+38
| 38 |
KVM: x86: fix information leak to userland
Structures kvm_vcpu_events, kvm_debugregs, kvm_pit_state2 and
kvm_clock_data are copied to userland with some padding and reserved
fields unitialized. It leads to leaking of contents of kernel stack
memory. We have to initialize them to zero.
In patch v1 Jan Kiszka suggested to fill reserved fields with zeros
instead of memset'ting the whole struct. It makes sense as these
fields are explicitly marked as padding. No more fields need zeroing.
KVM-Stable-Tag.
Signed-off-by: Vasiliy Kulikov <segooon@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| 0 |
ZEND_API void multi_convert_to_double_ex(int argc, ...) /* {{{ */
{
zval *arg;
va_list ap;
va_start(ap, argc);
while (argc--) {
arg = va_arg(ap, zval *);
convert_to_double_ex(arg);
}
va_end(ap);
}
|
Safe
|
[
"CWE-787"
] |
php-src
|
f1ce8d5f5839cb2069ea37ff424fb96b8cd6932d
|
1.8771989190409137e+38
| 14 |
Fix #73122: Integer Overflow when concatenating strings
We must avoid integer overflows in memory allocations, so we introduce
an additional check in the VM, and bail out in the rare case of an
overflow. Since the recent fix for bug #74960 still doesn't catch all
possible overflows, we fix that right away.
| 0 |
CBINDInstallDlg::UpdateService(CString StartName) {
SC_HANDLE hSCManager;
SC_HANDLE hService;
if(m_toolsOnly)
return;
SetCurrent(IDS_OPEN_SCM);
hSCManager= OpenSCManager(NULL, NULL, SC_MANAGER_ALL_ACCESS);
if (!hSCManager) {
MsgBox(IDS_ERR_OPEN_SCM, GetErrMessage());
return;
}
DWORD dwStart = SERVICE_DEMAND_START;
if (m_autoStart)
dwStart = SERVICE_AUTO_START;
DWORD dwServiceType = SERVICE_WIN32_OWN_PROCESS;
CString namedLoc;
namedLoc.Format("%s\\bin\\named.exe", m_targetDir);
SetCurrent(IDS_OPEN_SERVICE);
hService = OpenService(hSCManager, BIND_SERVICE_NAME,
SERVICE_CHANGE_CONFIG);
if (!hService)
{
MsgBox(IDS_ERR_OPEN_SERVICE, GetErrMessage());
if (hSCManager)
CloseServiceHandle(hSCManager);
return;
} else {
if (ChangeServiceConfig(hService, dwServiceType, dwStart,
SERVICE_ERROR_NORMAL, namedLoc, NULL, NULL, NULL,
StartName, m_accountPassword, BIND_DISPLAY_NAME)
!= TRUE) {
DWORD err = GetLastError();
MsgBox(IDS_ERR_UPDATE_SERVICE, GetErrMessage());
}
}
if (hService)
CloseServiceHandle(hService);
if (hSCManager)
CloseServiceHandle(hSCManager);
SetItemStatus(IDC_REG_SERVICE);
}
|
Vulnerable
|
[
"CWE-284"
] |
bind9
|
967a3b9419a3c12b8c0870c86d1ee3840bcbbad7
|
1.8455422412040354e+38
| 50 |
[master] quote service registry paths
4532. [security] The BIND installer on Windows used an unquoted
service path, which can enable privilege escalation.
(CVE-2017-3141) [RT #45229]
| 1 |
SMB_OFF_T smb_vfs_call_lseek(struct vfs_handle_struct *handle,
struct files_struct *fsp, SMB_OFF_T offset,
int whence)
{
VFS_FIND(lseek);
return handle->fns->lseek(handle, fsp, offset, whence);
}
|
Safe
|
[
"CWE-22"
] |
samba
|
bd269443e311d96ef495a9db47d1b95eb83bb8f4
|
2.266967618923397e+37
| 7 |
Fix bug 7104 - "wide links" and "unix extensions" are incompatible.
Change parameter "wide links" to default to "no".
Ensure "wide links = no" if "unix extensions = yes" on a share.
Fix man pages to reflect this.
Remove "within share" checks for a UNIX symlink set - even if
widelinks = no. The server will not follow that link anyway.
Correct DEBUG message in check_reduced_name() to add missing "\n"
so it's really clear when a path is being denied as it's outside
the enclosing share path.
Jeremy.
| 0 |
static void lan9118_update(lan9118_state *s)
{
int level;
/* TODO: Implement FIFO level IRQs. */
level = (s->int_sts & s->int_en) != 0;
if (level) {
s->irq_cfg |= IRQ_INT;
} else {
s->irq_cfg &= ~IRQ_INT;
}
if ((s->irq_cfg & IRQ_EN) == 0) {
level = 0;
}
if ((s->irq_cfg & (IRQ_TYPE | IRQ_POL)) != (IRQ_TYPE | IRQ_POL)) {
/* Interrupt is active low unless we're configured as
* active-high polarity, push-pull type.
*/
level = !level;
}
qemu_set_irq(s->irq, level);
}
|
Safe
|
[
"CWE-835"
] |
qemu
|
37cee01784ff0df13e5209517e1b3594a5e792d1
|
2.6733885804411173e+38
| 22 |
lan9118: switch to use qemu_receive_packet() for loopback
This patch switches to use qemu_receive_packet() which can detect
reentrancy and return early.
This is intended to address CVE-2021-3416.
Cc: Prasad J Pandit <ppandit@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Jason Wang <jasowang@redhat.com>
| 0 |
struct sta_info *sta_info_get_by_idx(struct ieee80211_sub_if_data *sdata,
int idx)
{
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
int i = 0;
list_for_each_entry_rcu(sta, &local->sta_list, list) {
if (sdata != sta->sdata)
continue;
if (i < idx) {
++i;
continue;
}
return sta;
}
return NULL;
}
|
Safe
|
[
"CWE-287"
] |
linux
|
3e493173b7841259a08c5c8e5cbe90adb349da7e
|
1.008092863378692e+38
| 19 |
mac80211: Do not send Layer 2 Update frame before authorization
The Layer 2 Update frame is used to update bridges when a station roams
to another AP even if that STA does not transmit any frames after the
reassociation. This behavior was described in IEEE Std 802.11F-2003 as
something that would happen based on MLME-ASSOCIATE.indication, i.e.,
before completing 4-way handshake. However, this IEEE trial-use
recommended practice document was published before RSN (IEEE Std
802.11i-2004) and as such, did not consider RSN use cases. Furthermore,
IEEE Std 802.11F-2003 was withdrawn in 2006 and as such, has not been
maintained and should not be used anymore.
Sending out the Layer 2 Update frame immediately after association is
fine for open networks (and also when using SAE, FT protocol, or FILS
authentication when the station is actually authenticated by the time
association completes). However, it is not appropriate for cases where
RSN is used with PSK or EAP authentication since the station is actually
fully authenticated only once the 4-way handshake completes after
authentication and attackers might be able to use the unauthenticated
triggering of Layer 2 Update frame transmission to disrupt bridge
behavior.
Fix this by postponing transmission of the Layer 2 Update frame from
station entry addition to the point when the station entry is marked
authorized. Similarly, send out the VLAN binding update only if the STA
entry has already been authorized.
Signed-off-by: Jouni Malinen <jouni@codeaurora.org>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
__zzip_fetch_disk_trailer(int fd, zzip_off_t filesize,
struct _disk_trailer *_zzip_restrict trailer,
zzip_plugin_io_t io)
{
#ifdef DEBUG
#define return(val) { e=val; HINT2("%s", zzip_strerror(e)); goto cleanup; }
#else
#define return(val) { e=val; goto cleanup; }
#endif
register int e;
#ifndef _LOWSTK
auto char buffer[2 * ZZIP_BUFSIZ];
char *buf = buffer;
#else
char *buf = malloc(2 * ZZIP_BUFSIZ);
#endif
zzip_off_t offset = 0;
zzip_ssize_t maplen = 0; /* mmap(),read(),getpagesize() use size_t !! */
char *fd_map = 0;
if (! trailer)
{ return(EINVAL); }
if (filesize < __sizeof(struct zzip_disk_trailer))
{ return(ZZIP_DIR_TOO_SHORT); }
if (! buf)
{ return(ZZIP_OUTOFMEM); }
offset = filesize; /* a.k.a. old offset */
while (1) /* outer loop */
{
register unsigned char *mapped;
if (offset <= 0)
{ return(ZZIP_DIR_EDH_MISSING); }
/* trailer cannot be farther away than 64K from fileend */
if (filesize - offset > 64 * 1024)
{ return(ZZIP_DIR_EDH_MISSING); }
/* the new offset shall overlap with the area after the old offset! */
if (USE_MMAP && io->fd.sys)
{
zzip_off_t mapoff = offset;
{
zzip_ssize_t pagesize = _zzip_getpagesize(io->fd.sys);
if (pagesize < ZZIP_BUFSIZ)
goto non_mmap; /* an error? */
if (mapoff == filesize && filesize > pagesize)
mapoff -= pagesize;
if (mapoff < pagesize)
{
maplen = (zzip_ssize_t) mapoff + pagesize;
mapoff = 0;
} else
{
mapoff -= pagesize;
maplen = 2 * pagesize;
if ((zzip_ssize_t) mapoff & (pagesize - 1))
{ /*only 1. run */
pagesize -= (zzip_ssize_t) mapoff & (pagesize - 1);
mapoff += pagesize;
maplen -= pagesize;
}
}
if (mapoff + maplen > filesize)
maplen = filesize - mapoff;
}
fd_map = _zzip_mmap(io->fd.sys, fd, mapoff, (zzip_size_t) maplen);
if (fd_map == MAP_FAILED)
goto non_mmap;
mapped = (unsigned char *) fd_map;
offset = mapoff; /* success */
HINT3("mapped *%p len=%li", fd_map, (long) maplen);
} else
{
non_mmap:
fd_map = 0; /* have no mmap */
{
zzip_off_t pagesize = ZZIP_BUFSIZ;
if (offset == filesize && filesize > pagesize)
offset -= pagesize;
if (offset < pagesize)
{
maplen = (zzip_ssize_t) offset + pagesize;
offset = 0;
} else
{
offset -= pagesize;
maplen = 2 * pagesize;
if ((zzip_ssize_t) offset & (pagesize - 1))
{ /*on 1st run */
pagesize -= (zzip_ssize_t) offset & (pagesize - 1);
offset += pagesize;
maplen -= pagesize;
}
}
if (offset + maplen > filesize)
maplen = filesize - offset;
}
if (io->fd.seeks(fd, offset, SEEK_SET) < 0)
{ return(ZZIP_DIR_SEEK); }
if (io->fd.read(fd, buf, (zzip_size_t) maplen) < maplen)
{ return(ZZIP_DIR_READ); }
mapped = (unsigned char *) buf; /* success */
HINT5("offs=$%lx len=%li filesize=%li pagesize=%i",
(long) offset, (long) maplen, (long) filesize, ZZIP_BUFSIZ);
}
{ /* now, check for the trailer-magic, hopefully near the end of file */
register unsigned char *end = mapped + maplen;
register unsigned char *tail;
for (tail = end - 1; (tail >= mapped); tail--)
{
if ((*tail == 'P') && /* quick pre-check for trailer magic */
end - tail >= __sizeof(struct zzip_disk_trailer) - 2 &&
zzip_disk_trailer_check_magic(tail))
{
# ifndef ZZIP_DISK64_TRAILER
/* if the file-comment is not present, it happens
that the z_comment field often isn't either */
if (end - tail >= __sizeof(*trailer))
{
memcpy(trailer, tail, sizeof(*trailer));
} else
{
memcpy(trailer, tail, sizeof(*trailer) - 2);
trailer->z_comment[0] = 0;
trailer->z_comment[1] = 0;
}
# else
struct zzip_disk_trailer *orig =
(struct zzip_disk_trailer *) tail;
trailer->zz_tail = tail;
trailer->zz_entries = zzip_disk_trailer_localentries(orig);
trailer->zz_finalentries =
zzip_disk_trailer_finalentries(orig);
trailer->zz_rootseek = zzip_disk_trailer_rootseek(orig);
trailer->zz_rootsize = zzip_disk_trailer_rootsize(orig);
# endif
__fixup_rootseek(offset + tail - mapped, trailer);
/*
* "extract data from files archived in a single zip file."
* So the file offsets must be within the current ZIP archive!
*/
if (trailer->zz_rootseek >= filesize || (trailer->zz_rootseek + trailer->zz_rootsize) >= filesize)
return(ZZIP_CORRUPTED);
{ return(0); }
} else if ((*tail == 'P') &&
end - tail >=
__sizeof(struct zzip_disk64_trailer) - 2
&& zzip_disk64_trailer_check_magic(tail))
{
# ifndef ZZIP_DISK64_TRAILER
return (ZZIP_DIR_LARGEFILE);
# else
struct zzip_disk64_trailer *orig =
(struct zzip_disk64_trailer *) tail;
trailer->zz_tail = tail;
trailer->zz_entries =
zzip_disk64_trailer_localentries(orig);
trailer->zz_finalentries =
zzip_disk64_trailer_finalentries(orig);
trailer->zz_rootseek = zzip_disk64_trailer_rootseek(orig);
trailer->zz_rootsize = zzip_disk64_trailer_rootsize(orig);
/*
* "extract data from files archived in a single zip file."
* So the file offsets must be within the current ZIP archive!
*/
if (trailer->zz_rootseek >= filesize || (trailer->zz_rootseek + trailer->zz_rootsize) >= filesize)
return(ZZIP_CORRUPTED);
{ return(0); }
# endif
}
}
}
if (USE_MMAP && fd_map)
{
HINT3("unmap *%p len=%li", fd_map, (long) maplen);
_zzip_munmap(io->fd.sys, fd_map, (zzip_size_t) maplen);
fd_map = 0;
}
} /*outer loop */
cleanup:
if (USE_MMAP && fd_map)
{
HINT3("unmap *%p len=%li", fd_map, (long) maplen);
_zzip_munmap(io->fd.sys, fd_map, (zzip_size_t) maplen);
}
# ifdef _LOWSTK
free(buf);
# endif
# undef return
return e;
}
|
Vulnerable
|
[
"CWE-770"
] |
zziplib
|
8f48323c181e20b7e527b8be7229d6eb1148ec5f
|
1.2289393554136018e+38
| 202 |
check rootseek and rootsize to be positive #27
| 1 |
snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_info *inf)
{
struct seq_oss_synth *rec;
if (dp->synths[dev].is_midi) {
struct midi_info minf;
snd_seq_oss_midi_make_info(dp, dp->synths[dev].midi_mapped, &minf);
inf->synth_type = SYNTH_TYPE_MIDI;
inf->synth_subtype = 0;
inf->nr_voices = 16;
inf->device = dev;
strlcpy(inf->name, minf.name, sizeof(inf->name));
} else {
if ((rec = get_synthdev(dp, dev)) == NULL)
return -ENXIO;
inf->synth_type = rec->synth_type;
inf->synth_subtype = rec->synth_subtype;
inf->nr_voices = rec->nr_voices;
inf->device = dev;
strlcpy(inf->name, rec->name, sizeof(inf->name));
snd_use_lock_free(&rec->use_lock);
}
return 0;
}
|
Vulnerable
|
[
"CWE-200"
] |
linux-2.6
|
82e68f7ffec3800425f2391c8c86277606860442
|
2.161668561094385e+38
| 24 |
sound: ensure device number is valid in snd_seq_oss_synth_make_info
snd_seq_oss_synth_make_info() incorrectly reports information
to userspace without first checking for the validity of the
device number, leading to possible information leak (CVE-2008-3272).
Reported-By: Tobias Klein <tk@trapkit.de>
Acked-and-tested-by: Takashi Iwai <tiwai@suse.de>
Cc: stable@kernel.org
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 1 |
static int dvb_frontend_ioctl(struct file *file, unsigned int cmd, void *parg)
{
struct dvb_device *dvbdev = file->private_data;
struct dvb_frontend *fe = dvbdev->priv;
struct dvb_frontend_private *fepriv = fe->frontend_priv;
int err;
dev_dbg(fe->dvb->device, "%s: (%d)\n", __func__, _IOC_NR(cmd));
if (down_interruptible(&fepriv->sem))
return -ERESTARTSYS;
if (fe->exit != DVB_FE_NO_EXIT) {
up(&fepriv->sem);
return -ENODEV;
}
/*
* If the frontend is opened in read-only mode, only the ioctls
* that don't interfere with the tune logic should be accepted.
* That allows an external application to monitor the DVB QoS and
* statistics parameters.
*
* That matches all _IOR() ioctls, except for two special cases:
* - FE_GET_EVENT is part of the tuning logic on a DVB application;
* - FE_DISEQC_RECV_SLAVE_REPLY is part of DiSEqC 2.0
* setup
* So, those two ioctls should also return -EPERM, as otherwise
* reading from them would interfere with a DVB tune application
*/
if ((file->f_flags & O_ACCMODE) == O_RDONLY
&& (_IOC_DIR(cmd) != _IOC_READ
|| cmd == FE_GET_EVENT
|| cmd == FE_DISEQC_RECV_SLAVE_REPLY)) {
up(&fepriv->sem);
return -EPERM;
}
err = dvb_frontend_handle_ioctl(file, cmd, parg);
up(&fepriv->sem);
return err;
}
|
Safe
|
[
"CWE-416"
] |
linux
|
b1cb7372fa822af6c06c8045963571d13ad6348b
|
6.416317698923954e+37
| 42 |
dvb_frontend: don't use-after-free the frontend struct
dvb_frontend_invoke_release() may free the frontend struct.
So, the free logic can't update it anymore after calling it.
That's OK, as __dvb_frontend_free() is called only when the
krefs are zeroed, so nobody is using it anymore.
That should fix the following KASAN error:
The KASAN report looks like this (running on kernel 3e0cc09a3a2c40ec1ffb6b4e12da86e98feccb11 (4.14-rc5+)):
==================================================================
BUG: KASAN: use-after-free in __dvb_frontend_free+0x113/0x120
Write of size 8 at addr ffff880067d45a00 by task kworker/0:1/24
CPU: 0 PID: 24 Comm: kworker/0:1 Not tainted 4.14.0-rc5-43687-g06ab8a23e0e6 #545
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: usb_hub_wq hub_event
Call Trace:
__dump_stack lib/dump_stack.c:16
dump_stack+0x292/0x395 lib/dump_stack.c:52
print_address_description+0x78/0x280 mm/kasan/report.c:252
kasan_report_error mm/kasan/report.c:351
kasan_report+0x23d/0x350 mm/kasan/report.c:409
__asan_report_store8_noabort+0x1c/0x20 mm/kasan/report.c:435
__dvb_frontend_free+0x113/0x120 drivers/media/dvb-core/dvb_frontend.c:156
dvb_frontend_put+0x59/0x70 drivers/media/dvb-core/dvb_frontend.c:176
dvb_frontend_detach+0x120/0x150 drivers/media/dvb-core/dvb_frontend.c:2803
dvb_usb_adapter_frontend_exit+0xd6/0x160 drivers/media/usb/dvb-usb/dvb-usb-dvb.c:340
dvb_usb_adapter_exit drivers/media/usb/dvb-usb/dvb-usb-init.c:116
dvb_usb_exit+0x9b/0x200 drivers/media/usb/dvb-usb/dvb-usb-init.c:132
dvb_usb_device_exit+0xa5/0xf0 drivers/media/usb/dvb-usb/dvb-usb-init.c:295
usb_unbind_interface+0x21c/0xa90 drivers/usb/core/driver.c:423
__device_release_driver drivers/base/dd.c:861
device_release_driver_internal+0x4f1/0x5c0 drivers/base/dd.c:893
device_release_driver+0x1e/0x30 drivers/base/dd.c:918
bus_remove_device+0x2f4/0x4b0 drivers/base/bus.c:565
device_del+0x5c4/0xab0 drivers/base/core.c:1985
usb_disable_device+0x1e9/0x680 drivers/usb/core/message.c:1170
usb_disconnect+0x260/0x7a0 drivers/usb/core/hub.c:2124
hub_port_connect drivers/usb/core/hub.c:4754
hub_port_connect_change drivers/usb/core/hub.c:5009
port_event drivers/usb/core/hub.c:5115
hub_event+0x1318/0x3740 drivers/usb/core/hub.c:5195
process_one_work+0xc73/0x1d90 kernel/workqueue.c:2119
worker_thread+0x221/0x1850 kernel/workqueue.c:2253
kthread+0x363/0x440 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
Allocated by task 24:
save_stack_trace+0x1b/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459
kasan_kmalloc+0xad/0xe0 mm/kasan/kasan.c:551
kmem_cache_alloc_trace+0x11e/0x2d0 mm/slub.c:2772
kmalloc ./include/linux/slab.h:493
kzalloc ./include/linux/slab.h:666
dtt200u_fe_attach+0x4c/0x110 drivers/media/usb/dvb-usb/dtt200u-fe.c:212
dtt200u_frontend_attach+0x35/0x80 drivers/media/usb/dvb-usb/dtt200u.c:136
dvb_usb_adapter_frontend_init+0x32b/0x660 drivers/media/usb/dvb-usb/dvb-usb-dvb.c:286
dvb_usb_adapter_init drivers/media/usb/dvb-usb/dvb-usb-init.c:86
dvb_usb_init drivers/media/usb/dvb-usb/dvb-usb-init.c:162
dvb_usb_device_init+0xf73/0x17f0 drivers/media/usb/dvb-usb/dvb-usb-init.c:277
dtt200u_usb_probe+0xa1/0xe0 drivers/media/usb/dvb-usb/dtt200u.c:155
usb_probe_interface+0x35d/0x8e0 drivers/usb/core/driver.c:361
really_probe drivers/base/dd.c:413
driver_probe_device+0x610/0xa00 drivers/base/dd.c:557
__device_attach_driver+0x230/0x290 drivers/base/dd.c:653
bus_for_each_drv+0x161/0x210 drivers/base/bus.c:463
__device_attach+0x26b/0x3c0 drivers/base/dd.c:710
device_initial_probe+0x1f/0x30 drivers/base/dd.c:757
bus_probe_device+0x1eb/0x290 drivers/base/bus.c:523
device_add+0xd0b/0x1660 drivers/base/core.c:1835
usb_set_configuration+0x104e/0x1870 drivers/usb/core/message.c:1932
generic_probe+0x73/0xe0 drivers/usb/core/generic.c:174
usb_probe_device+0xaf/0xe0 drivers/usb/core/driver.c:266
really_probe drivers/base/dd.c:413
driver_probe_device+0x610/0xa00 drivers/base/dd.c:557
__device_attach_driver+0x230/0x290 drivers/base/dd.c:653
bus_for_each_drv+0x161/0x210 drivers/base/bus.c:463
__device_attach+0x26b/0x3c0 drivers/base/dd.c:710
device_initial_probe+0x1f/0x30 drivers/base/dd.c:757
bus_probe_device+0x1eb/0x290 drivers/base/bus.c:523
device_add+0xd0b/0x1660 drivers/base/core.c:1835
usb_new_device+0x7b8/0x1020 drivers/usb/core/hub.c:2457
hub_port_connect drivers/usb/core/hub.c:4903
hub_port_connect_change drivers/usb/core/hub.c:5009
port_event drivers/usb/core/hub.c:5115
hub_event+0x194d/0x3740 drivers/usb/core/hub.c:5195
process_one_work+0xc73/0x1d90 kernel/workqueue.c:2119
worker_thread+0x221/0x1850 kernel/workqueue.c:2253
kthread+0x363/0x440 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
Freed by task 24:
save_stack_trace+0x1b/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459
kasan_slab_free+0x72/0xc0 mm/kasan/kasan.c:524
slab_free_hook mm/slub.c:1390
slab_free_freelist_hook mm/slub.c:1412
slab_free mm/slub.c:2988
kfree+0xf6/0x2f0 mm/slub.c:3919
dtt200u_fe_release+0x3c/0x50 drivers/media/usb/dvb-usb/dtt200u-fe.c:202
dvb_frontend_invoke_release.part.13+0x1c/0x30 drivers/media/dvb-core/dvb_frontend.c:2790
dvb_frontend_invoke_release drivers/media/dvb-core/dvb_frontend.c:2789
__dvb_frontend_free+0xad/0x120 drivers/media/dvb-core/dvb_frontend.c:153
dvb_frontend_put+0x59/0x70 drivers/media/dvb-core/dvb_frontend.c:176
dvb_frontend_detach+0x120/0x150 drivers/media/dvb-core/dvb_frontend.c:2803
dvb_usb_adapter_frontend_exit+0xd6/0x160 drivers/media/usb/dvb-usb/dvb-usb-dvb.c:340
dvb_usb_adapter_exit drivers/media/usb/dvb-usb/dvb-usb-init.c:116
dvb_usb_exit+0x9b/0x200 drivers/media/usb/dvb-usb/dvb-usb-init.c:132
dvb_usb_device_exit+0xa5/0xf0 drivers/media/usb/dvb-usb/dvb-usb-init.c:295
usb_unbind_interface+0x21c/0xa90 drivers/usb/core/driver.c:423
__device_release_driver drivers/base/dd.c:861
device_release_driver_internal+0x4f1/0x5c0 drivers/base/dd.c:893
device_release_driver+0x1e/0x30 drivers/base/dd.c:918
bus_remove_device+0x2f4/0x4b0 drivers/base/bus.c:565
device_del+0x5c4/0xab0 drivers/base/core.c:1985
usb_disable_device+0x1e9/0x680 drivers/usb/core/message.c:1170
usb_disconnect+0x260/0x7a0 drivers/usb/core/hub.c:2124
hub_port_connect drivers/usb/core/hub.c:4754
hub_port_connect_change drivers/usb/core/hub.c:5009
port_event drivers/usb/core/hub.c:5115
hub_event+0x1318/0x3740 drivers/usb/core/hub.c:5195
process_one_work+0xc73/0x1d90 kernel/workqueue.c:2119
worker_thread+0x221/0x1850 kernel/workqueue.c:2253
kthread+0x363/0x440 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
The buggy address belongs to the object at ffff880067d45500
which belongs to the cache kmalloc-2048 of size 2048
The buggy address is located 1280 bytes inside of
2048-byte region [ffff880067d45500, ffff880067d45d00)
The buggy address belongs to the page:
page:ffffea00019f5000 count:1 mapcount:0 mapping: (null)
index:0x0 compound_mapcount: 0
flags: 0x100000000008100(slab|head)
raw: 0100000000008100 0000000000000000 0000000000000000 00000001000f000f
raw: dead000000000100 dead000000000200 ffff88006c002d80 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff880067d45900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880067d45980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880067d45a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff880067d45a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff880067d45b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
Fixes: ead666000a5f ("media: dvb_frontend: only use kref after initialized")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Suggested-by: Matthias Schwarzott <zzam@gentoo.org>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
| 0 |
static int ath6kl_wmi_delba_req_event_rx(struct wmi *wmi, u8 *datap, int len,
struct ath6kl_vif *vif)
{
struct wmi_delba_event *cmd = (struct wmi_delba_event *) datap;
aggr_recv_delba_req_evt(vif, cmd->tid);
return 0;
}
|
Safe
|
[
"CWE-125"
] |
linux
|
5d6751eaff672ea77642e74e92e6c0ac7f9709ab
|
2.496067166921692e+38
| 9 |
ath6kl: add some bounds checking
The "ev->traffic_class" and "reply->ac" variables come from the network
and they're used as an offset into the wmi->stream_exist_for_ac[] array.
Those variables are u8 so they can be 0-255 but the stream_exist_for_ac[]
array only has WMM_NUM_AC (4) elements. We need to add a couple bounds
checks to prevent array overflows.
I also modified one existing check from "if (traffic_class > 3) {" to
"if (traffic_class >= WMM_NUM_AC) {" just to make them all consistent.
Fixes: bdcd81707973 (" Add ath6kl cleaned up driver")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
| 0 |