func (string) | target (string) | cwe (list) | project (string) | commit_id (string) | hash (string) | size (int64) | message (string) | vul (int64)
---|---|---|---|---|---|---|---|---|
int handle_data(bufferlist& bl, off_t bl_ofs, off_t bl_len) override {
return op->get_data_cb(bl, bl_ofs, bl_len);
}
|
Safe
|
[
"CWE-770"
] |
ceph
|
ab29bed2fc9f961fe895de1086a8208e21ddaddc
|
2.052877963499978e+38
| 3 |
rgw: fix issues with 'enforce bounds' patch
The patch to enforce bounds on max-keys/max-uploads/max-parts had a few
issues that would prevent us from compiling it. Instead of changing the
code provided by the submitter, we're addressing them in a separate
commit to maintain the DCO.
Signed-off-by: Joao Eduardo Luis <joao@suse.de>
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 29bc434a6a81a2e5c5b8cfc4c8d5c82ca5bf538a)
mimic specific fixes:
As the largish g_conf() change from master isn't in mimic yet, use the g_conf
global structure; also make rgw_op use the value from the req_info ceph context,
as we do for all requests
| 0 |
static inline struct task_struct *alloc_task_struct_node(int node)
{
return kmem_cache_alloc_node(task_struct_cachep, GFP_KERNEL, node);
}
|
Safe
|
[
"CWE-416",
"CWE-703"
] |
linux
|
2b7e8665b4ff51c034c55df3cff76518d1a9ee3a
|
3.0625450243197753e+38
| 4 |
fork: fix incorrect fput of ->exe_file causing use-after-free
Commit 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for
write killable") made it possible to kill a forking task while it is
waiting to acquire its ->mmap_sem for write, in dup_mmap().
However, it was overlooked that this introduced a new error path before
a reference is taken on the mm_struct's ->exe_file. Since the
->exe_file of the new mm_struct was already set to the old ->exe_file by
the memcpy() in dup_mm(), it was possible for the mmput() in the error
path of dup_mm() to drop a reference to ->exe_file which was never
taken.
This caused the struct file to later be freed prematurely.
Fix it by updating mm_init() to NULL out the ->exe_file, in the same
place it clears other things like the list of mmaps.
This bug was found by syzkaller. It can be reproduced using the
following C program:
#define _GNU_SOURCE
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>
static void *mmap_thread(void *_arg)
{
for (;;) {
mmap(NULL, 0x1000000, PROT_READ,
MAP_POPULATE|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
}
}
static void *fork_thread(void *_arg)
{
usleep(rand() % 10000);
fork();
}
int main(void)
{
fork();
fork();
fork();
for (;;) {
if (fork() == 0) {
pthread_t t;
pthread_create(&t, NULL, mmap_thread, NULL);
pthread_create(&t, NULL, fork_thread, NULL);
usleep(rand() % 10000);
syscall(__NR_exit_group, 0);
}
wait(NULL);
}
}
No special kernel config options are needed. It usually causes a NULL
pointer dereference in __remove_shared_vm_struct() during exit, or in
dup_mmap() (which is usually inlined into copy_process()) during fork.
Both are due to a vm_area_struct's ->vm_file being used after it's
already been freed.
Google Bug Id: 64772007
Link: http://lkml.kernel.org/r/20170823211408.31198-1-ebiggers3@gmail.com
Fixes: 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for write killable")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org> [v4.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
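The fix pattern this message describes (clearing a pointer that was blindly inherited via memcpy() so an error path cannot drop a reference it never took) can be shown as a small standalone sketch; the struct and function names below are hypothetical stand-ins for mm_struct, mm_init() and dup_mm(), not the kernel code itself.

```c
#include <stdlib.h>
#include <string.h>

struct file_ref { int refcount; };        /* stand-in for ->exe_file */

struct mm_sketch {
    struct file_ref *exe_file;
    /* ... many other fields copied wholesale by memcpy() ... */
};

/* Analogue of mm_init(): NULL out pointers inherited from the parent so
 * a later cleanup path never "puts" a reference that was never taken. */
static int mm_init_sketch(struct mm_sketch *mm)
{
    mm->exe_file = NULL;       /* the fix: clear before any error exit */
    /* ... further initialization that may fail ... */
    return 0;
}

static struct mm_sketch *dup_mm_sketch(const struct mm_sketch *oldmm)
{
    struct mm_sketch *mm = malloc(sizeof(*mm));
    if (!mm)
        return NULL;
    memcpy(mm, oldmm, sizeof(*mm));  /* copies a stale exe_file pointer */
    if (mm_init_sketch(mm) != 0) {   /* error path now cleans up safely */
        free(mm);
        return NULL;
    }
    return mm;
}
```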
static void sctp_send_stale_cookie_err(const struct sctp_endpoint *ep,
const struct sctp_association *asoc,
const struct sctp_chunk *chunk,
sctp_cmd_seq_t *commands,
struct sctp_chunk *err_chunk)
{
struct sctp_packet *packet;
if (err_chunk) {
packet = sctp_ootb_pkt_new(asoc, chunk);
if (packet) {
struct sctp_signed_cookie *cookie;
/* Override the OOTB vtag from the cookie. */
cookie = chunk->subh.cookie_hdr;
packet->vtag = cookie->c.peer_vtag;
/* Set the skb to the belonging sock for accounting. */
err_chunk->skb->sk = ep->base.sk;
sctp_packet_append_chunk(packet, err_chunk);
sctp_add_cmd_sf(commands, SCTP_CMD_SEND_PKT,
SCTP_PACKET(packet));
SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
} else
sctp_chunk_free (err_chunk);
}
}
|
Safe
|
[] |
linux-2.6
|
7c3ceb4fb9667f34f1599a062efecf4cdc4a4ce5
|
1.539682768513079e+38
| 27 |
[SCTP]: Allow spillover of receive buffer to avoid deadlock.
This patch fixes a deadlock situation in the receive path by allowing
temporary spillover of the receive buffer.
- If the chunk we receive has a tsn that immediately follows the ctsn,
accept it even if we run out of receive buffer space and renege data with
higher TSNs.
- Once we accept one chunk in a packet, accept all the remaining chunks
even if we run out of receive buffer space.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Mark Butler <butlerm@middle.net>
Acked-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
checksum (const unsigned char *p, unsigned int n)
{
u16 a;
for (a=0; n; n-- )
a += *p++;
return a;
}
|
Safe
|
[
"CWE-20"
] |
gnupg
|
2183683bd633818dd031b090b5530951de76f392
|
2.41046755902528e+38
| 8 |
Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <wk@gnupg.org>
| 0 |
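The commit message names the new host2net.h helpers but the row does not show them. Below is a sketch of what such a byte-to-scalar conversion looks like: each byte is widened to an unsigned type before shifting, so "<< 24" can never sign-extend. The body is an illustration consistent with the description, not the verbatim GnuPG source.

```c
#include <stdint.h>

/* Convert 4 big-endian buffer bytes to a u32 without sign extension:
 * every byte is promoted to uint32_t *before* the shift. */
static inline uint32_t buf32_to_u32(const void *buffer)
{
    const unsigned char *p = buffer;
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```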
void smb1cli_tcon_set_id(struct smbXcli_tcon *tcon, uint16_t tcon_id)
{
tcon->is_smb1 = true;
tcon->smb1.tcon_id = tcon_id;
}
|
Safe
|
[
"CWE-20"
] |
samba
|
a819d2b440aafa3138d95ff6e8b824da885a70e9
|
1.3320480346514537e+38
| 5 |
CVE-2015-5296: libcli/smb: make sure we require signing when we demand encryption on a session
BUG: https://bugzilla.samba.org/show_bug.cgi?id=11536
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
| 0 |
static int megasas_mgmt_open(struct inode *inode, struct file *filep)
{
/*
* Allow only those users with admin rights
*/
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
return 0;
}
|
Safe
|
[
"CWE-476"
] |
linux
|
bcf3b67d16a4c8ffae0aa79de5853435e683945c
|
7.050413720688627e+37
| 10 |
scsi: megaraid_sas: return error when create DMA pool failed
when create DMA pool for cmd frames failed, we should return -ENOMEM,
instead of 0.
In some cases, in:
megasas_init_adapter_fusion()
-->megasas_alloc_cmds()
-->megasas_create_frame_pool
create DMA pool failed,
--> megasas_free_cmds() [1]
-->megasas_alloc_cmds_fusion()
failed, then goto fail_alloc_cmds.
-->megasas_free_cmds() [2]
we will call megasas_free_cmds twice: [1] will kfree cmd_list, and
[2] will then use the freed cmd_list. It will cause a problem:
Unable to handle kernel NULL pointer dereference at virtual address
00000000
pgd = ffffffc000f70000
[00000000] *pgd=0000001fbf893003, *pud=0000001fbf893003,
*pmd=0000001fbf894003, *pte=006000006d000707
Internal error: Oops: 96000005 [#1] SMP
Modules linked in:
CPU: 18 PID: 1 Comm: swapper/0 Not tainted
task: ffffffdfb9290000 ti: ffffffdfb923c000 task.ti: ffffffdfb923c000
PC is at megasas_free_cmds+0x30/0x70
LR is at megasas_free_cmds+0x24/0x70
...
Call trace:
[<ffffffc0005b779c>] megasas_free_cmds+0x30/0x70
[<ffffffc0005bca74>] megasas_init_adapter_fusion+0x2f4/0x4d8
[<ffffffc0005b926c>] megasas_init_fw+0x2dc/0x760
[<ffffffc0005b9ab0>] megasas_probe_one+0x3c0/0xcd8
[<ffffffc0004a5abc>] local_pci_probe+0x4c/0xb4
[<ffffffc0004a5c40>] pci_device_probe+0x11c/0x14c
[<ffffffc00053a5e4>] driver_probe_device+0x1ec/0x430
[<ffffffc00053a92c>] __driver_attach+0xa8/0xb0
[<ffffffc000538178>] bus_for_each_dev+0x74/0xc8
[<ffffffc000539e88>] driver_attach+0x28/0x34
[<ffffffc000539a18>] bus_add_driver+0x16c/0x248
[<ffffffc00053b234>] driver_register+0x6c/0x138
[<ffffffc0004a5350>] __pci_register_driver+0x5c/0x6c
[<ffffffc000ce3868>] megasas_init+0xc0/0x1a8
[<ffffffc000082a58>] do_one_initcall+0xe8/0x1ec
[<ffffffc000ca7be8>] kernel_init_freeable+0x1c8/0x284
[<ffffffc0008d90b8>] kernel_init+0x1c/0xe4
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Acked-by: Sumit Saxena <sumit.saxena@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
| 0 |
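A minimal sketch of the two fixes this message implies: propagate -ENOMEM instead of 0 when pool creation fails, and make the free routine idempotent so a second call cannot touch freed memory. Names are hypothetical, not the megaraid_sas code.

```c
#include <errno.h>
#include <stdlib.h>

struct instance_sketch { void **cmd_list; };

/* Idempotent free: resetting the pointer makes a second call harmless. */
static void free_cmds_sketch(struct instance_sketch *inst)
{
    free(inst->cmd_list);
    inst->cmd_list = NULL;
}

static int alloc_cmds_sketch(struct instance_sketch *inst)
{
    inst->cmd_list = calloc(16, sizeof(*inst->cmd_list));
    if (!inst->cmd_list)
        return -ENOMEM;       /* the fix: report failure, don't return 0 */
    return 0;
}
```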
void GrpcHealthCheckerImpl::GrpcActiveHealthCheckSession::onTimeout() {
ENVOY_CONN_LOG(debug, "connection/stream timeout health_flags={}", *client_,
HostUtility::healthFlagsToString(*host_));
expect_reset_ = true;
if (received_no_error_goaway_ || !parent_.reuse_connection_) {
client_->close();
} else {
request_encoder_->getStream().resetStream(Http::StreamResetReason::LocalReset);
}
}
|
Safe
|
[
"CWE-476"
] |
envoy
|
9b1c3962172a972bc0359398af6daa3790bb59db
|
1.4723297758320419e+38
| 10 |
healthcheck: fix grpc inline removal crashes (#749)
Signed-off-by: Matt Klein <mklein@lyft.com>
Signed-off-by: Pradeep Rao <pcrao@google.com>
| 0 |
sctp_disposition_t sctp_sf_do_5_1D_ce(const struct sctp_endpoint *ep,
const struct sctp_association *asoc,
const sctp_subtype_t type, void *arg,
sctp_cmd_seq_t *commands)
{
struct sctp_chunk *chunk = arg;
struct sctp_association *new_asoc;
sctp_init_chunk_t *peer_init;
struct sctp_chunk *repl;
struct sctp_ulpevent *ev, *ai_ev = NULL;
int error = 0;
struct sctp_chunk *err_chk_p;
struct sock *sk;
/* If the packet is an OOTB packet which is temporarily on the
* control endpoint, respond with an ABORT.
*/
if (ep == sctp_sk((sctp_get_ctl_sock()))->ep)
return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
/* Make sure that the COOKIE_ECHO chunk has a valid length.
* In this case, we check that we have enough for at least a
* chunk header. More detailed verification is done
* in sctp_unpack_cookie().
*/
if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
/* If the endpoint is not listening or if the number of associations
* on the TCP-style socket exceed the max backlog, respond with an
* ABORT.
*/
sk = ep->base.sk;
if (!sctp_sstate(sk, LISTENING) ||
(sctp_style(sk, TCP) && sk_acceptq_is_full(sk)))
return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
/* "Decode" the chunk. We have no optional parameters so we
* are in good shape.
*/
chunk->subh.cookie_hdr =
(struct sctp_signed_cookie *)chunk->skb->data;
if (!pskb_pull(chunk->skb, ntohs(chunk->chunk_hdr->length) -
sizeof(sctp_chunkhdr_t)))
goto nomem;
/* 5.1 D) Upon reception of the COOKIE ECHO chunk, Endpoint
* "Z" will reply with a COOKIE ACK chunk after building a TCB
* and moving to the ESTABLISHED state.
*/
new_asoc = sctp_unpack_cookie(ep, asoc, chunk, GFP_ATOMIC, &error,
&err_chk_p);
/* FIXME:
* If the re-build failed, what is the proper error path
* from here?
*
* [We should abort the association. --piggy]
*/
if (!new_asoc) {
/* FIXME: Several errors are possible. A bad cookie should
* be silently discarded, but think about logging it too.
*/
switch (error) {
case -SCTP_IERROR_NOMEM:
goto nomem;
case -SCTP_IERROR_STALE_COOKIE:
sctp_send_stale_cookie_err(ep, asoc, chunk, commands,
err_chk_p);
return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
case -SCTP_IERROR_BAD_SIG:
default:
return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
}
}
/* Delay state machine commands until later.
*
* Re-build the bind address for the association is done in
* the sctp_unpack_cookie() already.
*/
/* This is a brand-new association, so these are not yet side
* effects--it is safe to run them here.
*/
peer_init = &chunk->subh.cookie_hdr->c.peer_init[0];
if (!sctp_process_init(new_asoc, chunk->chunk_hdr->type,
&chunk->subh.cookie_hdr->c.peer_addr,
peer_init, GFP_ATOMIC))
goto nomem_init;
/* SCTP-AUTH: Now that we've populated the required fields in
* sctp_process_init, set up the association shared keys as
* necessary so that we can potentially authenticate the ACK
*/
error = sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC);
if (error)
goto nomem_init;
/* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo
* is supposed to be authenticated and we have to do delayed
* authentication. We've just recreated the association using
* the information in the cookie and now it's much easier to
* do the authentication.
*/
if (chunk->auth_chunk) {
struct sctp_chunk auth;
sctp_ierror_t ret;
/* set-up our fake chunk so that we can process it */
auth.skb = chunk->auth_chunk;
auth.asoc = chunk->asoc;
auth.sctp_hdr = chunk->sctp_hdr;
auth.chunk_hdr = (sctp_chunkhdr_t *)skb_push(chunk->auth_chunk,
sizeof(sctp_chunkhdr_t));
skb_pull(chunk->auth_chunk, sizeof(sctp_chunkhdr_t));
auth.transport = chunk->transport;
ret = sctp_sf_authenticate(ep, new_asoc, type, &auth);
/* We can now safely free the auth_chunk clone */
kfree_skb(chunk->auth_chunk);
if (ret != SCTP_IERROR_NO_ERROR) {
sctp_association_free(new_asoc);
return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
}
}
repl = sctp_make_cookie_ack(new_asoc, chunk);
if (!repl)
goto nomem_init;
/* RFC 2960 5.1 Normal Establishment of an Association
*
* D) IMPLEMENTATION NOTE: An implementation may choose to
* send the Communication Up notification to the SCTP user
* upon reception of a valid COOKIE ECHO chunk.
*/
ev = sctp_ulpevent_make_assoc_change(new_asoc, 0, SCTP_COMM_UP, 0,
new_asoc->c.sinit_num_ostreams,
new_asoc->c.sinit_max_instreams,
NULL, GFP_ATOMIC);
if (!ev)
goto nomem_ev;
/* Sockets API Draft Section 5.3.1.6
* When a peer sends an Adaptation Layer Indication parameter, SCTP
* delivers this notification to inform the application of the
* peer's requested adaptation layer.
*/
if (new_asoc->peer.adaptation_ind) {
ai_ev = sctp_ulpevent_make_adaptation_indication(new_asoc,
GFP_ATOMIC);
if (!ai_ev)
goto nomem_aiev;
}
/* Add all the state machine commands now since we've created
* everything. This way we don't introduce memory corruptions
* during side-effect processing and correctly count established
* associations.
*/
sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));
sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
SCTP_STATE(SCTP_STATE_ESTABLISHED));
SCTP_INC_STATS(SCTP_MIB_CURRESTAB);
SCTP_INC_STATS(SCTP_MIB_PASSIVEESTABS);
sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
if (new_asoc->autoclose)
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
/* This will send the COOKIE ACK */
sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
/* Queue the ASSOC_CHANGE event */
sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
/* Send up the Adaptation Layer Indication event */
if (ai_ev)
sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
SCTP_ULPEVENT(ai_ev));
return SCTP_DISPOSITION_CONSUME;
nomem_aiev:
sctp_ulpevent_free(ev);
nomem_ev:
sctp_chunk_free(repl);
nomem_init:
sctp_association_free(new_asoc);
nomem:
return SCTP_DISPOSITION_NOMEM;
}
|
Safe
|
[
"CWE-20"
] |
linux-2.6
|
ba0166708ef4da7eeb61dd92bbba4d5a749d6561
|
2.7532416593170585e+38
| 199 |
sctp: Fix kernel panic while processing protocol violation parameter
Since calls to sctp_sf_abort_violation() need an 'arg' parameter of type
'struct sctp_chunk', that function reads the chunk type and chunk length from
the chunk_hdr member of the chunk. But sctp_sf_violation_paramlen() is always
called with a 'struct sctp_paramhdr' typed parameter, which is then passed to
sctp_sf_abort_violation(). This may cause a kernel panic.
sctp_sf_violation_paramlen()
|-- sctp_sf_abort_violation()
|-- sctp_make_abort_violation()
This patch fixes the problem. It also fixes two places that called
sctp_sf_violation_paramlen() with the wrong parameter type.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
lldpd_cleanup(struct lldpd *cfg)
{
struct lldpd_hardware *hardware, *hardware_next;
struct lldpd_chassis *chassis, *chassis_next;
log_debug("localchassis", "cleanup all ports");
for (hardware = TAILQ_FIRST(&cfg->g_hardware); hardware != NULL;
hardware = hardware_next) {
hardware_next = TAILQ_NEXT(hardware, h_entries);
if (!hardware->h_flags) {
TRACE(LLDPD_INTERFACES_DELETE(hardware->h_ifname));
TAILQ_REMOVE(&cfg->g_hardware, hardware, h_entries);
lldpd_remote_cleanup(hardware, notify_clients_deletion, 1);
lldpd_hardware_cleanup(cfg, hardware);
} else
lldpd_remote_cleanup(hardware, notify_clients_deletion,
!(hardware->h_flags & IFF_RUNNING));
}
log_debug("localchassis", "cleanup all chassis");
for (chassis = TAILQ_FIRST(&cfg->g_chassis); chassis;
chassis = chassis_next) {
chassis_next = TAILQ_NEXT(chassis, c_entries);
if (chassis->c_refcount == 0) {
TAILQ_REMOVE(&cfg->g_chassis, chassis, c_entries);
lldpd_chassis_cleanup(chassis, 1);
}
}
lldpd_count_neighbors(cfg);
levent_schedule_cleanup(cfg);
}
|
Safe
|
[
"CWE-617",
"CWE-703"
] |
lldpd
|
9221b5c249f9e4843f77c7f888d5705348d179c0
|
1.5842134401099396e+38
| 34 |
protocols: don't use assert on paths that can be reached
Malformed packets should not make lldpd crash. Ensure we can handle them
by not using assert() in this part.
| 0 |
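A sketch of the stated policy (no assert() on packet-reachable paths): treat a short or malformed input as a recoverable decode error. The function and log call are illustrative, not lldpd's actual API.

```c
#include <stdio.h>
#include <stddef.h>

static int decode_tlv_sketch(const unsigned char *pkt, size_t len,
                             size_t need)
{
    if (len < need) {       /* was effectively: assert(len >= need) */
        fprintf(stderr, "malformed packet: have %zu, need %zu bytes\n",
                len, need);
        return -1;          /* recoverable error instead of abort() */
    }
    (void)pkt;              /* ... parse the TLV here ... */
    return 0;
}
```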
static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
struct kvm_segment seg;
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_EFER)
vcpu->arch.efer = vmcs12->host_ia32_efer;
else if (vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE)
vcpu->arch.efer |= (EFER_LMA | EFER_LME);
else
vcpu->arch.efer &= ~(EFER_LMA | EFER_LME);
vmx_set_efer(vcpu, vcpu->arch.efer);
kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
/*
* Note that calling vmx_set_cr0 is important, even if cr0 hasn't
* actually changed, because it depends on the current state of
* fpu_active (which may have changed).
* Note that vmx_set_cr0 refers to efer set above.
*/
vmx_set_cr0(vcpu, vmcs12->host_cr0);
/*
* If we did fpu_activate()/fpu_deactivate() during L2's run, we need
* to apply the same changes to L1's vmcs. We just set cr0 correctly,
* but we also need to update cr0_guest_host_mask and exception_bitmap.
*/
update_exception_bitmap(vcpu);
vcpu->arch.cr0_guest_owned_bits = (vcpu->fpu_active ? X86_CR0_TS : 0);
vmcs_writel(CR0_GUEST_HOST_MASK, ~vcpu->arch.cr0_guest_owned_bits);
/*
* Note that CR4_GUEST_HOST_MASK is already set in the original vmcs01
* (KVM doesn't change it)- no reason to call set_cr4_guest_host_mask();
*/
vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
kvm_set_cr4(vcpu, vmcs12->host_cr4);
nested_ept_uninit_mmu_context(vcpu);
kvm_set_cr3(vcpu, vmcs12->host_cr3);
kvm_mmu_reset_context(vcpu);
if (!enable_ept)
vcpu->arch.walk_mmu->inject_page_fault = kvm_inject_page_fault;
if (enable_vpid) {
/*
* Trivially support vpid by letting L2s share their parent
* L1's vpid. TODO: move to a more elaborate solution, giving
* each L2 its own vpid and exposing the vpid feature to L1.
*/
vmx_flush_tlb(vcpu);
}
vmcs_write32(GUEST_SYSENTER_CS, vmcs12->host_ia32_sysenter_cs);
vmcs_writel(GUEST_SYSENTER_ESP, vmcs12->host_ia32_sysenter_esp);
vmcs_writel(GUEST_SYSENTER_EIP, vmcs12->host_ia32_sysenter_eip);
vmcs_writel(GUEST_IDTR_BASE, vmcs12->host_idtr_base);
vmcs_writel(GUEST_GDTR_BASE, vmcs12->host_gdtr_base);
/* If not VM_EXIT_CLEAR_BNDCFGS, the L2 value propagates to L1. */
if (vmcs12->vm_exit_controls & VM_EXIT_CLEAR_BNDCFGS)
vmcs_write64(GUEST_BNDCFGS, 0);
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PAT) {
vmcs_write64(GUEST_IA32_PAT, vmcs12->host_ia32_pat);
vcpu->arch.pat = vmcs12->host_ia32_pat;
}
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL,
vmcs12->host_ia32_perf_global_ctrl);
/* Set L1 segment info according to Intel SDM
27.5.2 Loading Host Segment and Descriptor-Table Registers */
seg = (struct kvm_segment) {
.base = 0,
.limit = 0xFFFFFFFF,
.selector = vmcs12->host_cs_selector,
.type = 11,
.present = 1,
.s = 1,
.g = 1
};
if (vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE)
seg.l = 1;
else
seg.db = 1;
vmx_set_segment(vcpu, &seg, VCPU_SREG_CS);
seg = (struct kvm_segment) {
.base = 0,
.limit = 0xFFFFFFFF,
.type = 3,
.present = 1,
.s = 1,
.db = 1,
.g = 1
};
seg.selector = vmcs12->host_ds_selector;
vmx_set_segment(vcpu, &seg, VCPU_SREG_DS);
seg.selector = vmcs12->host_es_selector;
vmx_set_segment(vcpu, &seg, VCPU_SREG_ES);
seg.selector = vmcs12->host_ss_selector;
vmx_set_segment(vcpu, &seg, VCPU_SREG_SS);
seg.selector = vmcs12->host_fs_selector;
seg.base = vmcs12->host_fs_base;
vmx_set_segment(vcpu, &seg, VCPU_SREG_FS);
seg.selector = vmcs12->host_gs_selector;
seg.base = vmcs12->host_gs_base;
vmx_set_segment(vcpu, &seg, VCPU_SREG_GS);
seg = (struct kvm_segment) {
.base = vmcs12->host_tr_base,
.limit = 0x67,
.selector = vmcs12->host_tr_selector,
.type = 11,
.present = 1
};
vmx_set_segment(vcpu, &seg, VCPU_SREG_TR);
kvm_set_dr(vcpu, 7, 0x400);
vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
if (cpu_has_vmx_msr_bitmap())
vmx_set_msr_bitmap(vcpu);
if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
vmcs12->vm_exit_msr_load_count))
nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_MSR_FAIL);
}
|
Safe
|
[
"CWE-284",
"CWE-264"
] |
linux
|
3ce424e45411cf5a13105e0386b6ecf6eeb4f66f
|
1.180620257691948e+38
| 131 |
kvm:vmx: more complete state update on APICv on/off
The function to update APICv on/off state (in particular, to deactivate
it when enabling Hyper-V SynIC) is incomplete: it doesn't adjust
APICv-related fields among secondary processor-based VM-execution
controls. As a result, Windows 2012 guests get stuck when SynIC-based
auto-EOI interrupt intersected with e.g. an IPI in the guest.
In addition, the MSR intercept bitmap isn't updated every time "virtualize
x2APIC mode" is toggled. This path can only be triggered by a malicious
guest, because Windows didn't use x2APIC but rather their own synthetic
APIC access MSRs; however a guest running in a SynIC-enabled VM could
switch to x2APIC and thus obtain direct access to host APIC MSRs
(CVE-2016-4440).
The patch fixes those omissions.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Reported-by: Steve Rutherford <srutherford@google.com>
Reported-by: Yang Zhang <yang.zhang.wz@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
GBool isGray() { return gray; }
|
Safe
|
[] |
poppler
|
abf167af8b15e5f3b510275ce619e6fdb42edd40
|
1.1300751587475614e+38
| 1 |
Implement tiling/patterns in SplashOutputDev
Fixes bug 13518
| 0 |
bool readNumber( N* out ) {
if ( ( _position + sizeof(N) ) > _maxLength )
return false;
if ( out ) {
const N* temp = reinterpret_cast<const N*>(_buffer + _position);
*out = *temp;
}
_position += sizeof(N);
return true;
}
|
Safe
|
[
"CWE-20"
] |
mongo
|
6889d1658136c753998b4a408dc8d1a3ec28e3b9
|
5.998448021447996e+37
| 10 |
SERVER-7769 - fast bson validate
| 0 |
static int mac80211_hwsim_get_et_sset_count(struct ieee80211_hw *hw,
struct ieee80211_vif *vif, int sset)
{
if (sset == ETH_SS_STATS)
return MAC80211_HWSIM_SSTATS_LEN;
return 0;
}
|
Safe
|
[
"CWE-703",
"CWE-772"
] |
linux
|
0ddcff49b672239dda94d70d0fcf50317a9f4b51
|
1.50298309446228e+38
| 7 |
mac80211_hwsim: fix possible memory leak in hwsim_new_radio_nl()
'hwname' is malloced in hwsim_new_radio_nl() and should be freed
before leaving from the error handling cases, otherwise it will cause
memory leak.
Fixes: ff4dd73dd2b4 ("mac80211_hwsim: check HWSIM_ATTR_RADIO_NAME length")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| 0 |
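The leak pattern this message describes, shown as a standalone sketch: a string duplicated early must be released on every error exit after it. strdup/free stand in for the kernel's kstrdup/kfree, and the function name is hypothetical.

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

static int new_radio_sketch(const char *attr, int registration_fails)
{
    char *hwname = strdup(attr);      /* kstrdup() in the kernel */
    if (!hwname)
        return -ENOMEM;
    if (registration_fails) {
        free(hwname);                 /* the fix: free on error paths */
        return -EINVAL;
    }
    /* on success the real code hands hwname to the new radio; this
     * demo simply releases it again */
    free(hwname);
    return 0;
}
```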
static void opj_j2k_read_int16_to_float(const void * p_src_data,
void * p_dest_data, OPJ_UINT32 p_nb_elem)
{
OPJ_BYTE * l_src_data = (OPJ_BYTE *) p_src_data;
OPJ_FLOAT32 * l_dest_data = (OPJ_FLOAT32 *) p_dest_data;
OPJ_UINT32 i;
OPJ_UINT32 l_temp;
for (i = 0; i < p_nb_elem; ++i) {
opj_read_bytes(l_src_data, &l_temp, 2);
l_src_data += sizeof(OPJ_INT16);
*(l_dest_data++) = (OPJ_FLOAT32) l_temp;
}
}
|
Safe
|
[
"CWE-416",
"CWE-787"
] |
openjpeg
|
4241ae6fbbf1de9658764a80944dc8108f2b4154
|
1.588439994702003e+38
| 16 |
Fix assertion in debug mode / heap-based buffer overflow in opj_write_bytes_LE for Cinema profiles with numresolutions = 1 (#985)
| 0 |
cgi_initialize_string(const char *data) /* I - Form data string */
{
int done; /* True if we're done reading a form variable */
char *s, /* Pointer to current form string */
ch, /* Temporary character */
name[255], /* Name of form variable */
value[65536]; /* Variable value */
/*
* Check input...
*/
if (data == NULL)
return (0);
/*
* Loop until we've read all the form data...
*/
while (*data != '\0')
{
/*
* Get the variable name...
*/
for (s = name; *data != '\0'; data ++)
if (*data == '=')
break;
else if (*data >= ' ' && s < (name + sizeof(name) - 1))
*s++ = *data;
*s = '\0';
if (*data == '=')
data ++;
else
return (0);
/*
* Read the variable value...
*/
for (s = value, done = 0; !done && *data != '\0'; data ++)
switch (*data)
{
case '&' : /* End of data... */
done = 1;
break;
case '+' : /* Escaped space character */
if (s < (value + sizeof(value) - 1))
*s++ = ' ';
break;
case '%' : /* Escaped control character */
/*
* Read the hex code...
*/
if (!isxdigit(data[1] & 255) || !isxdigit(data[2] & 255))
return (0);
if (s < (value + sizeof(value) - 1))
{
data ++;
ch = *data - '0';
if (ch > 9)
ch -= 7;
*s = (char)(ch << 4);
data ++;
ch = *data - '0';
if (ch > 9)
ch -= 7;
*s++ |= ch;
}
else
data += 2;
break;
default : /* Other characters come straight through */
if (*data >= ' ' && s < (value + sizeof(value) - 1))
*s++ = *data;
break;
}
*s = '\0'; /* nul terminate the string */
/*
* Remove trailing whitespace...
*/
if (s > value)
s --;
while (s >= value && isspace(*s & 255))
*s-- = '\0';
/*
* Add the string to the variable "database"...
*/
if ((s = strrchr(name, '-')) != NULL && isdigit(s[1] & 255))
{
*s++ = '\0';
if (value[0])
cgiSetArray(name, atoi(s) - 1, value);
}
else if (cgiGetVariable(name) != NULL)
cgiSetArray(name, cgiGetSize(name), value);
else
cgiSetVariable(name, value);
}
return (1);
}
|
Safe
|
[] |
cups
|
b9ff93ce913ff633a3f667317e5a81fa7fe0d5d3
|
1.8961812402104174e+38
| 116 |
CVE-2018-4700: Linux session cookies used a predictable random number seed.
| 0 |
megasas_mgmt_compat_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
switch (cmd) {
case MEGASAS_IOC_FIRMWARE32:
return megasas_mgmt_compat_ioctl_fw(file, arg);
case MEGASAS_IOC_GET_AEN:
return megasas_mgmt_ioctl_aen(file, arg);
}
return -ENOTTY;
}
|
Safe
|
[
"CWE-476"
] |
linux
|
bcf3b67d16a4c8ffae0aa79de5853435e683945c
|
3.3630031346007478e+38
| 12 |
scsi: megaraid_sas: return error when create DMA pool failed
when create DMA pool for cmd frames failed, we should return -ENOMEM,
instead of 0.
In some cases, in:
megasas_init_adapter_fusion()
-->megasas_alloc_cmds()
-->megasas_create_frame_pool
create DMA pool failed,
--> megasas_free_cmds() [1]
-->megasas_alloc_cmds_fusion()
failed, then goto fail_alloc_cmds.
-->megasas_free_cmds() [2]
we will call megasas_free_cmds twice: [1] will kfree cmd_list, and
[2] will then use the freed cmd_list. It will cause a problem:
Unable to handle kernel NULL pointer dereference at virtual address
00000000
pgd = ffffffc000f70000
[00000000] *pgd=0000001fbf893003, *pud=0000001fbf893003,
*pmd=0000001fbf894003, *pte=006000006d000707
Internal error: Oops: 96000005 [#1] SMP
Modules linked in:
CPU: 18 PID: 1 Comm: swapper/0 Not tainted
task: ffffffdfb9290000 ti: ffffffdfb923c000 task.ti: ffffffdfb923c000
PC is at megasas_free_cmds+0x30/0x70
LR is at megasas_free_cmds+0x24/0x70
...
Call trace:
[<ffffffc0005b779c>] megasas_free_cmds+0x30/0x70
[<ffffffc0005bca74>] megasas_init_adapter_fusion+0x2f4/0x4d8
[<ffffffc0005b926c>] megasas_init_fw+0x2dc/0x760
[<ffffffc0005b9ab0>] megasas_probe_one+0x3c0/0xcd8
[<ffffffc0004a5abc>] local_pci_probe+0x4c/0xb4
[<ffffffc0004a5c40>] pci_device_probe+0x11c/0x14c
[<ffffffc00053a5e4>] driver_probe_device+0x1ec/0x430
[<ffffffc00053a92c>] __driver_attach+0xa8/0xb0
[<ffffffc000538178>] bus_for_each_dev+0x74/0xc8
[<ffffffc000539e88>] driver_attach+0x28/0x34
[<ffffffc000539a18>] bus_add_driver+0x16c/0x248
[<ffffffc00053b234>] driver_register+0x6c/0x138
[<ffffffc0004a5350>] __pci_register_driver+0x5c/0x6c
[<ffffffc000ce3868>] megasas_init+0xc0/0x1a8
[<ffffffc000082a58>] do_one_initcall+0xe8/0x1ec
[<ffffffc000ca7be8>] kernel_init_freeable+0x1c8/0x284
[<ffffffc0008d90b8>] kernel_init+0x1c/0xe4
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Acked-by: Sumit Saxena <sumit.saxena@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
| 0 |
dirserv_expire_measured_bw_cache(time_t now)
{
if (mbw_cache) {
/* Iterate through the cache and check each entry */
DIGESTMAP_FOREACH_MODIFY(mbw_cache, k, mbw_cache_entry_t *, e) {
if (now > e->as_of + MAX_MEASUREMENT_AGE) {
tor_free(e);
MAP_DEL_CURRENT(k);
}
} DIGESTMAP_FOREACH_END;
/* Check if we cleared the whole thing and free if so */
if (digestmap_size(mbw_cache) == 0) {
digestmap_free(mbw_cache, tor_free_);
mbw_cache = 0;
}
}
}
|
Safe
|
[] |
tor
|
02e05bd74dbec614397b696cfcda6525562a4675
|
1.348368804519888e+38
| 19 |
When examining descriptors as a dirserver, reject ones with bad versions
This is an extra fix for bug 21278: it ensures that these
descriptors and platforms will never be listed in a legit consensus.
| 0 |
do_add_counters(struct net *net, const void __user *user, unsigned int len,
int compat)
{
unsigned int i, curcpu;
struct xt_counters_info tmp;
struct xt_counters *paddc;
unsigned int num_counters;
char *name;
int size;
void *ptmp;
struct xt_table *t;
const struct xt_table_info *private;
int ret = 0;
const void *loc_cpu_entry;
struct ip6t_entry *iter;
#ifdef CONFIG_COMPAT
struct compat_xt_counters_info compat_tmp;
if (compat) {
ptmp = &compat_tmp;
size = sizeof(struct compat_xt_counters_info);
} else
#endif
{
ptmp = &tmp;
size = sizeof(struct xt_counters_info);
}
if (copy_from_user(ptmp, user, size) != 0)
return -EFAULT;
#ifdef CONFIG_COMPAT
if (compat) {
num_counters = compat_tmp.num_counters;
name = compat_tmp.name;
} else
#endif
{
num_counters = tmp.num_counters;
name = tmp.name;
}
if (len != size + num_counters * sizeof(struct xt_counters))
return -EINVAL;
paddc = vmalloc(len - size);
if (!paddc)
return -ENOMEM;
if (copy_from_user(paddc, user + size, len - size) != 0) {
ret = -EFAULT;
goto free;
}
t = xt_find_table_lock(net, AF_INET6, name);
if (!t || IS_ERR(t)) {
ret = t ? PTR_ERR(t) : -ENOENT;
goto free;
}
local_bh_disable();
private = t->private;
if (private->number != num_counters) {
ret = -EINVAL;
goto unlock_up_free;
}
i = 0;
/* Choose the copy that is on our node */
curcpu = smp_processor_id();
xt_info_wrlock(curcpu);
loc_cpu_entry = private->entries[curcpu];
xt_entry_foreach(iter, loc_cpu_entry, private->size) {
ADD_COUNTER(iter->counters, paddc[i].bcnt, paddc[i].pcnt);
++i;
}
xt_info_wrunlock(curcpu);
unlock_up_free:
local_bh_enable();
xt_table_unlock(t);
module_put(t->me);
free:
vfree(paddc);
return ret;
}
|
Safe
|
[
"CWE-200"
] |
linux-2.6
|
6a8ab060779779de8aea92ce3337ca348f973f54
|
8.474446805638489e+37
| 88 |
ipv6: netfilter: ip6_tables: fix infoleak to userspace
Structures ip6t_replace, compat_ip6t_replace, and xt_get_revision are
copied from userspace. Fields of these structs that are
zero-terminated strings are not checked. When they are used as argument
to a format string containing "%s" in request_module(), some sensitive
information is leaked to userspace via argument of spawned modprobe
process.
The first bug was introduced before the git epoch; the second was
introduced in 3bc3fe5e (v2.6.25-rc1); the third is introduced by
6b7d31fc (v2.6.15-rc1). To trigger the bug one should have
CAP_NET_ADMIN.
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
| 0 |
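A sketch of the class of fix for this infoleak: a fixed-size string field copied wholesale from userspace is not guaranteed to be NUL-terminated, so it must be terminated before it can reach a "%s" format (e.g. in request_module()). The struct is a simplified stand-in for xt_counters_info.

```c
struct counters_info_sketch {
    char name[32];              /* table name copied from userspace */
    unsigned int num_counters;
};

/* Force NUL-termination after a copy_from_user()-style copy, before
 * the name is ever used with "%s". */
static void sanitize_name_sketch(struct counters_info_sketch *tmp)
{
    tmp->name[sizeof(tmp->name) - 1] = '\0';
}
```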
check_flags6(
sockaddr_u *psau,
const char *name,
u_int32 flags6
)
{
#if defined(INCLUDE_IPV6_SUPPORT) && defined(SIOCGIFAFLAG_IN6)
struct in6_ifreq ifr6;
int fd;
if (psau->sa.sa_family != AF_INET6)
return ISC_FALSE;
if ((fd = socket(AF_INET6, SOCK_DGRAM, 0)) < 0)
return ISC_FALSE;
ZERO(ifr6);
memcpy(&ifr6.ifr_addr, &psau->sa6, sizeof(ifr6.ifr_addr));
strlcpy(ifr6.ifr_name, name, sizeof(ifr6.ifr_name));
if (ioctl(fd, SIOCGIFAFLAG_IN6, &ifr6) < 0) {
close(fd);
return ISC_FALSE;
}
close(fd);
if ((ifr6.ifr_ifru.ifru_flags6 & flags6) != 0)
return ISC_TRUE;
#endif /* INCLUDE_IPV6_SUPPORT && SIOCGIFAFLAG_IN6 */
return ISC_FALSE;
}
|
Safe
|
[
"CWE-287"
] |
ntp
|
71a962710bfe066f76da9679cf4cfdeffe34e95e
|
3.1639928129268585e+38
| 27 |
[Sec 2936] Skeleton Key: Any trusted key system can serve time. HStenn.
| 0 |
htmlNodeDumpOutputInternal(xmlSaveCtxtPtr ctxt, xmlNodePtr cur) {
const xmlChar *oldenc = NULL;
const xmlChar *oldctxtenc = ctxt->encoding;
const xmlChar *encoding = ctxt->encoding;
xmlOutputBufferPtr buf = ctxt->buf;
int switched_encoding = 0;
xmlDocPtr doc;
xmlInitParser();
doc = cur->doc;
if (doc != NULL) {
oldenc = doc->encoding;
if (ctxt->encoding != NULL) {
doc->encoding = BAD_CAST ctxt->encoding;
} else if (doc->encoding != NULL) {
encoding = doc->encoding;
}
}
if ((encoding != NULL) && (doc != NULL))
htmlSetMetaEncoding(doc, (const xmlChar *) encoding);
if ((encoding == NULL) && (doc != NULL))
encoding = htmlGetMetaEncoding(doc);
if (encoding == NULL)
encoding = BAD_CAST "HTML";
if ((encoding != NULL) && (oldctxtenc == NULL) &&
(buf->encoder == NULL) && (buf->conv == NULL)) {
if (xmlSaveSwitchEncoding(ctxt, (const char*) encoding) < 0) {
doc->encoding = oldenc;
return(-1);
}
switched_encoding = 1;
}
if (ctxt->options & XML_SAVE_FORMAT)
htmlNodeDumpFormatOutput(buf, doc, cur,
(const char *)encoding, 1);
else
htmlNodeDumpFormatOutput(buf, doc, cur,
(const char *)encoding, 0);
/*
* Restore the state of the saving context at the end of the document
*/
if ((switched_encoding) && (oldctxtenc == NULL)) {
xmlSaveClearEncoding(ctxt);
}
if (doc != NULL)
doc->encoding = oldenc;
return(0);
}
|
Safe
|
[
"CWE-502"
] |
libxml2
|
c97750d11bb8b6f3303e7131fe526a61ac65bcfd
|
8.251047601569189e+35
| 50 |
Avoid an out of bound access when serializing malformed strings
For https://bugzilla.gnome.org/show_bug.cgi?id=766414
* xmlsave.c: in xmlBufAttrSerializeTxtContent(), if an attribute value
is not valid UTF-8, be more careful when serializing it, as we may
otherwise perform an out-of-bounds access.
| 0 |
static UINT rdpei_recv_suspend_touch_pdu(RDPEI_CHANNEL_CALLBACK* callback, wStream* s)
{
RdpeiClientContext* rdpei = (RdpeiClientContext*)callback->plugin->pInterface;
UINT error = CHANNEL_RC_OK;
IFCALLRET(rdpei->SuspendTouch, error, rdpei);
if (error)
WLog_ERR(TAG, "rdpei->SuspendTouch failed with error %" PRIu32 "!", error);
return error;
}
|
Safe
|
[
"CWE-125"
] |
FreeRDP
|
6b485b146a1b9d6ce72dfd7b5f36456c166e7a16
|
6.266996112816199e+37
| 11 |
Fixed oob read in irp_write and similar
| 0 |
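The fix family named in the message, guarding every fixed-size read with a remaining-length check (which FreeRDP does via Stream_GetRemainingLength()), in standalone form; the stream struct and helper below are illustrations, not FreeRDP's API.

```c
#include <stddef.h>
#include <stdint.h>

struct stream_sketch { const uint8_t *p, *end; };

static int read_uint32_checked(struct stream_sketch *s, uint32_t *out)
{
    if ((size_t)(s->end - s->p) < 4)
        return -1;                      /* refuse the out-of-bounds read */
    *out =  (uint32_t)s->p[0]        | ((uint32_t)s->p[1] << 8) |
           ((uint32_t)s->p[2] << 16) | ((uint32_t)s->p[3] << 24);
    s->p += 4;
    return 0;
}
```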
static unsigned int __init pid_entry_nlink(const struct pid_entry *entries,
unsigned int n)
{
unsigned int i;
unsigned int count;
count = 2;
for (i = 0; i < n; ++i) {
if (S_ISDIR(entries[i].mode))
++count;
}
return count;
}
|
Safe
|
[
"CWE-119"
] |
linux
|
7f7ccc2ccc2e70c6054685f5e3522efa81556830
|
1.316248231881583e+38
| 14 |
proc: do not access cmdline nor environ from file-backed areas
proc_pid_cmdline_read() and environ_read() directly access the target
process' VM to retrieve the command line and environment. If this
process remaps these areas onto a file via mmap(), the requesting
process may experience various issues such as extra delays if the
underlying device is slow to respond.
Let's simply refuse to access file-backed areas in these functions.
For this we add a new FOLL_ANON gup flag that is passed to all calls
to access_remote_vm(). The code already takes care of such failures
(including unmapped areas). Accesses via /proc/pid/mem were not
changed though.
This was assigned CVE-2018-1120.
Note for stable backports: the patch may apply to kernels prior to 4.11
but silently miss one location; it must be checked that no call to
access_remote_vm() keeps zero as the last argument.
Reported-by: Qualys Security Advisory <qsa@qualys.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
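A sketch of the FOLL_ANON idea from the message: when the flag is passed, the page walker refuses file-backed areas. The flag value, struct, and check are simplified stand-ins for the kernel's gup code.

```c
#define FOLL_ANON_SKETCH 0x1u           /* hypothetical flag value */

struct vma_sketch { void *vm_file; };   /* non-NULL => file-backed */

static int check_vma_sketch(const struct vma_sketch *vma,
                            unsigned int gup_flags)
{
    if ((gup_flags & FOLL_ANON_SKETCH) && vma->vm_file != NULL)
        return -1;   /* cmdline/environ readers stay off file areas */
    return 0;
}
```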
static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
struct bpf_tramp_progs *tp, int stack_size)
{
int i;
u8 *prog = *pprog;
for (i = 0; i < tp->nr_progs; i++) {
if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
return -EINVAL;
}
*pprog = prog;
return 0;
}
|
Safe
|
[
"CWE-77"
] |
linux
|
e4d4d456436bfb2fe412ee2cd489f7658449b098
|
1.6217679558250608e+38
| 13 |
bpf, x86: Validate computation of branch displacements for x86-64
The branch displacement logic in the BPF JIT compilers for x86 assumes
that, for any generated branch instruction, the distance cannot
increase between optimization passes.
But this assumption can be violated due to how the distances are
computed. Specifically, whenever a backward branch is processed in
do_jit(), the distance is computed by subtracting the positions in the
machine code from different optimization passes. This is because part
of addrs[] is already updated for the current optimization pass, before
the branch instruction is visited.
And so the optimizer can expand blocks of machine code in some cases.
This can confuse the optimizer logic, where it assumes that a fixed
point has been reached for all machine code blocks once the total
program size stops changing. And then the JIT compiler can output
abnormal machine code containing incorrect branch displacements.
To mitigate this issue, we assert that a fixed point is reached while
populating the output image. This rejects any problematic programs.
The issue affects both x86-32 and x86-64. We mitigate separately to
ease backporting.
Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| 0 |
box_above(PG_FUNCTION_ARGS)
{
BOX *box1 = PG_GETARG_BOX_P(0);
BOX *box2 = PG_GETARG_BOX_P(1);
PG_RETURN_BOOL(FPgt(box1->low.y, box2->high.y));
}
|
Safe
|
[
"CWE-703",
"CWE-189"
] |
postgres
|
31400a673325147e1205326008e32135a78b4d8a
|
9.956853943397954e+37
| 7 |
Predict integer overflow to avoid buffer overruns.
Several functions, mostly type input functions, calculated an allocation
size such that the calculation wrapped to a small positive value when
arguments implied a sufficiently-large requirement. Writes past the end
of the inadvertent small allocation followed shortly thereafter.
Coverity identified the path_in() vulnerability; code inspection led to
the rest. In passing, add check_stack_depth() to prevent stack overflow
in related functions.
Back-patch to 8.4 (all supported versions). The non-comment hstore
changes touch code that did not exist in 8.4, so that part stops at 9.0.
Noah Misch and Heikki Linnakangas, reviewed by Tom Lane.
Security: CVE-2014-0064
| 0 |
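The "predict integer overflow" technique the message describes, as a generic sketch: verify that count * elemsize cannot wrap before computing an allocation size. This shows the pattern, not PostgreSQL's exact code.

```c
#include <stdint.h>
#include <stdlib.h>

static void *alloc_array_checked(size_t header, size_t count,
                                 size_t elemsize)
{
    /* would header + count * elemsize wrap around? */
    if (elemsize != 0 && count > (SIZE_MAX - header) / elemsize)
        return NULL;                    /* predicted overflow: refuse */
    return malloc(header + count * elemsize);
}
```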
mwifiex_uap_bss_wpa(u8 **tlv_buf, void *cmd_buf, u16 *param_size)
{
struct host_cmd_tlv_pwk_cipher *pwk_cipher;
struct host_cmd_tlv_gwk_cipher *gwk_cipher;
struct host_cmd_tlv_passphrase *passphrase;
struct host_cmd_tlv_akmp *tlv_akmp;
struct mwifiex_uap_bss_param *bss_cfg = cmd_buf;
u16 cmd_size = *param_size;
u8 *tlv = *tlv_buf;
tlv_akmp = (struct host_cmd_tlv_akmp *)tlv;
tlv_akmp->header.type = cpu_to_le16(TLV_TYPE_UAP_AKMP);
tlv_akmp->header.len = cpu_to_le16(sizeof(struct host_cmd_tlv_akmp) -
sizeof(struct mwifiex_ie_types_header));
tlv_akmp->key_mgmt_operation = cpu_to_le16(bss_cfg->key_mgmt_operation);
tlv_akmp->key_mgmt = cpu_to_le16(bss_cfg->key_mgmt);
cmd_size += sizeof(struct host_cmd_tlv_akmp);
tlv += sizeof(struct host_cmd_tlv_akmp);
if (bss_cfg->wpa_cfg.pairwise_cipher_wpa & VALID_CIPHER_BITMAP) {
pwk_cipher = (struct host_cmd_tlv_pwk_cipher *)tlv;
pwk_cipher->header.type = cpu_to_le16(TLV_TYPE_PWK_CIPHER);
pwk_cipher->header.len =
cpu_to_le16(sizeof(struct host_cmd_tlv_pwk_cipher) -
sizeof(struct mwifiex_ie_types_header));
pwk_cipher->proto = cpu_to_le16(PROTOCOL_WPA);
pwk_cipher->cipher = bss_cfg->wpa_cfg.pairwise_cipher_wpa;
cmd_size += sizeof(struct host_cmd_tlv_pwk_cipher);
tlv += sizeof(struct host_cmd_tlv_pwk_cipher);
}
if (bss_cfg->wpa_cfg.pairwise_cipher_wpa2 & VALID_CIPHER_BITMAP) {
pwk_cipher = (struct host_cmd_tlv_pwk_cipher *)tlv;
pwk_cipher->header.type = cpu_to_le16(TLV_TYPE_PWK_CIPHER);
pwk_cipher->header.len =
cpu_to_le16(sizeof(struct host_cmd_tlv_pwk_cipher) -
sizeof(struct mwifiex_ie_types_header));
pwk_cipher->proto = cpu_to_le16(PROTOCOL_WPA2);
pwk_cipher->cipher = bss_cfg->wpa_cfg.pairwise_cipher_wpa2;
cmd_size += sizeof(struct host_cmd_tlv_pwk_cipher);
tlv += sizeof(struct host_cmd_tlv_pwk_cipher);
}
if (bss_cfg->wpa_cfg.group_cipher & VALID_CIPHER_BITMAP) {
gwk_cipher = (struct host_cmd_tlv_gwk_cipher *)tlv;
gwk_cipher->header.type = cpu_to_le16(TLV_TYPE_GWK_CIPHER);
gwk_cipher->header.len =
cpu_to_le16(sizeof(struct host_cmd_tlv_gwk_cipher) -
sizeof(struct mwifiex_ie_types_header));
gwk_cipher->cipher = bss_cfg->wpa_cfg.group_cipher;
cmd_size += sizeof(struct host_cmd_tlv_gwk_cipher);
tlv += sizeof(struct host_cmd_tlv_gwk_cipher);
}
if (bss_cfg->wpa_cfg.length) {
passphrase = (struct host_cmd_tlv_passphrase *)tlv;
passphrase->header.type =
cpu_to_le16(TLV_TYPE_UAP_WPA_PASSPHRASE);
passphrase->header.len = cpu_to_le16(bss_cfg->wpa_cfg.length);
memcpy(passphrase->passphrase, bss_cfg->wpa_cfg.passphrase,
bss_cfg->wpa_cfg.length);
cmd_size += sizeof(struct mwifiex_ie_types_header) +
bss_cfg->wpa_cfg.length;
tlv += sizeof(struct mwifiex_ie_types_header) +
bss_cfg->wpa_cfg.length;
}
*param_size = cmd_size;
*tlv_buf = tlv;
return;
}
|
Safe
|
[
"CWE-120",
"CWE-787"
] |
linux
|
7caac62ed598a196d6ddf8d9c121e12e082cac3a
|
7.406581751570275e+37
| 72 |
mwifiex: Fix three heap overflow at parsing element in cfg80211_ap_settings
mwifiex_update_vs_ie(), mwifiex_set_uap_rates() and
mwifiex_set_wmm_params() call memcpy() without checking
the destination size. Since the source is given from
user-space, this may trigger a heap buffer overflow.
Fix them by putting the length check before performing memcpy().
This fix addresses CVE-2019-14814, CVE-2019-14815 and CVE-2019-14816.
Signed-off-by: Wen Huang <huangwenabc@gmail.com>
Acked-by: Ganapathi Bhat <gbhat@marvell.comg>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
| 0 |
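The one-line discipline behind all three CVEs, as a sketch: bound a user-controlled length against the destination before memcpy(). The function name and the policy choice (reject rather than truncate) are illustrative.

```c
#include <stddef.h>
#include <string.h>

static int copy_checked(void *dst, size_t dstsz,
                        const void *src, size_t len)
{
    if (len > dstsz)
        return -1;           /* the fix: length check before memcpy() */
    memcpy(dst, src, len);
    return 0;
}
```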
static void update_and_free_page(struct hstate *h, struct page *page)
{
int i;
if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
return;
h->nr_huge_pages--;
h->nr_huge_pages_node[page_to_nid(page)]--;
for (i = 0; i < pages_per_huge_page(h); i++) {
page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
1 << PG_referenced | 1 << PG_dirty |
1 << PG_active | 1 << PG_private |
1 << PG_writeback);
}
VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
set_page_refcounted(page);
if (hstate_is_gigantic(h)) {
/*
* Temporarily drop the hugetlb_lock, because
* we might block in free_gigantic_page().
*/
spin_unlock(&hugetlb_lock);
destroy_compound_gigantic_page(page, huge_page_order(h));
free_gigantic_page(page, huge_page_order(h));
spin_lock(&hugetlb_lock);
} else {
__free_pages(page, huge_page_order(h));
}
}
|
Safe
|
[
"CWE-362"
] |
linux
|
17743798d81238ab13050e8e2833699b54e15467
|
2.2431645943018153e+38
| 32 |
mm/hugetlb: fix a race between hugetlb sysctl handlers
There is a race between the assignment of `table->data` and write value
to the pointer of `table->data` in the __do_proc_doulongvec_minmax() on
the other thread.
CPU0: CPU1:
proc_sys_write
hugetlb_sysctl_handler proc_sys_call_handler
hugetlb_sysctl_handler_common hugetlb_sysctl_handler
table->data = &tmp; hugetlb_sysctl_handler_common
table->data = &tmp;
proc_doulongvec_minmax
do_proc_doulongvec_minmax sysctl_head_finish
__do_proc_doulongvec_minmax unuse_table
i = table->data;
*i = val; // corrupt CPU1's stack
Fix this by duplicating the `table`, and only update the duplicate of
it. And introduce a helper of proc_hugetlb_doulongvec_minmax() to
simplify the code.
The following oops was seen:
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
Code: Bad RIP value.
...
Call Trace:
? set_max_huge_pages+0x3da/0x4f0
? alloc_pool_huge_page+0x150/0x150
? proc_doulongvec_minmax+0x46/0x60
? hugetlb_sysctl_handler_common+0x1c7/0x200
? nr_hugepages_store+0x20/0x20
? copy_fd_bitmaps+0x170/0x170
? hugetlb_sysctl_handler+0x1e/0x20
? proc_sys_call_handler+0x2f1/0x300
? unregister_sysctl_table+0xb0/0xb0
? __fd_install+0x78/0x100
? proc_sys_write+0x14/0x20
? __vfs_write+0x4d/0x90
? vfs_write+0xef/0x240
? ksys_write+0xc0/0x160
? __ia32_sys_read+0x50/0x50
? __close_fd+0x129/0x150
? __x64_sys_write+0x43/0x50
? do_syscall_64+0x6c/0x200
? entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: e5ff215941d5 ("hugetlb: multiple hstates for multiple page sizes")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/20200828031146.43035-1-songmuchun@bytedance.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
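A sketch of the duplication fix from the message: instead of writing through the shared table->data (which races with the other CPU), each call works on a stack-local copy of the ctl_table. The struct here is a minimal stand-in.

```c
struct ctl_table_sketch { void *data; };

static int sysctl_handler_sketch(struct ctl_table_sketch *table)
{
    unsigned long tmp = 0;
    struct ctl_table_sketch dup = *table;  /* per-call duplicate */

    dup.data = &tmp;    /* the shared *table is never modified, so a
                         * concurrent handler cannot see our stack */
    /* ... run the generic min/max handler on &dup, then publish tmp ... */
    return 0;
}
```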
**/
T& atNXY(const int pos, const int x, const int y, const int z=0, const int c=0) {
if (is_empty())
throw CImgInstanceException(_cimglist_instance
"atNXY(): Empty instance.",
cimglist_instance);
return _atNXY(pos,x,y,z,c);
|
Safe
|
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
|
1.2071425397637564e+38
| 8 |
Fix other issues in 'CImg<T>::load_bmp()'.
| 0 |
void stq_le_phys(AddressSpace *as, hwaddr addr, uint64_t val)
{
val = cpu_to_le64(val);
address_space_rw(as, addr, (void *) &val, 8, 1);
}
|
Safe
|
[] |
qemu
|
c3c1bb99d1c11978d9ce94d1bdcf0705378c1459
|
3.2307372523614964e+38
| 5 |
exec: Respect as_translate_internal length clamp
address_space_translate_internal will clamp the *plen length argument
based on the size of the memory region being queried. The iommu walker
logic in address_space_translate was ignoring this by discarding the
post fn call value of *plen. Fix by just always using *plen as the
length argument throughout the fn, removing the len local variable.
This fixes a bootloader bug when a single elf section spans multiple
QEMU memory regions.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-Id: <1426570554-15940-1-git-send-email-peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
static int br_ip6_multicast_query(struct net_bridge *br,
struct net_bridge_port *port,
struct sk_buff *skb)
{
struct ipv6hdr *ip6h = ipv6_hdr(skb);
struct mld_msg *mld = (struct mld_msg *) icmp6_hdr(skb);
struct net_bridge_mdb_entry *mp;
struct mld2_query *mld2q;
struct net_bridge_port_group *p;
struct net_bridge_port_group __rcu **pp;
unsigned long max_delay;
unsigned long now = jiffies;
struct in6_addr *group = NULL;
int err = 0;
spin_lock(&br->multicast_lock);
if (!netif_running(br->dev) ||
(port && port->state == BR_STATE_DISABLED))
goto out;
br_multicast_query_received(br, port, !ipv6_addr_any(&ip6h->saddr));
if (skb->len == sizeof(*mld)) {
if (!pskb_may_pull(skb, sizeof(*mld))) {
err = -EINVAL;
goto out;
}
mld = (struct mld_msg *) icmp6_hdr(skb);
max_delay = msecs_to_jiffies(htons(mld->mld_maxdelay));
if (max_delay)
group = &mld->mld_mca;
} else if (skb->len >= sizeof(*mld2q)) {
if (!pskb_may_pull(skb, sizeof(*mld2q))) {
err = -EINVAL;
goto out;
}
mld2q = (struct mld2_query *)icmp6_hdr(skb);
if (!mld2q->mld2q_nsrcs)
group = &mld2q->mld2q_mca;
max_delay = mld2q->mld2q_mrc ? MLDV2_MRC(mld2q->mld2q_mrc) : 1;
}
if (!group)
goto out;
mp = br_mdb_ip6_get(mlock_dereference(br->mdb, br), group);
if (!mp)
goto out;
max_delay *= br->multicast_last_member_count;
if (!hlist_unhashed(&mp->mglist) &&
(timer_pending(&mp->timer) ?
time_after(mp->timer.expires, now + max_delay) :
try_to_del_timer_sync(&mp->timer) >= 0))
mod_timer(&mp->timer, now + max_delay);
for (pp = &mp->ports;
(p = mlock_dereference(*pp, br)) != NULL;
pp = &p->next) {
if (timer_pending(&p->timer) ?
time_after(p->timer.expires, now + max_delay) :
try_to_del_timer_sync(&p->timer) >= 0)
mod_timer(&mp->timer, now + max_delay);
}
out:
spin_unlock(&br->multicast_lock);
return err;
}
|
Safe
|
[
"CWE-399"
] |
linux
|
6b0d6a9b4296fa16a28d10d416db7a770fc03287
|
2.156952894553536e+38
| 69 |
bridge: Fix mglist corruption that leads to memory corruption
The list mp->mglist is used to indicate whether a multicast group
is active on the bridge interface itself as opposed to one of the
constituent interfaces in the bridge.
Unfortunately the operation that adds the mp->mglist node to the
list neglected to check whether it has already been added. This
leads to list corruption in the form of nodes pointing to itself.
Normally this would be quite obvious as it would cause an infinite
loop when walking the list. However, as this list is never actually
walked (which means that we don't really need it, I'll get rid of
it in a subsequent patch), this instead is hidden until we perform
a delete operation on the affected nodes.
As the same node may now be pointed to by more than one node, the
delete operations can then cause modification of freed memory.
This was observed in practice to cause corruption in 512-byte slabs,
most commonly leading to crashes in jbd2.
Thanks to Josef Bacik for pointing me in the right direction.
Reported-by: Ian Page Hands <ihands@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
evbuffer_add_vprintf(struct evbuffer *buf, const char *fmt, va_list ap)
{
char *buffer;
size_t space;
size_t oldoff = buf->off;
int sz;
va_list aq;
/* make sure that at least some space is available */
if (evbuffer_expand(buf, 64) < 0)
return (-1);
for (;;) {
size_t used = buf->misalign + buf->off;
buffer = (char *)buf->buffer + buf->off;
assert(buf->totallen >= used);
space = buf->totallen - used;
#ifndef va_copy
#define va_copy(dst, src) memcpy(&(dst), &(src), sizeof(va_list))
#endif
va_copy(aq, ap);
sz = evutil_vsnprintf(buffer, space, fmt, aq);
va_end(aq);
if (sz < 0)
return (-1);
if ((size_t)sz < space) {
buf->off += sz;
if (buf->cb != NULL)
(*buf->cb)(buf, oldoff, buf->off, buf->cbarg);
return (sz);
}
if (evbuffer_expand(buf, sz + 1) == -1)
return (-1);
}
/* NOTREACHED */
}
|
Safe
|
[
"CWE-189"
] |
libevent
|
7b21c4eabf1f3946d3f63cce1319c490caab8ecf
|
2.9618378986981645e+38
| 40 |
Fix CVE-2014-6272 in Libevent 1.4
For this fix, we need to make sure that passing too-large inputs to
the evbuffer functions can't make us do bad things with the heap.
| 0 |
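The overflow class behind CVE-2014-6272, sketched: a "double the capacity until it fits" loop must reject sizes near SIZE_MAX, or the shift wraps to zero and the buffer ends up too small. This shows the guard only, not libevent's evbuffer_expand() itself.

```c
#include <stdint.h>
#include <stddef.h>

static size_t round_up_capacity(size_t need)
{
    size_t cap = 64;
    if (need > SIZE_MAX / 2)
        return 0;            /* caller must treat 0 as failure */
    while (cap < need)
        cap <<= 1;           /* cannot wrap: need <= SIZE_MAX / 2 */
    return cap;
}
```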
static void binder_free_proc(struct binder_proc *proc)
{
BUG_ON(!list_empty(&proc->todo));
BUG_ON(!list_empty(&proc->delivered_death));
binder_alloc_deferred_release(&proc->alloc);
put_task_struct(proc->tsk);
binder_stats_deleted(BINDER_STAT_PROC);
kfree(proc);
}
|
Safe
|
[
"CWE-416"
] |
linux
|
7bada55ab50697861eee6bb7d60b41e68a961a9c
|
1.9169507258410026e+38
| 9 |
binder: fix race that allows malicious free of live buffer
Malicious code can attempt to free buffers using the BC_FREE_BUFFER
ioctl to binder. There are protections against a user freeing a buffer
while in use by the kernel, however there was a window where
BC_FREE_BUFFER could be used to free a recently allocated buffer that
was not completely initialized. This resulted in a use-after-free
detected by KASAN with a malicious test program.
This window is closed by setting the buffer's allow_user_free attribute
to 0 when the buffer is allocated or when the user has previously freed
it instead of waiting for the caller to set it. The problem was that
when the struct buffer was recycled, allow_user_free was stale and set
to 1 allowing a free to go through.
Signed-off-by: Todd Kjos <tkjos@google.com>
Acked-by: Arve Hjønnevåg <arve@android.com>
Cc: stable <stable@vger.kernel.org> # 4.14
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
bool is_complex_type(t_type* ttype) {
ttype = get_true_type(ttype);
return ttype->is_container() || ttype->is_struct() || ttype->is_xception()
|| (ttype->is_base_type()
&& (((t_base_type*)ttype)->get_base() == t_base_type::TYPE_STRING));
}
|
Safe
|
[
"CWE-20"
] |
thrift
|
cfaadcc4adcfde2a8232c62ec89870b73ef40df1
|
2.0667050018480682e+38
| 7 |
THRIFT-3231 CPP: Limit recursion depth to 64
Client: cpp
Patch: Ben Craig <bencraig@apache.org>
| 0 |
bgp_capability_receive (struct peer *peer, bgp_size_t size)
{
u_char *pnt;
int ret;
/* Fetch pointer. */
pnt = stream_pnt (peer->ibuf);
if (BGP_DEBUG (normal, NORMAL))
zlog_debug ("%s rcv CAPABILITY", peer->host);
/* If peer does not have the capability, send notification. */
if (! CHECK_FLAG (peer->cap, PEER_CAP_DYNAMIC_ADV))
{
plog_err (peer->log, "%s [Error] BGP dynamic capability is not enabled",
peer->host);
bgp_notify_send (peer,
BGP_NOTIFY_HEADER_ERR,
BGP_NOTIFY_HEADER_BAD_MESTYPE);
return;
}
/* Status must be Established. */
if (peer->status != Established)
{
plog_err (peer->log,
"%s [Error] Dynamic capability packet received under status %s", peer->host, LOOKUP (bgp_status_msg, peer->status));
bgp_notify_send (peer, BGP_NOTIFY_FSM_ERR, 0);
return;
}
/* Parse packet. */
ret = bgp_capability_msg_parse (peer, pnt, size);
}
|
Vulnerable
|
[
"CWE-125"
] |
frr
|
6d58272b4cf96f0daa846210dd2104877900f921
|
6.575634373484823e+37
| 34 |
[bgpd] cleanup, compact and consolidate capability parsing code
2007-07-26 Paul Jakma <paul.jakma@sun.com>
* (general) Clean up and compact capability parsing slightly.
Consolidate validation of length and logging of generic TLV, and
memcpy of capability data, thus removing such from cap-specific
code (not always present or correct).
* bgp_open.h: Add structures for the generic capability TLV header
and for the data formats of the various specific capabilities we
support. Hence remove the badly named, or else misdefined, struct
capability.
* bgp_open.c: (bgp_capability_vty_out) Use struct capability_mp_data.
Do the length checks *before* memcpy()'ing based on that length
(stored capability - should have been validated anyway on input,
but..).
(bgp_afi_safi_valid_indices) new function to validate (afi,safi)
which is about to be used as index into arrays, consolidates
several instances of same, at least one of which appeared to be
incomplete..
(bgp_capability_mp) Much condensed.
(bgp_capability_orf_entry) New, process one ORF entry
(bgp_capability_orf) Condensed. Fixed to process all ORF entries.
(bgp_capability_restart) Condensed, and fixed to use a
cap-specific type, rather than abusing capability_mp.
(struct message capcode_str) added to aid generic logging.
(size_t cap_minsizes[]) added to aid generic validation of
capability length field.
(bgp_capability_parse) Generic logging and validation of TLV
consolidated here. Code compacted as much as possible.
* bgp_packet.c: (bgp_open_receive) Capability parsers now use
streams, so no more need here to manually fudge the input stream
getp.
(bgp_capability_msg_parse) use struct capability_mp_data. Validate
lengths /before/ memcpy. Use bgp_afi_safi_valid_indices.
(bgp_capability_receive) Exported for use by test harness.
* bgp_vty.c: (bgp_show_summary) fix conversion warning
(bgp_show_peer) ditto
* bgp_debug.h: Fix storage 'extern' after type 'const'.
* lib/log.c: (mes_lookup) warning about code not being in
same-number array slot should be debug, not warning. E.g. BGP
has several discontiguous number spaces, allocating from
different parts of a space is not uncommon (e.g. IANA
assigned versus vendor-assigned code points in some number
space).
| 1 |
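This is the section's one Vulnerable row: bgp_capability_receive() forwards pnt/size to the parser without the per-capability length validation the changelog describes. A generic sketch of the consolidated check (length validated against a per-code minimum and the bytes actually remaining, before any memcpy) follows; the names and TLV layout are illustrative, not bgpd's.

```c
#include <stddef.h>
#include <string.h>

/* TLV layout assumed here: pnt[0] = capability code, pnt[1] = length,
 * then cap_len bytes of data. */
static int parse_cap_tlv_sketch(const unsigned char *pnt, size_t remaining,
                                size_t cap_min, void *out, size_t outsz)
{
    size_t cap_len;

    if (remaining < 2)
        return -1;                      /* no room for code + length */
    cap_len = pnt[1];
    if (cap_len < cap_min || cap_len > remaining - 2 || cap_len > outsz)
        return -1;                      /* reject before reading data */
    memcpy(out, pnt + 2, cap_len);      /* only after validation */
    return (int)cap_len;
}
```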
NTSTATUS GetDeviceSectorSize (PDEVICE_OBJECT deviceObject, ULONG *bytesPerSector)
{
NTSTATUS status;
DISK_GEOMETRY geometry;
status = SendDeviceIoControlRequest (deviceObject, IOCTL_DISK_GET_DRIVE_GEOMETRY, NULL, 0, &geometry, sizeof (geometry));
if (!NT_SUCCESS (status))
return status;
*bytesPerSector = geometry.BytesPerSector;
return STATUS_SUCCESS;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
VeraCrypt
|
f30f9339c9a0b9bbcc6f5ad38804af39db1f479e
|
1.7246923871696e+38
| 13 |
Windows: fix low severity vulnerability in driver that allowed reading 3 bytes of kernel stack memory (with a rare possibility of 25 additional bytes). Reported by Tim Harrison.
| 0 |
//! Remove last image.
/**
|
Safe
|
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
|
1.1320323858749672e+38
| 3 |
Fix other issues in 'CImg<T>::load_bmp()'.
| 0 |
getline (char **lineptr, size_t *n, FILE *stream)
{
return getdelim (lineptr, n, '\n', stream);
}
|
Safe
|
[
"CWE-125"
] |
libidn
|
570e68886c41c2e765e6218cb317d9a9a447a041
|
1.4141191716313902e+38
| 4 |
idn: Use getline instead of fgets with fixed-size buffer.
Fixes out-of-bounds read, reported by Hanno Böck.
| 0 |
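A small usage sketch of the getline() calling convention the libidn commit switches to — the buffer grows on demand and the caller frees it once, which is what removes the fixed-size fgets buffer and its out-of-bounds read. This is a generic POSIX.1-2008 example, not libidn's code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>   /* ssize_t */

int main(void)
{
    char *line = NULL;      /* getline allocates/grows this buffer */
    size_t cap = 0;
    ssize_t n;
    while ((n = getline(&line, &cap, stdin)) != -1) {
        if (n > 0 && line[n - 1] == '\n')
            line[n - 1] = '\0';          /* strip the delimiter if present */
        printf("read %zd bytes: %s\n", n, line);
    }
    free(line);             /* one free, even after many iterations */
    return 0;
}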
static int curl_debug_cb(__maybe_unused CURL *handle, curl_infotype type,
__maybe_unused char *data, size_t size, void *userdata)
{
struct pool *pool = (struct pool *)userdata;
switch(type) {
case CURLINFO_HEADER_IN:
case CURLINFO_DATA_IN:
case CURLINFO_SSL_DATA_IN:
pool->sgminer_pool_stats.net_bytes_received += size;
break;
case CURLINFO_HEADER_OUT:
case CURLINFO_DATA_OUT:
case CURLINFO_SSL_DATA_OUT:
pool->sgminer_pool_stats.net_bytes_sent += size;
break;
case CURLINFO_TEXT:
default:
break;
}
return 0;
}
|
Safe
|
[
"CWE-20",
"CWE-703"
] |
sgminer
|
910c36089940e81fb85c65b8e63dcd2fac71470c
|
7.64051600368741e+37
| 22 |
stratum: parse_notify(): Don't die on malformed bbversion/prev_hash/nbit/ntime.
Might have introduced a memory leak, don't have time to check. :(
Should the other hex2bin()'s be checked?
Thanks to Mick Ayzenberg <mick.dejavusecurity.com> for finding this.
| 0 |
static void tg3_phy_init_link_config(struct tg3 *tp)
{
u32 adv = ADVERTISED_Autoneg;
if (!(tp->phy_flags & TG3_PHYFLG_10_100_ONLY))
adv |= ADVERTISED_1000baseT_Half |
ADVERTISED_1000baseT_Full;
if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES))
adv |= ADVERTISED_100baseT_Half |
ADVERTISED_100baseT_Full |
ADVERTISED_10baseT_Half |
ADVERTISED_10baseT_Full |
ADVERTISED_TP;
else
adv |= ADVERTISED_FIBRE;
tp->link_config.advertising = adv;
tp->link_config.speed = SPEED_UNKNOWN;
tp->link_config.duplex = DUPLEX_UNKNOWN;
tp->link_config.autoneg = AUTONEG_ENABLE;
tp->link_config.active_speed = SPEED_UNKNOWN;
tp->link_config.active_duplex = DUPLEX_UNKNOWN;
tp->old_link = -1;
}
|
Safe
|
[
"CWE-476",
"CWE-119"
] |
linux
|
715230a44310a8cf66fbfb5a46f9a62a9b2de424
|
3.0264754786895697e+38
| 26 |
tg3: fix length overflow in VPD firmware parsing
Commit 184b89044fb6e2a74611dafa69b1dce0d98612c6 ("tg3: Use VPD fw version
when present") introduced VPD parsing that contained a potential length
overflow.
Limit the hardware's reported firmware string length (max 255 bytes) to
stay inside the driver's firmware string length (32 bytes). On overflow,
truncate the formatted firmware string instead of potentially overwriting
portions of the tg3 struct.
http://cansecwest.com/slides/2013/PrivateCore%20CSW%202013.pdf
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Oded Horovitz <oded@privatecore.com>
Reported-by: Brad Spengler <spender@grsecurity.net>
Cc: stable@vger.kernel.org
Cc: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
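A minimal sketch of the truncation strategy the tg3 commit describes — clamp a hardware-reported length (up to 255) so the formatted result is cut off instead of overflowing a fixed 32-byte field. The function and field names are hypothetical, not the driver's:

#include <stdio.h>

#define FW_VER_LEN 32   /* destination capacity, mirroring the 32-byte field */

static void set_fw_version(char fw_ver[FW_VER_LEN],
                           const char *vpd_str, size_t vpd_len)
{
    /* vpd_len comes from hardware and may be up to 255; clamp it so the
     * formatted result is truncated instead of overwriting adjacent
     * structure members. */
    if (vpd_len > FW_VER_LEN - 1)
        vpd_len = FW_VER_LEN - 1;
    snprintf(fw_ver, FW_VER_LEN, "%.*s", (int)vpd_len, vpd_str);
}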
static Formattable* createArrayCopy(const Formattable* array, int32_t count) {
Formattable *result = new Formattable[count];
if (result != NULL) {
for (int32_t i=0; i<count; ++i)
result[i] = array[i]; // Don't memcpy!
}
return result;
}
|
Safe
|
[
"CWE-190"
] |
icu
|
53d8c8f3d181d87a6aa925b449b51c4a2c922a51
|
1.0932954739248612e+38
| 8 |
ICU-20246 Fixing another integer overflow in number parsing.
| 0 |
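The ICU commit is about integer overflow in number parsing; the generic shape of the defense is an overflow-checked size computation before allocation. A minimal C sketch, not ICU's actual fix:

#include <stdint.h>
#include <stdlib.h>

/* Allocate count elements of size elem, failing cleanly if count * elem
 * would overflow size_t. */
static void *alloc_array(size_t count, size_t elem)
{
    if (elem != 0 && count > SIZE_MAX / elem)
        return NULL;                     /* would overflow, refuse */
    return malloc(count * elem);
}

calloc(count, elem) performs the same overflow check internally, which is why it is often preferred over malloc(count * elem).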
struct yang_data *yang_data_new_uint32(const char *xpath, uint32_t value)
{
char value_str[BUFSIZ];
snprintf(value_str, sizeof(value_str), "%u", value);
return yang_data_new(xpath, value_str);
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
frr
|
ac3133450de12ba86c051265fc0f1b12bc57b40c
|
1.1350983345980976e+38
| 7 |
isisd: fix #10505 using base64 encoding
Using base64 instead of the raw string to encode
the binary data.
Signed-off-by: whichbug <whichbug@github.com>
| 0 |
void qemu_input_update_buttons(QemuConsole *src, uint32_t *button_map,
uint32_t button_old, uint32_t button_new)
{
InputButton btn;
uint32_t mask;
for (btn = 0; btn < INPUT_BUTTON__MAX; btn++) {
mask = button_map[btn];
if ((button_old & mask) == (button_new & mask)) {
continue;
}
qemu_input_queue_btn(src, btn, button_new & mask);
}
}
|
Safe
|
[
"CWE-772"
] |
qemu
|
fa18f36a461984eae50ab957e47ec78dae3c14fc
|
2.9420485378545593e+38
| 14 |
input: limit kbd queue depth
Apply a limit to the number of items we accept into the keyboard queue.
Impact: Without this limit vnc clients can exhaust host memory by
sending keyboard events faster than qemu feeds them to the guest.
Fixes: CVE-2017-8379
Cc: P J P <ppandit@redhat.com>
Cc: Huawei PSIRT <PSIRT@huawei.com>
Reported-by: jiangxin1@huawei.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170428084237.23960-1-kraxel@redhat.com
| 0 |
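A minimal sketch of the bounded-queue idea behind the CVE-2017-8379 fix — once the queue is full, new events are dropped rather than allocated, so a fast sender cannot exhaust host memory. The cap value and structure are illustrative, not qemu's implementation:

#include <stddef.h>

#define KBD_QUEUE_MAX 256   /* assumed cap; qemu's actual limit differs */

struct kbd_queue {
    int events[KBD_QUEUE_MAX];
    size_t head, count;
};

/* Returns 0 if queued, -1 if the event was dropped because the consumer
 * is not draining fast enough -- bounding memory instead of growing it. */
static int kbd_queue_push(struct kbd_queue *q, int ev)
{
    if (q->count == KBD_QUEUE_MAX)
        return -1;
    q->events[(q->head + q->count) % KBD_QUEUE_MAX] = ev;
    q->count++;
    return 0;
}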
static inline bool is_exception_n(u32 intr_info, u8 vector)
{
return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VECTOR_MASK |
INTR_INFO_VALID_MASK)) ==
(INTR_TYPE_HARD_EXCEPTION | vector | INTR_INFO_VALID_MASK);
}
|
Safe
|
[
"CWE-284"
] |
linux
|
727ba748e110b4de50d142edca9d6a9b7e6111d8
|
2.596637126862371e+38
| 6 |
kvm: nVMX: Enforce cpl=0 for VMX instructions
VMX instructions executed inside a L1 VM will always trigger a VM exit
even when executed with cpl 3. This means we must perform the
privilege check in software.
Fixes: 70f3aac964ae("kvm: nVMX: Remove superfluous VMX instruction fault checks")
Cc: stable@vger.kernel.org
Signed-off-by: Felix Wilhelm <fwilhelm@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
static int dt_remember_or_free_map(struct pinctrl *p, const char *statename,
struct pinctrl_dev *pctldev,
struct pinctrl_map *map, unsigned num_maps)
{
int i;
struct pinctrl_dt_map *dt_map;
/* Initialize common mapping table entry fields */
for (i = 0; i < num_maps; i++) {
map[i].dev_name = dev_name(p->dev);
map[i].name = statename;
if (pctldev)
map[i].ctrl_dev_name = dev_name(pctldev->dev);
}
/* Remember the converted mapping table entries */
dt_map = kzalloc(sizeof(*dt_map), GFP_KERNEL);
if (!dt_map) {
dt_free_map(pctldev, map, num_maps);
return -ENOMEM;
}
dt_map->pctldev = pctldev;
dt_map->map = map;
dt_map->num_maps = num_maps;
list_add_tail(&dt_map->node, &p->dt_maps);
return pinctrl_register_map(map, num_maps, false);
}
|
Vulnerable
|
[
"CWE-125"
] |
linux
|
be4c60b563edee3712d392aaeb0943a768df7023
|
9.21763719950016e+36
| 29 |
pinctrl: devicetree: Avoid taking direct reference to device name string
When populating the pinctrl mapping table entries for a device, the
'dev_name' field for each entry is initialised to point directly at the
string returned by 'dev_name()' for the device and subsequently used by
'create_pinctrl()' when looking up the mappings for the device being
probed.
This is unreliable in the presence of calls to 'dev_set_name()', which may
reallocate the device name string leaving the pinctrl mappings with a
dangling reference. This then leads to a use-after-free every time the
name is dereferenced by a device probe:
| BUG: KASAN: invalid-access in strcmp+0x20/0x64
| Read of size 1 at addr 13ffffc153494b00 by task modprobe/590
| Pointer tag: [13], memory tag: [fe]
|
| Call trace:
| __kasan_report+0x16c/0x1dc
| kasan_report+0x10/0x18
| check_memory_region
| __hwasan_load1_noabort+0x4c/0x54
| strcmp+0x20/0x64
| create_pinctrl+0x18c/0x7f4
| pinctrl_get+0x90/0x114
| devm_pinctrl_get+0x44/0x98
| pinctrl_bind_pins+0x5c/0x450
| really_probe+0x1c8/0x9a4
| driver_probe_device+0x120/0x1d8
Follow the example of sysfs, and duplicate the device name string before
stashing it away in the pinctrl mapping entries.
Cc: Linus Walleij <linus.walleij@linaro.org>
Reported-by: Elena Petrova <lenaptr@google.com>
Tested-by: Elena Petrova <lenaptr@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20191002124206.22928-1-will@kernel.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
| 1 |
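A minimal sketch of the ownership fix the pinctrl commit describes — take an owned copy of the device name instead of aliasing the string returned by dev_name(), so a later rename cannot leave the mapping with a dangling pointer. strdup stands in here for the kernel's kstrdup(name, GFP_KERNEL); the structure is hypothetical:

#include <stdlib.h>
#include <string.h>

struct map_entry {
    char *dev_name;          /* owned copy, not an alias */
};

static int map_entry_set_name(struct map_entry *e, const char *name)
{
    char *copy = strdup(name);
    if (!copy)
        return -1;
    free(e->dev_name);       /* release any previous copy we owned */
    e->dev_name = copy;
    return 0;
}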
bool stmt_create_procedure_start(const DDL_options_st &options)
{
sql_command= SQLCOM_CREATE_PROCEDURE;
return stmt_create_routine_start(options);
}
|
Safe
|
[
"CWE-703"
] |
server
|
39feab3cd31b5414aa9b428eaba915c251ac34a2
|
1.812570658146695e+38
| 5 |
MDEV-26412 Server crash in Item_field::fix_outer_field for INSERT SELECT
IF an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references. Rather it pointed to the first element in
the select_stack.
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
| 0 |
int mailimf_ignore_field_parse(const char * message, size_t length,
size_t * indx)
{
int has_field;
size_t cur_token;
int state;
size_t terminal;
has_field = FALSE;
cur_token = * indx;
terminal = cur_token;
state = UNSTRUCTURED_START;
/* check if this is not a beginning CRLF */
if (cur_token >= length)
return MAILIMF_ERROR_PARSE;
switch (message[cur_token]) {
case '\r':
return MAILIMF_ERROR_PARSE;
case '\n':
return MAILIMF_ERROR_PARSE;
}
while (state != UNSTRUCTURED_OUT) {
switch(state) {
case UNSTRUCTURED_START:
if (cur_token >= length)
return MAILIMF_ERROR_PARSE;
switch(message[cur_token]) {
case '\r':
state = UNSTRUCTURED_CR;
break;
case '\n':
state = UNSTRUCTURED_LF;
break;
case ':':
has_field = TRUE;
state = UNSTRUCTURED_START;
break;
default:
state = UNSTRUCTURED_START;
break;
}
break;
case UNSTRUCTURED_CR:
if (cur_token >= length)
return MAILIMF_ERROR_PARSE;
switch(message[cur_token]) {
case '\n':
state = UNSTRUCTURED_LF;
break;
case ':':
has_field = TRUE;
state = UNSTRUCTURED_START;
break;
default:
state = UNSTRUCTURED_START;
break;
}
break;
case UNSTRUCTURED_LF:
if (cur_token >= length) {
terminal = cur_token;
state = UNSTRUCTURED_OUT;
break;
}
switch(message[cur_token]) {
case '\t':
case ' ':
state = UNSTRUCTURED_WSP;
break;
default:
terminal = cur_token;
state = UNSTRUCTURED_OUT;
break;
}
break;
case UNSTRUCTURED_WSP:
if (cur_token >= length)
return MAILIMF_ERROR_PARSE;
switch(message[cur_token]) {
case '\r':
state = UNSTRUCTURED_CR;
break;
case '\n':
state = UNSTRUCTURED_LF;
break;
case ':':
has_field = TRUE;
state = UNSTRUCTURED_START;
break;
default:
state = UNSTRUCTURED_START;
break;
}
break;
}
cur_token ++;
}
if (!has_field)
return MAILIMF_ERROR_PARSE;
* indx = terminal;
return MAILIMF_NO_ERROR;
}
|
Safe
|
[
"CWE-476"
] |
libetpan
|
1fe8fbc032ccda1db9af66d93016b49c16c1f22d
|
3.9910127373996843e+37
| 116 |
Fixed crash #274
| 0 |
mtree_bid(struct archive_read *a, int best_bid)
{
const char *signature = "#mtree";
const char *p;
(void)best_bid; /* UNUSED */
/* Now let's look at the actual header and see if it matches. */
p = __archive_read_ahead(a, strlen(signature), NULL);
if (p == NULL)
return (-1);
if (memcmp(p, signature, strlen(signature)) == 0)
return (8 * (int)strlen(signature));
/*
* There is not a mtree signature. Let's try to detect mtree format.
*/
return (detect_form(a, NULL));
}
|
Safe
|
[
"CWE-476",
"CWE-119"
] |
libarchive
|
a550daeecf6bc689ade371349892ea17b5b97c77
|
2.675603897511915e+38
| 20 |
Fix libarchive/archive_read_support_format_mtree.c:1388:11: error: array subscript is above array bounds
| 0 |
_OwnSelection(XtermWidget xw,
String *selections,
Cardinal count)
{
TScreen *screen = TScreenOf(xw);
Display *dpy = screen->display;
Atom *atoms = screen->selection_atoms;
Cardinal i;
Bool have_selection = False;
SelectedCells *scp;
if (count == 0)
return;
TRACE(("_OwnSelection count %d\n", count));
selections = MapSelections(xw, selections, count);
if (count > screen->sel_atoms_size) {
XtFree((char *) atoms);
atoms = TypeXtMallocN(Atom, count);
screen->selection_atoms = atoms;
screen->sel_atoms_size = count;
}
XmuInternStrings(dpy, selections, count, atoms);
for (i = 0; i < count; i++) {
int cutbuffer = CutBuffer(atoms[i]);
if (cutbuffer >= 0) {
unsigned long limit =
(unsigned long) (4 * XMaxRequestSize(dpy) - 32);
scp = &(screen->selected_cells[CutBufferToCode(cutbuffer)]);
if (scp->data_length > limit) {
TRACE(("selection too big (%lu bytes), not storing in CUT_BUFFER%d\n",
scp->data_length, cutbuffer));
xtermWarning("selection too big (%lu bytes), not storing in CUT_BUFFER%d\n",
(unsigned long) scp->data_length, cutbuffer);
} else {
/* This used to just use the UTF-8 data, which was totally
* broken as not even the corresponding paste code in xterm
* understood this! So now it converts to Latin1 first.
* Robert Brady, 2000-09-05
*/
unsigned long length = scp->data_length;
Char *data = scp->data_buffer;
if_OPT_WIDE_CHARS((screen), {
data = UTF8toLatin1(screen, data, length, &length);
});
TRACE(("XStoreBuffer(%d)\n", cutbuffer));
XStoreBuffer(dpy,
(char *) data,
(int) length,
cutbuffer);
}
} else {
int which = AtomToSelection(dpy, atoms[i]);
if (keepClipboard(dpy, atoms[i])) {
Char *buf;
SelectedCells *tcp = &(screen->clipboard_data);
TRACE(("saving selection to clipboard buffer\n"));
scp = &(screen->selected_cells[CLIPBOARD_CODE]);
if ((buf = (Char *) malloc((size_t) scp->data_length)) == 0)
SysError(ERROR_BMALLOC2);
free(tcp->data_buffer);
memcpy(buf, scp->data_buffer, scp->data_length);
tcp->data_buffer = buf;
tcp->data_limit = scp->data_length;
tcp->data_length = scp->data_length;
}
scp = &(screen->selected_cells[which]);
if (scp->data_length == 0) {
TRACE(("XtDisownSelection(%s, @%ld)\n",
TraceAtomName(screen->display, atoms[i]),
(long) screen->selection_time));
XtDisownSelection((Widget) xw,
atoms[i],
screen->selection_time);
} else if (!screen->replyToEmacs && atoms[i] != 0) {
TRACE(("XtOwnSelection(%s, @%ld)\n",
TraceAtomName(screen->display, atoms[i]),
(long) screen->selection_time));
have_selection |=
XtOwnSelection((Widget) xw, atoms[i],
screen->selection_time,
ConvertSelection,
LoseSelection,
SelectionDone);
}
}
TRACE(("... _OwnSelection used length %ld value %s\n",
scp->data_length,
visibleChars(scp->data_buffer,
(unsigned) scp->data_length)));
}
if (!screen->replyToEmacs)
screen->selection_count = count;
if (!have_selection)
UnHiliteText(xw);
}
|
Safe
|
[
"CWE-399"
] |
xterm-snapshots
|
82ba55b8f994ab30ff561a347b82ea340ba7075c
|
2.382543936843782e+38
| 98 |
snapshot of project "xterm", label xterm-365d
| 0 |
static BlockAIOCB *scsi_block_do_sgio(SCSIBlockReq *req,
int64_t offset, QEMUIOVector *iov,
int direction,
BlockCompletionFunc *cb, void *opaque)
{
sg_io_hdr_t *io_header = &req->io_header;
SCSIDiskReq *r = &req->req;
SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
int nb_logical_blocks;
uint64_t lba;
BlockAIOCB *aiocb;
/* This is not supported yet. It can only happen if the guest does
* reads and writes that are not aligned to one logical sectors
* _and_ cover multiple MemoryRegions.
*/
assert(offset % s->qdev.blocksize == 0);
assert(iov->size % s->qdev.blocksize == 0);
io_header->interface_id = 'S';
/* The data transfer comes from the QEMUIOVector. */
io_header->dxfer_direction = direction;
io_header->dxfer_len = iov->size;
io_header->dxferp = (void *)iov->iov;
io_header->iovec_count = iov->niov;
assert(io_header->iovec_count == iov->niov); /* no overflow! */
/* Build a new CDB with the LBA and length patched in, in case
* DMA helpers split the transfer in multiple segments. Do not
* build a CDB smaller than what the guest wanted, and only build
* a larger one if strictly necessary.
*/
io_header->cmdp = req->cdb;
lba = offset / s->qdev.blocksize;
nb_logical_blocks = io_header->dxfer_len / s->qdev.blocksize;
if ((req->cmd >> 5) == 0 && lba <= 0x1ffff) {
/* 6-byte CDB */
stl_be_p(&req->cdb[0], lba | (req->cmd << 24));
req->cdb[4] = nb_logical_blocks;
req->cdb[5] = 0;
io_header->cmd_len = 6;
} else if ((req->cmd >> 5) <= 1 && lba <= 0xffffffffULL) {
/* 10-byte CDB */
req->cdb[0] = (req->cmd & 0x1f) | 0x20;
req->cdb[1] = req->cdb1;
stl_be_p(&req->cdb[2], lba);
req->cdb[6] = req->group_number;
stw_be_p(&req->cdb[7], nb_logical_blocks);
req->cdb[9] = 0;
io_header->cmd_len = 10;
} else if ((req->cmd >> 5) != 4 && lba <= 0xffffffffULL) {
/* 12-byte CDB */
req->cdb[0] = (req->cmd & 0x1f) | 0xA0;
req->cdb[1] = req->cdb1;
stl_be_p(&req->cdb[2], lba);
stl_be_p(&req->cdb[6], nb_logical_blocks);
req->cdb[10] = req->group_number;
req->cdb[11] = 0;
io_header->cmd_len = 12;
} else {
/* 16-byte CDB */
req->cdb[0] = (req->cmd & 0x1f) | 0x80;
req->cdb[1] = req->cdb1;
stq_be_p(&req->cdb[2], lba);
stl_be_p(&req->cdb[10], nb_logical_blocks);
req->cdb[14] = req->group_number;
req->cdb[15] = 0;
io_header->cmd_len = 16;
}
/* The rest is as in scsi-generic.c. */
io_header->mx_sb_len = sizeof(r->req.sense);
io_header->sbp = r->req.sense;
io_header->timeout = s->qdev.io_timeout * 1000;
io_header->usr_ptr = r;
io_header->flags |= SG_FLAG_DIRECT_IO;
req->cb = cb;
req->cb_opaque = opaque;
trace_scsi_disk_aio_sgio_command(r->req.tag, req->cdb[0], lba,
nb_logical_blocks, io_header->timeout);
aiocb = blk_aio_ioctl(s->qdev.conf.blk, SG_IO, io_header, scsi_block_sgio_complete, req);
assert(aiocb != NULL);
return aiocb;
}
|
Safe
|
[
"CWE-193"
] |
qemu
|
b3af7fdf9cc537f8f0dd3e2423d83f5c99a457e8
|
3.1580434181845587e+38
| 86 |
hw/scsi/scsi-disk: MODE_PAGE_ALLS not allowed in MODE SELECT commands
This avoids an off-by-one read of 'mode_sense_valid' buffer in
hw/scsi/scsi-disk.c:mode_sense_page().
Fixes: CVE-2021-3930
Cc: qemu-stable@nongnu.org
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: a8f4bbe2900 ("scsi-disk: store valid mode pages in a table")
Fixes: #546
Reported-by: Qiuhao Li <Qiuhao.Li@outlook.com>
Signed-off-by: Mauro Matteo Cascella <mcascell@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
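CVE-2021-3930 was an off-by-one read of a validity table indexed by a guest-controlled page code; the generic defense is to range-check the index before the lookup. A minimal sketch with assumed names and table size, not qemu's code:

#include <stdbool.h>

#define MODE_PAGE_COUNT 0x3f   /* assumed table size for the example */

static const bool mode_sense_valid[MODE_PAGE_COUNT] = { false /* ... */ };

/* Reject out-of-range page codes (e.g. an ALL-pages code equal to the
 * table size) before indexing, so the array is never read past its end. */
static bool page_is_valid(unsigned page)
{
    if (page >= MODE_PAGE_COUNT)
        return false;
    return mode_sense_valid[page];
}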
static PCRE2_SPTR bracketend(PCRE2_SPTR cc)
{
SLJIT_ASSERT((*cc >= OP_ASSERT && *cc <= OP_ASSERTBACK_NOT) || (*cc >= OP_ONCE && *cc <= OP_SCOND));
do cc += GET(cc, 1); while (*cc == OP_ALT);
SLJIT_ASSERT(*cc >= OP_KET && *cc <= OP_KETRPOS);
cc += 1 + LINK_SIZE;
return cc;
}
|
Safe
|
[
"CWE-125"
] |
php-src
|
8947fd9e9fdce87cd6c59817b1db58e789538fe9
|
2.9245370408929036e+38
| 8 |
Fix #78338: Array cross-border reading in PCRE
We backport r1092 from pcre2.
| 0 |
int
yyparse (void *yyscanner, HEX_LEX_ENVIRONMENT *lex_env)
{
/* The lookahead symbol. */
int yychar;
/* The semantic value of the lookahead symbol. */
/* Default value used for initialization, for pacifying older GCCs
or non-GCC compilers. */
YY_INITIAL_VALUE (static YYSTYPE yyval_default;)
YYSTYPE yylval YY_INITIAL_VALUE (= yyval_default);
/* Number of syntax errors so far. */
int yynerrs;
int yystate;
/* Number of tokens to shift before error messages enabled. */
int yyerrstatus;
/* The stacks and their tools:
'yyss': related to states.
'yyvs': related to semantic values.
Refer to the stacks through separate pointers, to allow yyoverflow
to reallocate them elsewhere. */
/* The state stack. */
yytype_int16 yyssa[YYINITDEPTH];
yytype_int16 *yyss;
yytype_int16 *yyssp;
/* The semantic value stack. */
YYSTYPE yyvsa[YYINITDEPTH];
YYSTYPE *yyvs;
YYSTYPE *yyvsp;
YYSIZE_T yystacksize;
int yyn;
int yyresult;
/* Lookahead token as an internal (translated) token number. */
int yytoken = 0;
/* The variables used to return semantic value and location from the
action routines. */
YYSTYPE yyval;
#if YYERROR_VERBOSE
/* Buffer for error messages, and its allocated size. */
char yymsgbuf[128];
char *yymsg = yymsgbuf;
YYSIZE_T yymsg_alloc = sizeof yymsgbuf;
#endif
#define YYPOPSTACK(N) (yyvsp -= (N), yyssp -= (N))
/* The number of symbols on the RHS of the reduced rule.
Keep to zero when no symbol should be popped. */
int yylen = 0;
yyssp = yyss = yyssa;
yyvsp = yyvs = yyvsa;
yystacksize = YYINITDEPTH;
YYDPRINTF ((stderr, "Starting parse\n"));
yystate = 0;
yyerrstatus = 0;
yynerrs = 0;
yychar = YYEMPTY; /* Cause a token to be read. */
goto yysetstate;
/*------------------------------------------------------------.
| yynewstate -- Push a new state, which is found in yystate. |
`------------------------------------------------------------*/
yynewstate:
/* In all cases, when you get here, the value and location stacks
have just been pushed. So pushing a state here evens the stacks. */
yyssp++;
yysetstate:
*yyssp = yystate;
if (yyss + yystacksize - 1 <= yyssp)
{
/* Get the current used size of the three stacks, in elements. */
YYSIZE_T yysize = yyssp - yyss + 1;
#ifdef yyoverflow
{
/* Give user a chance to reallocate the stack. Use copies of
these so that the &'s don't force the real ones into
memory. */
YYSTYPE *yyvs1 = yyvs;
yytype_int16 *yyss1 = yyss;
/* Each stack pointer address is followed by the size of the
data in use in that stack, in bytes. This used to be a
conditional around just the two extra args, but that might
be undefined if yyoverflow is a macro. */
yyoverflow (YY_("memory exhausted"),
&yyss1, yysize * sizeof (*yyssp),
&yyvs1, yysize * sizeof (*yyvsp),
&yystacksize);
yyss = yyss1;
yyvs = yyvs1;
}
#else /* no yyoverflow */
# ifndef YYSTACK_RELOCATE
goto yyexhaustedlab;
# else
/* Extend the stack our own way. */
if (YYMAXDEPTH <= yystacksize)
goto yyexhaustedlab;
yystacksize *= 2;
if (YYMAXDEPTH < yystacksize)
yystacksize = YYMAXDEPTH;
{
yytype_int16 *yyss1 = yyss;
union yyalloc *yyptr =
(union yyalloc *) YYSTACK_ALLOC (YYSTACK_BYTES (yystacksize));
if (! yyptr)
goto yyexhaustedlab;
YYSTACK_RELOCATE (yyss_alloc, yyss);
YYSTACK_RELOCATE (yyvs_alloc, yyvs);
# undef YYSTACK_RELOCATE
if (yyss1 != yyssa)
YYSTACK_FREE (yyss1);
}
# endif
#endif /* no yyoverflow */
yyssp = yyss + yysize - 1;
yyvsp = yyvs + yysize - 1;
YYDPRINTF ((stderr, "Stack size increased to %lu\n",
(unsigned long int) yystacksize));
if (yyss + yystacksize - 1 <= yyssp)
YYABORT;
}
YYDPRINTF ((stderr, "Entering state %d\n", yystate));
if (yystate == YYFINAL)
YYACCEPT;
goto yybackup;
/*-----------.
| yybackup. |
`-----------*/
yybackup:
/* Do appropriate processing given the current state. Read a
lookahead token if we need one and don't already have one. */
/* First try to decide what to do without reference to lookahead token. */
yyn = yypact[yystate];
if (yypact_value_is_default (yyn))
goto yydefault;
/* Not known => get a lookahead token if don't already have one. */
/* YYCHAR is either YYEMPTY or YYEOF or a valid lookahead symbol. */
if (yychar == YYEMPTY)
{
YYDPRINTF ((stderr, "Reading a token: "));
yychar = yylex (&yylval, yyscanner, lex_env);
}
if (yychar <= YYEOF)
{
yychar = yytoken = YYEOF;
YYDPRINTF ((stderr, "Now at end of input.\n"));
}
else
{
yytoken = YYTRANSLATE (yychar);
YY_SYMBOL_PRINT ("Next token is", yytoken, &yylval, &yylloc);
}
/* If the proper action on seeing token YYTOKEN is to reduce or to
detect an error, take that action. */
yyn += yytoken;
if (yyn < 0 || YYLAST < yyn || yycheck[yyn] != yytoken)
goto yydefault;
yyn = yytable[yyn];
if (yyn <= 0)
{
if (yytable_value_is_error (yyn))
goto yyerrlab;
yyn = -yyn;
goto yyreduce;
}
/* Count tokens shifted since error; after three, turn off error
status. */
if (yyerrstatus)
yyerrstatus--;
/* Shift the lookahead token. */
YY_SYMBOL_PRINT ("Shifting", yytoken, &yylval, &yylloc);
/* Discard the shifted token. */
yychar = YYEMPTY;
yystate = yyn;
YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
*++yyvsp = yylval;
YY_IGNORE_MAYBE_UNINITIALIZED_END
goto yynewstate;
/*-----------------------------------------------------------.
| yydefault -- do the default action for the current state. |
`-----------------------------------------------------------*/
yydefault:
yyn = yydefact[yystate];
if (yyn == 0)
goto yyerrlab;
goto yyreduce;
/*-----------------------------.
| yyreduce -- Do a reduction. |
`-----------------------------*/
yyreduce:
/* yyn is the number of a rule to reduce with. */
yylen = yyr2[yyn];
/* If YYLEN is nonzero, implement the default value of the action:
'$$ = $1'.
Otherwise, the following line sets YYVAL to garbage.
This behavior is undocumented and Bison
users should not rely upon it. Assigning to YYVAL
unconditionally makes the parser a bit smaller, and it avoids a
GCC warning that YYVAL may be used uninitialized. */
yyval = yyvsp[1-yylen];
YY_REDUCE_PRINT (yyn);
switch (yyn)
{
case 2:
#line 113 "hex_grammar.y" /* yacc.c:1661 */
{
RE_AST* re_ast = yyget_extra(yyscanner);
re_ast->root_node = (yyvsp[-1].re_node);
}
#line 1337 "hex_grammar.c" /* yacc.c:1661 */
break;
case 3:
#line 122 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = (yyvsp[0].re_node);
}
#line 1345 "hex_grammar.c" /* yacc.c:1661 */
break;
case 4:
#line 126 "hex_grammar.y" /* yacc.c:1661 */
{
incr_ast_levels();
(yyval.re_node) = yr_re_node_create(RE_NODE_CONCAT, (yyvsp[-1].re_node), (yyvsp[0].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[-1].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[0].re_node));
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
}
#line 1360 "hex_grammar.c" /* yacc.c:1661 */
break;
case 5:
#line 137 "hex_grammar.y" /* yacc.c:1661 */
{
RE_NODE* new_concat;
RE_NODE* leftmost_concat = NULL;
RE_NODE* leftmost_node = (yyvsp[-1].re_node);
incr_ast_levels();
(yyval.re_node) = NULL;
/*
Some portions of the code (i.e: yr_re_split_at_chaining_point)
expect a left-unbalanced tree where the right child of a concat node
can't be another concat node. A concat node must be always the left
child of its parent if the parent is also a concat. For this reason
            we can't simply create two new concat nodes arranged like this:
concat
/ \
/ \
token's \
subtree concat
/ \
/ \
/ \
token_sequence's token's
subtree subtree
Instead we must insert the subtree for the first token as the
leftmost node of the token_sequence subtree.
*/
while (leftmost_node->type == RE_NODE_CONCAT)
{
leftmost_concat = leftmost_node;
leftmost_node = leftmost_node->left;
}
new_concat = yr_re_node_create(
RE_NODE_CONCAT, (yyvsp[-2].re_node), leftmost_node);
if (new_concat != NULL)
{
if (leftmost_concat != NULL)
{
leftmost_concat->left = new_concat;
(yyval.re_node) = yr_re_node_create(RE_NODE_CONCAT, (yyvsp[-1].re_node), (yyvsp[0].re_node));
}
else
{
(yyval.re_node) = yr_re_node_create(RE_NODE_CONCAT, new_concat, (yyvsp[0].re_node));
}
}
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[-2].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[-1].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[0].re_node));
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
}
#line 1424 "hex_grammar.c" /* yacc.c:1661 */
break;
case 6:
#line 201 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = (yyvsp[0].re_node);
}
#line 1432 "hex_grammar.c" /* yacc.c:1661 */
break;
case 7:
#line 205 "hex_grammar.y" /* yacc.c:1661 */
{
incr_ast_levels();
(yyval.re_node) = yr_re_node_create(RE_NODE_CONCAT, (yyvsp[-1].re_node), (yyvsp[0].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[-1].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[0].re_node));
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
}
#line 1447 "hex_grammar.c" /* yacc.c:1661 */
break;
case 8:
#line 220 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = (yyvsp[0].re_node);
}
#line 1455 "hex_grammar.c" /* yacc.c:1661 */
break;
case 9:
#line 224 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = (yyvsp[0].re_node);
(yyval.re_node)->greedy = FALSE;
}
#line 1464 "hex_grammar.c" /* yacc.c:1661 */
break;
case 10:
#line 233 "hex_grammar.y" /* yacc.c:1661 */
{
lex_env->token_count++;
if (lex_env->token_count > MAX_HEX_STRING_TOKENS)
{
yr_re_node_destroy((yyvsp[0].re_node));
yyerror(yyscanner, lex_env, "string too long");
YYABORT;
}
(yyval.re_node) = (yyvsp[0].re_node);
}
#line 1481 "hex_grammar.c" /* yacc.c:1661 */
break;
case 11:
#line 246 "hex_grammar.y" /* yacc.c:1661 */
{
lex_env->inside_or++;
}
#line 1489 "hex_grammar.c" /* yacc.c:1661 */
break;
case 12:
#line 250 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = (yyvsp[-1].re_node);
lex_env->inside_or--;
}
#line 1498 "hex_grammar.c" /* yacc.c:1661 */
break;
case 13:
#line 259 "hex_grammar.y" /* yacc.c:1661 */
{
if ((yyvsp[-1].integer) <= 0)
{
yyerror(yyscanner, lex_env, "invalid jump length");
YYABORT;
}
if (lex_env->inside_or && (yyvsp[-1].integer) > STRING_CHAINING_THRESHOLD)
{
yyerror(yyscanner, lex_env, "jumps over "
STR(STRING_CHAINING_THRESHOLD)
" now allowed inside alternation (|)");
YYABORT;
}
(yyval.re_node) = yr_re_node_create(RE_NODE_RANGE_ANY, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
(yyval.re_node)->start = (int) (yyvsp[-1].integer);
(yyval.re_node)->end = (int) (yyvsp[-1].integer);
}
#line 1525 "hex_grammar.c" /* yacc.c:1661 */
break;
case 14:
#line 282 "hex_grammar.y" /* yacc.c:1661 */
{
if (lex_env->inside_or &&
((yyvsp[-3].integer) > STRING_CHAINING_THRESHOLD ||
(yyvsp[-1].integer) > STRING_CHAINING_THRESHOLD) )
{
yyerror(yyscanner, lex_env, "jumps over "
STR(STRING_CHAINING_THRESHOLD)
" now allowed inside alternation (|)");
YYABORT;
}
if ((yyvsp[-3].integer) < 0 || (yyvsp[-1].integer) < 0)
{
yyerror(yyscanner, lex_env, "invalid negative jump length");
YYABORT;
}
if ((yyvsp[-3].integer) > (yyvsp[-1].integer))
{
yyerror(yyscanner, lex_env, "invalid jump range");
YYABORT;
}
(yyval.re_node) = yr_re_node_create(RE_NODE_RANGE_ANY, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
(yyval.re_node)->start = (int) (yyvsp[-3].integer);
(yyval.re_node)->end = (int) (yyvsp[-1].integer);
}
#line 1561 "hex_grammar.c" /* yacc.c:1661 */
break;
case 15:
#line 314 "hex_grammar.y" /* yacc.c:1661 */
{
if (lex_env->inside_or)
{
yyerror(yyscanner, lex_env,
"unbounded jumps not allowed inside alternation (|)");
YYABORT;
}
if ((yyvsp[-2].integer) < 0)
{
yyerror(yyscanner, lex_env, "invalid negative jump length");
YYABORT;
}
(yyval.re_node) = yr_re_node_create(RE_NODE_RANGE_ANY, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
(yyval.re_node)->start = (int) (yyvsp[-2].integer);
(yyval.re_node)->end = INT_MAX;
}
#line 1587 "hex_grammar.c" /* yacc.c:1661 */
break;
case 16:
#line 336 "hex_grammar.y" /* yacc.c:1661 */
{
if (lex_env->inside_or)
{
yyerror(yyscanner, lex_env,
"unbounded jumps not allowed inside alternation (|)");
YYABORT;
}
(yyval.re_node) = yr_re_node_create(RE_NODE_RANGE_ANY, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
(yyval.re_node)->start = 0;
(yyval.re_node)->end = INT_MAX;
}
#line 1607 "hex_grammar.c" /* yacc.c:1661 */
break;
case 17:
#line 356 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = (yyvsp[0].re_node);
}
#line 1615 "hex_grammar.c" /* yacc.c:1661 */
break;
case 18:
#line 360 "hex_grammar.y" /* yacc.c:1661 */
{
mark_as_not_fast_regexp();
incr_ast_levels();
(yyval.re_node) = yr_re_node_create(RE_NODE_ALT, (yyvsp[-2].re_node), (yyvsp[0].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[-2].re_node));
DESTROY_NODE_IF((yyval.re_node) == NULL, (yyvsp[0].re_node));
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
}
#line 1631 "hex_grammar.c" /* yacc.c:1661 */
break;
case 19:
#line 375 "hex_grammar.y" /* yacc.c:1661 */
{
(yyval.re_node) = yr_re_node_create(RE_NODE_LITERAL, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
(yyval.re_node)->value = (int) (yyvsp[0].integer);
}
#line 1643 "hex_grammar.c" /* yacc.c:1661 */
break;
case 20:
#line 383 "hex_grammar.y" /* yacc.c:1661 */
{
uint8_t mask = (uint8_t) ((yyvsp[0].integer) >> 8);
if (mask == 0x00)
{
(yyval.re_node) = yr_re_node_create(RE_NODE_ANY, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
}
else
{
(yyval.re_node) = yr_re_node_create(RE_NODE_MASKED_LITERAL, NULL, NULL);
ERROR_IF((yyval.re_node) == NULL, ERROR_INSUFFICIENT_MEMORY);
(yyval.re_node)->value = (yyvsp[0].integer) & 0xFF;
(yyval.re_node)->mask = mask;
}
}
#line 1667 "hex_grammar.c" /* yacc.c:1661 */
break;
#line 1671 "hex_grammar.c" /* yacc.c:1661 */
default: break;
}
/* User semantic actions sometimes alter yychar, and that requires
that yytoken be updated with the new translation. We take the
approach of translating immediately before every use of yytoken.
One alternative is translating here after every semantic action,
but that translation would be missed if the semantic action invokes
YYABORT, YYACCEPT, or YYERROR immediately after altering yychar or
if it invokes YYBACKUP. In the case of YYABORT or YYACCEPT, an
incorrect destructor might then be invoked immediately. In the
case of YYERROR or YYBACKUP, subsequent parser actions might lead
to an incorrect destructor call or verbose syntax error message
before the lookahead is translated. */
YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc);
YYPOPSTACK (yylen);
yylen = 0;
YY_STACK_PRINT (yyss, yyssp);
*++yyvsp = yyval;
/* Now 'shift' the result of the reduction. Determine what state
that goes to, based on the state we popped back to and the rule
number reduced by. */
yyn = yyr1[yyn];
yystate = yypgoto[yyn - YYNTOKENS] + *yyssp;
if (0 <= yystate && yystate <= YYLAST && yycheck[yystate] == *yyssp)
yystate = yytable[yystate];
else
yystate = yydefgoto[yyn - YYNTOKENS];
goto yynewstate;
/*--------------------------------------.
| yyerrlab -- here on detecting error. |
`--------------------------------------*/
yyerrlab:
/* Make sure we have latest lookahead translation. See comments at
user semantic actions for why this is necessary. */
yytoken = yychar == YYEMPTY ? YYEMPTY : YYTRANSLATE (yychar);
/* If not already recovering from an error, report this error. */
if (!yyerrstatus)
{
++yynerrs;
#if ! YYERROR_VERBOSE
yyerror (yyscanner, lex_env, YY_("syntax error"));
#else
# define YYSYNTAX_ERROR yysyntax_error (&yymsg_alloc, &yymsg, \
yyssp, yytoken)
{
char const *yymsgp = YY_("syntax error");
int yysyntax_error_status;
yysyntax_error_status = YYSYNTAX_ERROR;
if (yysyntax_error_status == 0)
yymsgp = yymsg;
else if (yysyntax_error_status == 1)
{
if (yymsg != yymsgbuf)
YYSTACK_FREE (yymsg);
yymsg = (char *) YYSTACK_ALLOC (yymsg_alloc);
if (!yymsg)
{
yymsg = yymsgbuf;
yymsg_alloc = sizeof yymsgbuf;
yysyntax_error_status = 2;
}
else
{
yysyntax_error_status = YYSYNTAX_ERROR;
yymsgp = yymsg;
}
}
yyerror (yyscanner, lex_env, yymsgp);
if (yysyntax_error_status == 2)
goto yyexhaustedlab;
}
# undef YYSYNTAX_ERROR
#endif
}
if (yyerrstatus == 3)
{
/* If just tried and failed to reuse lookahead token after an
error, discard it. */
if (yychar <= YYEOF)
{
/* Return failure if at end of input. */
if (yychar == YYEOF)
YYABORT;
}
else
{
yydestruct ("Error: discarding",
yytoken, &yylval, yyscanner, lex_env);
yychar = YYEMPTY;
}
}
/* Else will try to reuse lookahead token after shifting the error
token. */
goto yyerrlab1;
/*---------------------------------------------------.
| yyerrorlab -- error raised explicitly by YYERROR. |
`---------------------------------------------------*/
yyerrorlab:
/* Pacify compilers like GCC when the user code never invokes
YYERROR and the label yyerrorlab therefore never appears in user
code. */
if (/*CONSTCOND*/ 0)
goto yyerrorlab;
/* Do not reclaim the symbols of the rule whose action triggered
this YYERROR. */
YYPOPSTACK (yylen);
yylen = 0;
YY_STACK_PRINT (yyss, yyssp);
yystate = *yyssp;
goto yyerrlab1;
/*-------------------------------------------------------------.
| yyerrlab1 -- common code for both syntax error and YYERROR. |
`-------------------------------------------------------------*/
yyerrlab1:
yyerrstatus = 3; /* Each real token shifted decrements this. */
for (;;)
{
yyn = yypact[yystate];
if (!yypact_value_is_default (yyn))
{
yyn += YYTERROR;
if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR)
{
yyn = yytable[yyn];
if (0 < yyn)
break;
}
}
/* Pop the current state because it cannot handle the error token. */
if (yyssp == yyss)
YYABORT;
yydestruct ("Error: popping",
yystos[yystate], yyvsp, yyscanner, lex_env);
YYPOPSTACK (1);
yystate = *yyssp;
YY_STACK_PRINT (yyss, yyssp);
}
YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
*++yyvsp = yylval;
YY_IGNORE_MAYBE_UNINITIALIZED_END
/* Shift the error token. */
YY_SYMBOL_PRINT ("Shifting", yystos[yyn], yyvsp, yylsp);
yystate = yyn;
goto yynewstate;
/*-------------------------------------.
| yyacceptlab -- YYACCEPT comes here. |
`-------------------------------------*/
yyacceptlab:
yyresult = 0;
goto yyreturn;
/*-----------------------------------.
| yyabortlab -- YYABORT comes here. |
`-----------------------------------*/
yyabortlab:
yyresult = 1;
goto yyreturn;
#if !defined yyoverflow || YYERROR_VERBOSE
/*-------------------------------------------------.
| yyexhaustedlab -- memory exhaustion comes here. |
`-------------------------------------------------*/
yyexhaustedlab:
yyerror (yyscanner, lex_env, YY_("memory exhausted"));
yyresult = 2;
/* Fall through. */
#endif
yyreturn:
if (yychar != YYEMPTY)
{
/* Make sure we have latest lookahead translation. See comments at
user semantic actions for why this is necessary. */
yytoken = YYTRANSLATE (yychar);
yydestruct ("Cleanup: discarding lookahead",
yytoken, &yylval, yyscanner, lex_env);
}
/* Do not reclaim the symbols of the rule whose action triggered
this YYABORT or YYACCEPT. */
YYPOPSTACK (yylen);
YY_STACK_PRINT (yyss, yyssp);
while (yyssp != yyss)
{
yydestruct ("Cleanup: popping",
yystos[*yyssp], yyvsp, yyscanner, lex_env);
YYPOPSTACK (1);
}
#ifndef yyoverflow
if (yyss != yyssa)
YYSTACK_FREE (yyss);
#endif
#if YYERROR_VERBOSE
if (yymsg != yymsgbuf)
YYSTACK_FREE (yymsg);
#endif
|
Safe
|
[
"CWE-674",
"CWE-787"
] |
yara
|
10e8bd3071677dd1fa76beeef4bc2fc427cea5e7
|
1.6419945789342986e+38
| 815 |
Fix issue #674 for hex strings.
| 0 |
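The yara fix for issue #674 addresses uncontrolled recursion (CWE-674) on deeply nested hex strings; the grammar's incr_ast_levels() caps how deep the AST may grow. A minimal sketch of the same depth-budget idea on a generic tree walk — the cap and types are illustrative assumptions:

#include <stddef.h>

#define MAX_AST_LEVELS 32   /* assumed cap, mirroring the intent of incr_ast_levels() */

struct ast_node { struct ast_node *left, *right; };

/* Recursive walk with an explicit depth budget: once exhausted, bail out
 * instead of risking stack exhaustion on a maliciously deep tree. */
static int walk(const struct ast_node *n, int depth)
{
    if (n == NULL)
        return 0;
    if (depth >= MAX_AST_LEVELS)
        return -1;                       /* too deep, refuse */
    if (walk(n->left, depth + 1) < 0)
        return -1;
    return walk(n->right, depth + 1);
}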
static int records_match(const char *mboxname,
struct index_record *old,
struct index_record *new)
{
int i;
int match = 1;
int userflags_dirty = 0;
if (old->internaldate != new->internaldate) {
printf("%s uid %u mismatch: internaldate\n",
mboxname, new->uid);
match = 0;
}
if (old->sentdate != new->sentdate) {
printf("%s uid %u mismatch: sentdate\n",
mboxname, new->uid);
match = 0;
}
if (old->size != new->size) {
printf("%s uid %u mismatch: size\n",
mboxname, new->uid);
match = 0;
}
if (old->header_size != new->header_size) {
printf("%s uid %u mismatch: header_size\n",
mboxname, new->uid);
match = 0;
}
if (old->gmtime != new->gmtime) {
printf("%s uid %u mismatch: gmtime\n",
mboxname, new->uid);
match = 0;
}
if (old->savedate != new->savedate) {
printf("%s uid %u mismatch: savedate\n",
mboxname, new->uid);
match = 0;
}
if (old->createdmodseq != new->createdmodseq) {
printf("%s uid %u mismatch: createdmodseq\n",
mboxname, new->uid);
match = 0;
}
if (old->system_flags != new->system_flags) {
printf("%s uid %u mismatch: systemflags\n",
mboxname, new->uid);
match = 0;
}
if (old->internal_flags != new->internal_flags) {
printf("%s uid %u mismatch: internalflags\n",
mboxname, new->uid);
match = 0;
}
for (i = 0; i < MAX_USER_FLAGS/32; i++) {
if (old->user_flags[i] != new->user_flags[i])
userflags_dirty = 1;
}
if (userflags_dirty) {
printf("%s uid %u mismatch: userflags\n",
mboxname, new->uid);
match = 0;
}
if (!message_guid_equal(&old->guid, &new->guid)) {
printf("%s uid %u mismatch: guid\n",
mboxname, new->uid);
match = 0;
}
if (!match) {
syslog(LOG_ERR, "%s uid %u record mismatch, rewriting",
mboxname, new->uid);
}
/* cache issues - don't print, probably just a version
* upgrade... */
if (old->cache_version != new->cache_version) {
match = 0;
}
if (old->cache_crc != new->cache_crc) {
match = 0;
}
if (cache_len(old) != cache_len(new)) {
match = 0;
}
/* only compare cache records if size matches */
else if (memcmp(cache_base(old), cache_base(new), cache_len(new))) {
match = 0;
}
return match;
}
|
Safe
|
[] |
cyrus-imapd
|
1d6d15ee74e11a9bd745e80be69869e5fb8d64d6
|
1.851464332380246e+38
| 91 |
mailbox.c/reconstruct.c: Add mailbox_mbentry_from_path()
| 0 |
int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
struct kvm_mp_state *mp_state)
{
return -EINVAL;
}
|
Safe
|
[
"CWE-399",
"CWE-284"
] |
linux
|
e8180dcaa8470ceca21109f143876fdcd9fe050a
|
4.2089576603458536e+37
| 5 |
ARM: KVM: prevent NULL pointer dereferences with KVM VCPU ioctl
Some ARM KVM VCPU ioctls require the vCPU to be properly initialized
with the KVM_ARM_VCPU_INIT ioctl before being used with further
requests. KVM_RUN checks whether this initialization has been
done, but other ioctls do not.
Namely KVM_GET_REG_LIST will dereference an array with index -1
without initialization and thus leads to a kernel oops.
Fix this by adding checks before executing the ioctl handlers.
[ Removed superfluous comment from static function - Christoffer ]
Changes from v1:
* moved check into a static function with a meaningful name
Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
| 0 |
static int fib6_new_sernum(struct net *net)
{
int new, old;
do {
old = atomic_read(&net->ipv6.fib6_sernum);
new = old < INT_MAX ? old + 1 : 1;
} while (atomic_cmpxchg(&net->ipv6.fib6_sernum,
old, new) != old);
return new;
}
|
Safe
|
[
"CWE-755"
] |
linux
|
7b09c2d052db4b4ad0b27b97918b46a7746966fa
|
1.185892713332203e+38
| 11 |
ipv6: fix a typo in fib6_rule_lookup()
Yi Ren reported an issue discovered by syzkaller, and bisected
to the cited commit.
Many thanks to Yi, this trivial patch does not reflect the patient
work that has been done.
Fixes: d64a1f574a29 ("ipv6: honor RT6_LOOKUP_F_DST_NOREF in rule lookup logic")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Wei Wang <weiwan@google.com>
Bisected-and-reported-by: Yi Ren <c4tren@gmail.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
| 0 |
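fib6_new_sernum() above is a lock-free increment that skips a reserved value by wrapping INT_MAX back to 1. A portable C11 sketch of the same shape, outside the kernel's atomic API:

#include <stdatomic.h>
#include <limits.h>

static _Atomic int sernum = 1;

/* CAS loop that wraps INT_MAX back to 1, so 0 (and negatives) can stay
 * reserved as "no serial number assigned yet". */
static int new_sernum(void)
{
    int old, new;
    do {
        old = atomic_load(&sernum);
        new = old < INT_MAX ? old + 1 : 1;
    } while (!atomic_compare_exchange_weak(&sernum, &old, new));
    return new;
}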
z2restore(i_ctx_t *i_ctx_p)
{
while (gs_gstate_saved(gs_gstate_saved(igs))) {
if (restore_page_device(igs, gs_gstate_saved(igs)))
return push_callout(i_ctx_p, "%restore1pagedevice");
gs_grestore(igs);
}
if (restore_page_device(igs, gs_gstate_saved(igs)))
return push_callout(i_ctx_p, "%restorepagedevice");
return zrestore(i_ctx_p);
}
|
Vulnerable
|
[] |
ghostpdl
|
78911a01b67d590b4a91afac2e8417360b934156
|
3.3785434213330642e+38
| 11 |
Bug 699654: Check the restore operand type
The primary function that implements restore correctly checked its parameter,
but a function that does some preliminary work for the restore (gstate and
device handling) did not check.
So, even though the restore correctly errored out, it left things partially done
and, in particular, the device in partially restored state. Meaning the
LockSafetyParams was not correctly set.
| 1 |
PHP_FUNCTION(xml_set_start_namespace_decl_handler)
{
xml_parser *parser;
zval *pind, **hdl;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "rZ", &pind, &hdl) == FAILURE) {
return;
}
ZEND_FETCH_RESOURCE(parser,xml_parser *, &pind, -1, "XML Parser", le_xml_parser);
xml_set_handler(&parser->startNamespaceDeclHandler, hdl);
XML_SetStartNamespaceDeclHandler(parser->parser, _xml_startNamespaceDeclHandler);
RETVAL_TRUE;
}
|
Safe
|
[
"CWE-787"
] |
php-src
|
7d163e8a0880ae8af2dd869071393e5dc07ef271
|
2.9083361577873745e+38
| 15 |
truncate results at depth of 255 to prevent corruption
| 0 |
static int stgi_interception(struct vcpu_svm *svm)
{
if (nested_svm_check_permissions(svm))
return 1;
svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
skip_emulated_instruction(&svm->vcpu);
kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
enable_gif(svm);
return 1;
}
|
Safe
|
[] |
kvm
|
854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
|
2.1700172799105267e+38
| 13 |
KVM: x86: Check non-canonical addresses upon WRMSR
Upon WRMSR, the CPU should inject #GP if a non-canonical value (address) is
written to certain MSRs. The behavior is "almost" identical for AMD and Intel
(ignoring MSRs that are not implemented in either architecture since they would
anyhow #GP). However, IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if
non-canonical address is written on Intel but not on AMD (which ignores the top
32-bits).
Accordingly, this patch injects a #GP on the MSRs which behave identically on
Intel and AMD. To eliminate the differences between the architectures, the
value which is written to IA32_SYSENTER_ESP and IA32_SYSENTER_EIP is turned to
canonical value before writing instead of injecting a #GP.
Some references from Intel and AMD manuals:
According to Intel SDM description of WRMSR instruction #GP is expected on
WRMSR "If the source register contains a non-canonical address and ECX
specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE,
IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP."
According to the AMD instruction manual:
LSTAR/CSTAR (SYSCALL): "The WRMSR instruction loads the target RIP into the
LSTAR and CSTAR registers. If an RIP written by WRMSR is not in canonical
form, a general-protection exception (#GP) occurs."
IA32_GS_BASE and IA32_FS_BASE (WRFSBASE/WRGSBASE): "The address written to the
base field must be in canonical form or a #GP fault will occur."
IA32_KERNEL_GS_BASE (SWAPGS): "The address stored in the KernelGSbase MSR must
be in canonical form."
This patch fixes CVE-2014-3610.
Cc: stable@vger.kernel.org
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
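The CVE-2014-3610 message turns on whether a written address is canonical. A minimal sketch of the standard canonicality test, assuming 48 virtual address bits (5-level paging would use 57); this is the general technique, not KVM's exact helper:

#include <stdbool.h>
#include <stdint.h>

/* An address is canonical when bits 63..47 are copies of bit 47.
 * Sign-extending from 48 bits and comparing checks exactly that
 * (relies on arithmetic right shift of signed values, which all
 * mainstream compilers provide). */
static bool is_canonical_48(uint64_t addr)
{
    return ((int64_t)(addr << 16) >> 16) == (int64_t)addr;
}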
static void hns_gmac_set_rx_auto_pause_frames(void *mac_drv, u32 newval)
{
struct mac_driver *drv = (struct mac_driver *)mac_drv;
dsaf_set_dev_bit(drv, GMAC_PAUSE_EN_REG,
GMAC_PAUSE_EN_RX_FDFC_B, !!newval);
}
|
Safe
|
[
"CWE-119",
"CWE-703"
] |
linux
|
412b65d15a7f8a93794653968308fc100f2aa87c
|
6.262474717446694e+37
| 7 |
net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like the
following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
autoar_extractor_start_async_thread (GTask *task,
gpointer source_object,
gpointer task_data,
GCancellable *cancellable)
{
AutoarExtractor *self = source_object;
autoar_extractor_run (self);
g_task_return_pointer (task, NULL, g_free);
g_object_unref (self);
g_object_unref (task);
}
|
Safe
|
[
"CWE-22"
] |
gnome-autoar
|
adb067e645732fdbe7103516e506d09eb6a54429
|
1.995456456440783e+38
| 11 |
AutoarExtractor: Do not extract files outside the destination dir
Currently, a malicious archive can cause that the files are extracted
outside of the destination dir. This can happen if the archive contains
a file whose parent is a symbolic link, which points outside of the
destination dir. This is potentially a security threat similar to
CVE-2020-11736. Let's skip such problematic files when extracting.
Fixes: https://gitlab.gnome.org/GNOME/gnome-autoar/-/issues/7
| 0 |
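The gnome-autoar fix skips entries whose parent resolves outside the destination (classic CWE-22). A simplified sketch of the containment check — resolve the already-created parent directory and verify it still lies under the destination, which catches the "parent is a symlink pointing outside" case the message describes. Real code must also handle non-existent components and exact-equality cases:

#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

static bool stays_inside(const char *dest, const char *parent_dir)
{
    char rdest[PATH_MAX], rparent[PATH_MAX];
    if (!realpath(dest, rdest) || !realpath(parent_dir, rparent))
        return false;
    size_t n = strlen(rdest);
    /* rparent must equal rdest or start with "rdest/" */
    return strncmp(rparent, rdest, n) == 0 &&
           (rparent[n] == '\0' || rparent[n] == '/');
}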
void RGWListBucket_ObjStore_S3v2::send_response()
{
if (op_ret < 0) {
set_req_state_err(s, op_ret);
}
dump_errno(s);
// Explicitly use chunked transfer encoding so that we can stream the result
// to the user without having to wait for the full length of it.
end_header(s, this, "application/xml", CHUNKED_TRANSFER_ENCODING);
dump_start(s);
if (op_ret < 0) {
return;
}
if (list_versions) {
send_versioned_response();
return;
}
s->formatter->open_object_section_in_ns("ListBucketResult", XMLNS_AWS_S3);
if (strcasecmp(encoding_type.c_str(), "url") == 0) {
s->formatter->dump_string("EncodingType", "url");
encode_key = true;
}
RGWListBucket_ObjStore_S3::send_common_response();
if (op_ret >= 0) {
vector<rgw_bucket_dir_entry>::iterator iter;
for (iter = objs.begin(); iter != objs.end(); ++iter) {
rgw_obj_key key(iter->key);
s->formatter->open_array_section("Contents");
if (encode_key) {
string key_name;
url_encode(key.name, key_name);
s->formatter->dump_string("Key", key_name);
}
else {
s->formatter->dump_string("Key", key.name);
}
dump_time(s, "LastModified", &iter->meta.mtime);
s->formatter->dump_format("ETag", "\"%s\"", iter->meta.etag.c_str());
s->formatter->dump_int("Size", iter->meta.accounted_size);
auto& storage_class = rgw_placement_rule::get_canonical_storage_class(iter->meta.storage_class);
s->formatter->dump_string("StorageClass", storage_class.c_str());
if (fetchOwner == true) {
dump_owner(s, s->user->user_id, s->user->display_name);
}
if (s->system_request) {
s->formatter->dump_string("RgwxTag", iter->tag);
}
if (iter->meta.appendable) {
s->formatter->dump_string("Type", "Appendable");
} else {
s->formatter->dump_string("Type", "Normal");
}
s->formatter->close_section();
}
}
if (continuation_token_exist) {
s->formatter->dump_string("ContinuationToken", continuation_token);
}
if (is_truncated && !next_marker.empty()) {
s->formatter->dump_string("NextContinuationToken", next_marker.name);
}
s->formatter->dump_int("KeyCount",objs.size());
if (start_after_exist) {
s->formatter->dump_string("StartAfter", startAfter);
}
s->formatter->close_section();
rgw_flush_formatter_and_reset(s, s->formatter);
}
|
Safe
|
[
"CWE-79"
] |
ceph
|
fce0b267446d6f3f631bb4680ebc3527bbbea002
|
2.7795649041192593e+38
| 71 |
rgw: reject unauthenticated response-header actions
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit d8dd5e513c0c62bbd7d3044d7e2eddcd897bd400)
| 0 |
static int ZEND_FASTCALL ZEND_ASSIGN_SPEC_VAR_TMP_HANDLER(ZEND_OPCODE_HANDLER_ARGS)
{
zend_op *opline = EX(opline);
zend_free_op free_op1, free_op2;
zval *value = _get_zval_ptr_tmp(&opline->op2, EX(Ts), &free_op2 TSRMLS_CC);
zval **variable_ptr_ptr = _get_zval_ptr_ptr_var(&opline->op1, EX(Ts), &free_op1 TSRMLS_CC);
if (IS_VAR == IS_VAR && !variable_ptr_ptr) {
if (zend_assign_to_string_offset(&EX_T(opline->op1.u.var), value, IS_TMP_VAR TSRMLS_CC)) {
if (!RETURN_VALUE_UNUSED(&opline->result)) {
EX_T(opline->result.u.var).var.ptr_ptr = &EX_T(opline->result.u.var).var.ptr;
ALLOC_ZVAL(EX_T(opline->result.u.var).var.ptr);
INIT_PZVAL(EX_T(opline->result.u.var).var.ptr);
ZVAL_STRINGL(EX_T(opline->result.u.var).var.ptr, Z_STRVAL_P(EX_T(opline->op1.u.var).str_offset.str)+EX_T(opline->op1.u.var).str_offset.offset, 1, 1);
}
} else if (!RETURN_VALUE_UNUSED(&opline->result)) {
AI_SET_PTR(EX_T(opline->result.u.var).var, EG(uninitialized_zval_ptr));
PZVAL_LOCK(EG(uninitialized_zval_ptr));
}
} else {
value = zend_assign_to_variable(variable_ptr_ptr, value, 1 TSRMLS_CC);
if (!RETURN_VALUE_UNUSED(&opline->result)) {
AI_SET_PTR(EX_T(opline->result.u.var).var, value);
PZVAL_LOCK(value);
}
}
if (free_op1.var) {zval_ptr_dtor(&free_op1.var);};
/* zend_assign_to_variable() always takes care of op2, never free it! */
ZEND_VM_NEXT_OPCODE();
}
|
Safe
|
[] |
php-src
|
ce96fd6b0761d98353761bf78d5bfb55291179fd
|
1.6183533272715958e+38
| 33 |
- fix #39863, do not accept paths with NULL in them. See http://news.php.net/php.internals/50191, trunk will have the patch later (adding a macro and/or changing (some) APIs. Patch by Rasmus
| 0 |
cmsBool HeaderSection(cmsIT8* it8)
{
char VarName[MAXID];
char Buffer[MAXSTR];
KEYVALUE* Key;
while (it8->sy != SEOF &&
it8->sy != SSYNERROR &&
it8->sy != SBEGIN_DATA_FORMAT &&
it8->sy != SBEGIN_DATA) {
switch (it8 -> sy) {
case SKEYWORD:
InSymbol(it8);
if (!GetVal(it8, Buffer, MAXSTR-1, "Keyword expected")) return FALSE;
if (!AddAvailableProperty(it8, Buffer, WRITE_UNCOOKED)) return FALSE;
InSymbol(it8);
break;
case SDATA_FORMAT_ID:
InSymbol(it8);
if (!GetVal(it8, Buffer, MAXSTR-1, "Keyword expected")) return FALSE;
if (!AddAvailableSampleID(it8, Buffer)) return FALSE;
InSymbol(it8);
break;
case SIDENT:
strncpy(VarName, it8->id, MAXID-1);
VarName[MAXID-1] = 0;
if (!IsAvailableOnList(it8-> ValidKeywords, VarName, NULL, &Key)) {
#ifdef CMS_STRICT_CGATS
return SynError(it8, "Undefined keyword '%s'", VarName);
#else
Key = AddAvailableProperty(it8, VarName, WRITE_UNCOOKED);
if (Key == NULL) return FALSE;
#endif
}
InSymbol(it8);
if (!GetVal(it8, Buffer, MAXSTR-1, "Property data expected")) return FALSE;
if(Key->WriteAs != WRITE_PAIR) {
AddToList(it8, &GetTable(it8)->HeaderList, VarName, NULL, Buffer,
(it8->sy == SSTRING) ? WRITE_STRINGIFY : WRITE_UNCOOKED);
}
else {
const char *Subkey;
char *Nextkey;
if (it8->sy != SSTRING)
return SynError(it8, "Invalid value '%s' for property '%s'.", Buffer, VarName);
// chop the string as a list of "subkey, value" pairs, using ';' as a separator
for (Subkey = Buffer; Subkey != NULL; Subkey = Nextkey)
{
char *Value, *temp;
// identify token pair boundary
Nextkey = (char*) strchr(Subkey, ';');
if(Nextkey)
*Nextkey++ = '\0';
// for each pair, split the subkey and the value
Value = (char*) strrchr(Subkey, ',');
if(Value == NULL)
return SynError(it8, "Invalid value for property '%s'.", VarName);
                    // gobble the spaces before the comma, and the comma itself
temp = Value++;
do *temp-- = '\0'; while(temp >= Subkey && *temp == ' ');
// gobble any space at the right
temp = Value + strlen(Value) - 1;
while(*temp == ' ') *temp-- = '\0';
// trim the strings from the left
Subkey += strspn(Subkey, " ");
Value += strspn(Value, " ");
if(Subkey[0] == 0 || Value[0] == 0)
return SynError(it8, "Invalid value for property '%s'.", VarName);
AddToList(it8, &GetTable(it8)->HeaderList, VarName, Subkey, Value, WRITE_PAIR);
}
}
InSymbol(it8);
break;
case SEOLN: break;
default:
return SynError(it8, "expected keyword or identifier");
}
SkipEOLN(it8);
}
return TRUE;
}
|
Safe
|
[] |
Little-CMS
|
65e2f1df3495edc984f7e0d7b7b24e29d851e240
|
7.610141925633553e+37
| 106 |
Fix some warnings from static analysis
| 0 |
dtls1_record_needs_buffering(SSL *s, SSL3_RECORD *rr, unsigned short *priority,
unsigned long *offset)
{
/* alerts are passed up immediately */
if ( rr->type == SSL3_RT_APPLICATION_DATA ||
rr->type == SSL3_RT_ALERT)
return 0;
/* Only need to buffer if a handshake is underway.
* (this implies that Hello Request and Client Hello are passed up
* immediately) */
if ( SSL_in_init(s))
{
unsigned char *data = rr->data;
/* need to extract the HM/CCS sequence number here */
if ( rr->type == SSL3_RT_HANDSHAKE ||
rr->type == SSL3_RT_CHANGE_CIPHER_SPEC)
{
unsigned short seq_num;
struct hm_header_st msg_hdr;
struct ccs_header_st ccs_hdr;
if ( rr->type == SSL3_RT_HANDSHAKE)
{
dtls1_get_message_header(data, &msg_hdr);
seq_num = msg_hdr.seq;
*offset = msg_hdr.frag_off;
}
else
{
dtls1_get_ccs_header(data, &ccs_hdr);
seq_num = ccs_hdr.seq;
*offset = 0;
}
/* this is either a record we're waiting for, or a
* retransmit of something we happened to previously
			 * receive (higher layers will drop the repeat silently) */
if ( seq_num < s->d1->handshake_read_seq)
return 0;
if (rr->type == SSL3_RT_HANDSHAKE &&
seq_num == s->d1->handshake_read_seq &&
msg_hdr.frag_off < s->d1->r_msg_hdr.frag_off)
return 0;
else if ( seq_num == s->d1->handshake_read_seq &&
(rr->type == SSL3_RT_CHANGE_CIPHER_SPEC ||
msg_hdr.frag_off == s->d1->r_msg_hdr.frag_off))
return 0;
else
{
*priority = seq_num;
return 1;
}
}
else /* unknown record type */
return 0;
}
return 0;
}
|
Safe
|
[
"CWE-310"
] |
openssl
|
270881316664396326c461ec7a124aec2c6cc081
|
3.1227663220437916e+37
| 61 |
Add and use a constant-time memcmp.
This change adds CRYPTO_memcmp, which compares two vectors of bytes in
an amount of time that's independent of their contents. It also changes
several MAC compares in the code to use this over the standard memcmp,
which may leak information about the size of a matching prefix.
(cherry picked from commit 2ee798880a246d648ecddadc5b91367bee4a5d98)
Conflicts:
crypto/crypto.h
ssl/t1_lib.c
(cherry picked from commit dc406b59f3169fe191e58906df08dce97edb727c)
Conflicts:
crypto/crypto.h
ssl/d1_pkt.c
ssl/s3_pkt.c
| 0 |
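A minimal sketch of the constant-time comparison the OpenSSL commit adds — OR all byte differences together so the running time does not depend on where the first mismatch occurs, which is the property CRYPTO_memcmp provides over plain memcmp. Illustrative, not the library's exact implementation:

#include <stddef.h>

static int ct_memcmp(const void *a, const void *b, size_t len)
{
    const unsigned char *pa = a, *pb = b;
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= pa[i] ^ pb[i];   /* accumulate, never exit early */
    return diff;                 /* 0 iff equal; nonzero otherwise */
}

Unlike memcmp, this gives no ordering information, which is all a MAC check needs and exactly why it avoids the timing side channel.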
gx_dc_pattern2_can_overlap(const gx_device_color *pdevc)
{
gs_pattern2_instance_t * pinst;
if (pdevc->type != &gx_dc_pattern2)
return false;
pinst = (gs_pattern2_instance_t *)pdevc->ccolor.pattern;
switch (pinst->templat.Shading->head.type) {
case 3: case 6: case 7:
return true;
default:
return false;
}
}
|
Safe
|
[
"CWE-704"
] |
ghostpdl
|
693baf02152119af6e6afd30bb8ec76d14f84bbf
|
2.2441983696327638e+37
| 14 |
PS interpreter - check the Implementation of a Pattern before use
Bug #700141 "Type confusion in setpattern"
As the bug thread says, we were not checking that the Implementation
of a pattern dictionary was a structure type, leading to a crash when
we tried to treat it as one.
Here we make the st_pattern1_instance and st_pattern2_instance
structures public definitions and in zsetcolor we check the object
stored under the Implementation key in the supplied dictionary to see if
its a t_struct or t_astruct type, and if it is that its a
st_pattern1_instance or st_pattern2_instance structure.
If either check fails we throw a typecheck error.
We need to make the st_pattern1_instance and st_pattern2_instance
definitions public as they are defined in the graphics library and we
need to check in the interpreter.
| 0 |
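To illustrate the "check the type tag before casting" pattern this commit describes, here is a self-contained C sketch; the enum, struct, and function names are invented for the example and are not Ghostscript's actual types.
enum obj_type { T_INT, T_STRUCT, T_ASTRUCT };
struct object {
    enum obj_type type;
    void *payload;
};
struct pattern_instance { int shading_type; };
/* Reject anything that is not a structure type before casting; the
 * caller turns a NULL result into a typecheck error. */
static struct pattern_instance *as_pattern(struct object *obj)
{
    if (obj == NULL || (obj->type != T_STRUCT && obj->type != T_ASTRUCT))
        return NULL;
    return (struct pattern_instance *)obj->payload;
}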
MODRET auth_pass(cmd_rec *cmd) {
char *user = NULL;
int res = 0;
if (logged_in)
return PR_ERROR_MSG(cmd, R_503, _("You are already logged in"));
user = pr_table_get(session.notes, "mod_auth.orig-user", NULL);
if (!user) {
(void) pr_table_remove(session.notes, "mod_auth.orig-user", NULL);
(void) pr_table_remove(session.notes, "mod_auth.anon-passwd", NULL);
return PR_ERROR_MSG(cmd, R_503, _("Login with USER first"));
}
/* Clear any potentially cached directory config */
session.anon_config = NULL;
session.dir_config = NULL;
res = setup_env(cmd->tmp_pool, cmd, user, cmd->arg);
if (res == 1) {
config_rec *c = NULL;
c = add_config_param_set(&cmd->server->conf, "authenticated", 1, NULL);
c->argv[0] = pcalloc(c->pool, sizeof(unsigned char));
*((unsigned char *) c->argv[0]) = TRUE;
set_auth_check(NULL);
(void) pr_table_remove(session.notes, "mod_auth.anon-passwd", NULL);
if (session.sf_flags & SF_ANON) {
if (pr_table_add_dup(session.notes, "mod_auth.anon-passwd",
pr_fs_decode_path(cmd->server->pool, cmd->arg), 0) < 0) {
pr_log_debug(DEBUG3,
"error stashing anonymous password in session.notes: %s",
strerror(errno));
}
}
logged_in = 1;
return PR_HANDLED(cmd);
}
(void) pr_table_remove(session.notes, "mod_auth.anon-passwd", NULL);
if (res == 0) {
unsigned int max_logins, *max = NULL;
char *denymsg = NULL;
/* check for AccessDenyMsg */
if ((denymsg = get_param_ptr((session.anon_config ?
session.anon_config->subset : cmd->server->conf),
"AccessDenyMsg", FALSE)) != NULL) {
if (strstr(denymsg, "%u") != NULL) {
denymsg = sreplace(cmd->tmp_pool, denymsg, "%u", user, NULL);
}
}
max = get_param_ptr(main_server->conf, "MaxLoginAttempts", FALSE);
if (max != NULL) {
max_logins = *max;
} else {
max_logins = 3;
}
if (max_logins > 0 &&
++auth_tries >= max_logins) {
if (denymsg) {
pr_response_send(R_530, "%s", denymsg);
} else {
pr_response_send(R_530, "%s", _("Login incorrect."));
}
pr_log_auth(PR_LOG_NOTICE,
"Maximum login attempts (%u) exceeded, connection refused", max_logins);
/* Generate an event about this limit being exceeded. */
pr_event_generate("mod_auth.max-login-attempts", session.c);
pr_session_disconnect(&auth_module, PR_SESS_DISCONNECT_CONFIG_ACL,
"Denied by MaxLoginAttempts");
}
return PR_ERROR_MSG(cmd, R_530, denymsg ? denymsg : _("Login incorrect."));
}
return PR_HANDLED(cmd);
}
|
Safe
|
[
"CWE-59",
"CWE-61"
] |
proftpd
|
ecff21e0d0e84f35c299ef91d7fda088e516d4ed
|
1.7447109116689047e+38
| 92 |
Backporting recursive handling of DefaultRoot path, when AllowChrootSymlinks
is off, to 1.3.5 branch.
| 0 |
int ssl_check_clienthello_tlsext_early(SSL *s)
{
int ret=SSL_TLSEXT_ERR_NOACK;
int al = SSL_AD_UNRECOGNIZED_NAME;
#ifndef OPENSSL_NO_EC
/* The handling of the ECPointFormats extension is done elsewhere, namely in
* ssl3_choose_cipher in s3_lib.c.
*/
/* The handling of the EllipticCurves extension is done elsewhere, namely in
* ssl3_choose_cipher in s3_lib.c.
*/
#endif
if (s->ctx != NULL && s->ctx->tlsext_servername_callback != 0)
ret = s->ctx->tlsext_servername_callback(s, &al, s->ctx->tlsext_servername_arg);
else if (s->initial_ctx != NULL && s->initial_ctx->tlsext_servername_callback != 0)
ret = s->initial_ctx->tlsext_servername_callback(s, &al, s->initial_ctx->tlsext_servername_arg);
#ifdef TLSEXT_TYPE_opaque_prf_input
{
/* This sort of belongs into ssl_prepare_serverhello_tlsext(),
* but we might be sending an alert in response to the client hello,
* so this has to happen here in
* ssl_check_clienthello_tlsext_early(). */
int r = 1;
if (s->ctx->tlsext_opaque_prf_input_callback != 0)
{
r = s->ctx->tlsext_opaque_prf_input_callback(s, NULL, 0, s->ctx->tlsext_opaque_prf_input_callback_arg);
if (!r)
{
ret = SSL_TLSEXT_ERR_ALERT_FATAL;
al = SSL_AD_INTERNAL_ERROR;
goto err;
}
}
if (s->s3->server_opaque_prf_input != NULL) /* shouldn't really happen */
OPENSSL_free(s->s3->server_opaque_prf_input);
s->s3->server_opaque_prf_input = NULL;
if (s->tlsext_opaque_prf_input != NULL)
{
if (s->s3->client_opaque_prf_input != NULL &&
s->s3->client_opaque_prf_input_len == s->tlsext_opaque_prf_input_len)
{
/* can only use this extension if we have a server opaque PRF input
* of the same length as the client opaque PRF input! */
if (s->tlsext_opaque_prf_input_len == 0)
s->s3->server_opaque_prf_input = OPENSSL_malloc(1); /* dummy byte just to get non-NULL */
else
s->s3->server_opaque_prf_input = BUF_memdup(s->tlsext_opaque_prf_input, s->tlsext_opaque_prf_input_len);
if (s->s3->server_opaque_prf_input == NULL)
{
ret = SSL_TLSEXT_ERR_ALERT_FATAL;
al = SSL_AD_INTERNAL_ERROR;
goto err;
}
s->s3->server_opaque_prf_input_len = s->tlsext_opaque_prf_input_len;
}
}
if (r == 2 && s->s3->server_opaque_prf_input == NULL)
{
/* The callback wants to enforce use of the extension,
* but we can't do that with the client opaque PRF input;
* abort the handshake.
*/
ret = SSL_TLSEXT_ERR_ALERT_FATAL;
al = SSL_AD_HANDSHAKE_FAILURE;
}
}
err:
#endif
switch (ret)
{
case SSL_TLSEXT_ERR_ALERT_FATAL:
ssl3_send_alert(s,SSL3_AL_FATAL,al);
return -1;
case SSL_TLSEXT_ERR_ALERT_WARNING:
ssl3_send_alert(s,SSL3_AL_WARNING,al);
return 1;
case SSL_TLSEXT_ERR_NOACK:
s->servername_done=0;
default:
return 1;
}
}
|
Safe
|
[
"CWE-310"
] |
openssl
|
9c00a950604aca819cee977f1dcb4b45f2af3aa6
|
2.834733888199961e+38
| 94 |
Add and use a constant-time memcmp.
This change adds CRYPTO_memcmp, which compares two vectors of bytes in
an amount of time that's independent of their contents. It also changes
several MAC compares in the code to use this over the standard memcmp,
which may leak information about the size of a matching prefix.
(cherry picked from commit 2ee798880a246d648ecddadc5b91367bee4a5d98)
Conflicts:
crypto/crypto.h
ssl/t1_lib.c
| 0 |
bool is_sameZ(const CImg<t>& img) const {
return is_sameZ(img._depth);
}
|
Safe
|
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
|
3.369194956563406e+38
| 3 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file do not exceed file size.
| 0 |
static gint read_all_used_files(GDataInputStream* f, MenuCache* cache,
MenuCacheFileDir*** all_used_files)
{
char *line;
gsize len;
int i, n;
MenuCacheFileDir** dirs;
line = g_data_input_stream_read_line(f, &len, cache->cancellable, NULL);
if(G_UNLIKELY(line == NULL))
return -1;
n = atoi( line );
g_free(line);
if (G_UNLIKELY(n <= 0))
return n;
dirs = g_new0( MenuCacheFileDir *, n );
for( i = 0; i < n; ++i )
{
line = g_data_input_stream_read_line(f, &len, cache->cancellable, NULL);
if(G_UNLIKELY(line == NULL))
{
while (i-- > 0)
menu_cache_file_dir_unref(dirs[i]);
g_free(dirs);
return -1;
}
dirs[i] = g_new(MenuCacheFileDir, 1);
dirs[i]->n_ref = 1;
dirs[i]->dir = line; /* read_line already strips the trailing \n */
}
*all_used_files = dirs;
return n;
}
|
Safe
|
[
"CWE-20"
] |
menu-cache
|
56f66684592abf257c4004e6e1fff041c64a12ce
|
1.759187628239347e+38
| 36 |
Fix potential access violation, use runtime user dir instead of tmp dir.
Note: it limits libmenu-cache compatibility to menu-cached >= 0.7.0.
| 0 |
translated_function_exists(char_u *name, int is_global)
{
if (builtin_function(name, -1))
return has_internal_func(name);
return find_func(name, is_global, NULL) != NULL;
}
|
Safe
|
[
"CWE-416"
] |
vim
|
9c23f9bb5fe435b28245ba8ac65aa0ca6b902c04
|
6.197665237483315e+37
| 6 |
patch 8.2.3902: Vim9: double free with nested :def function
Problem: Vim9: double free with nested :def function.
Solution: Pass "line_to_free" from compile_def_function() and make sure
cmdlinep is valid.
| 0 |
do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud, int perm, struct vc_data *vc)
{
struct unimapdesc tmp;
if (copy_from_user(&tmp, user_ud, sizeof tmp))
return -EFAULT;
switch (cmd) {
case PIO_UNIMAP:
if (!perm)
return -EPERM;
return con_set_unimap(vc, tmp.entry_ct, tmp.entries);
case GIO_UNIMAP:
if (!perm && fg_console != vc->vc_num)
return -EPERM;
return con_get_unimap(vc, tmp.entry_ct, &(user_ud->entry_ct), tmp.entries);
}
return 0;
}
|
Safe
|
[
"CWE-416",
"CWE-362"
] |
linux
|
ca4463bf8438b403596edd0ec961ca0d4fbe0220
|
2.0767764464853375e+38
| 18 |
vt: vt_ioctl: fix VT_DISALLOCATE freeing in-use virtual console
The VT_DISALLOCATE ioctl can free a virtual console while tty_release()
is still running, causing a use-after-free in con_shutdown(). This
occurs because VT_DISALLOCATE considers a virtual console's
'struct vc_data' to be unused as soon as the corresponding tty's
refcount hits 0. But actually it may be still being closed.
Fix this by making vc_data be reference-counted via the embedded
'struct tty_port'. A newly allocated virtual console has refcount 1.
Opening it for the first time increments the refcount to 2. Closing it
for the last time decrements the refcount (in tty_operations::cleanup()
so that it happens late enough), as does VT_DISALLOCATE.
Reproducer:
#include <fcntl.h>
#include <linux/vt.h>
#include <sys/ioctl.h>
#include <unistd.h>
int main()
{
if (fork()) {
for (;;)
close(open("/dev/tty5", O_RDWR));
} else {
int fd = open("/dev/tty10", O_RDWR);
for (;;)
ioctl(fd, VT_DISALLOCATE, 5);
}
}
KASAN report:
BUG: KASAN: use-after-free in con_shutdown+0x76/0x80 drivers/tty/vt/vt.c:3278
Write of size 8 at addr ffff88806a4ec108 by task syz_vt/129
CPU: 0 PID: 129 Comm: syz_vt Not tainted 5.6.0-rc2 #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191223_100556-anatol 04/01/2014
Call Trace:
[...]
con_shutdown+0x76/0x80 drivers/tty/vt/vt.c:3278
release_tty+0xa8/0x410 drivers/tty/tty_io.c:1514
tty_release_struct+0x34/0x50 drivers/tty/tty_io.c:1629
tty_release+0x984/0xed0 drivers/tty/tty_io.c:1789
[...]
Allocated by task 129:
[...]
kzalloc include/linux/slab.h:669 [inline]
vc_allocate drivers/tty/vt/vt.c:1085 [inline]
vc_allocate+0x1ac/0x680 drivers/tty/vt/vt.c:1066
con_install+0x4d/0x3f0 drivers/tty/vt/vt.c:3229
tty_driver_install_tty drivers/tty/tty_io.c:1228 [inline]
tty_init_dev+0x94/0x350 drivers/tty/tty_io.c:1341
tty_open_by_driver drivers/tty/tty_io.c:1987 [inline]
tty_open+0x3ca/0xb30 drivers/tty/tty_io.c:2035
[...]
Freed by task 130:
[...]
kfree+0xbf/0x1e0 mm/slab.c:3757
vt_disallocate drivers/tty/vt/vt_ioctl.c:300 [inline]
vt_ioctl+0x16dc/0x1e30 drivers/tty/vt/vt_ioctl.c:818
tty_ioctl+0x9db/0x11b0 drivers/tty/tty_io.c:2660
[...]
Fixes: 4001d7b7fc27 ("vt: push down the tty lock so we can see what is left to tackle")
Cc: <stable@vger.kernel.org> # v3.4+
Reported-by: syzbot+522643ab5729b0421998@syzkaller.appspotmail.com
Acked-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20200322034305.210082-2-ebiggers@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
Item_bool_rowready_func2* Le_creator::create_swap(THD *thd, Item *a, Item *b) const
{
return new(thd->mem_root) Item_func_ge(thd, b, a);
}
|
Safe
|
[
"CWE-617"
] |
server
|
807945f2eb5fa22e6f233cc17b85a2e141efe2c8
|
8.181126231632393e+37
| 4 |
MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order...
When doing condition pushdown from HAVING into WHERE,
Item_equal::create_pushable_equalities() calls
item->set_extraction_flag(IMMUTABLE_FL) for constant items.
Then, Item::cleanup_excluding_immutables_processor() checks for this flag
to see if it should call item->cleanup() or leave the item as-is.
The failure happens when a constant item has a non-constant one inside it,
like:
(tbl.col=0 AND impossible_cond)
item->walk(cleanup_excluding_immutables_processor) works in a bottom-up
way so it
1. will call Item_func_eq(tbl.col=0)->cleanup()
2. will not call Item_cond_and->cleanup (as the AND is constant)
This creates an item tree where a fixed Item has an un-fixed Item inside
it which eventually causes an assertion failure.
Fixed by introducing this rule: instead of just calling
item->set_extraction_flag(IMMUTABLE_FL);
we call Item::walk() to set the flag for all sub-items of the item.
| 0 |
static void pcan_usb_fd_free(struct peak_usb_device *dev)
{
/* last device: can free shared objects now */
if (!dev->prev_siblings && !dev->next_siblings) {
struct pcan_usb_fd_device *pdev =
container_of(dev, struct pcan_usb_fd_device, dev);
/* free commands buffer */
kfree(pdev->cmd_buffer_addr);
/* free usb interface object */
kfree(pdev->usb_if);
}
}
|
Safe
|
[
"CWE-908"
] |
linux
|
30a8beeb3042f49d0537b7050fd21b490166a3d9
|
1.344625280204115e+38
| 14 |
can: peak_usb: pcan_usb_fd: Fix info-leaks to USB devices
Uninitialized Kernel memory can leak to USB devices.
Fix by using kzalloc() instead of kmalloc() on the affected buffers.
Signed-off-by: Tomas Bortoli <tomasbortoli@gmail.com>
Reported-by: syzbot+513e4d0985298538bf9b@syzkaller.appspotmail.com
Fixes: 0a25e1f4f185 ("can: peak_usb: add support for PEAK new CANFD USB adapters")
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
| 0 |
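A userspace sketch of the defense the commit applies with kzalloc(): zero the buffer at allocation time so bytes the code never writes cannot leak stale heap contents; calloc() stands in for kzalloc() here.
#include <stdlib.h>
unsigned char *alloc_cmd_buffer(size_t len)
{
    /* calloc() returns zeroed memory, so padding and any fields left
     * unwritten before the buffer is handed to the device leak
     * nothing. */
    unsigned char *buf = calloc(1, len);
    if (buf == NULL)
        return NULL;
    /* ... fill in only the fields that are meant to be sent ... */
    return buf;
}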
PHP_FUNCTION(openssl_get_cert_locations)
{
array_init(return_value);
add_assoc_string(return_value, "default_cert_file", (char *) X509_get_default_cert_file());
add_assoc_string(return_value, "default_cert_file_env", (char *) X509_get_default_cert_file_env());
add_assoc_string(return_value, "default_cert_dir", (char *) X509_get_default_cert_dir());
add_assoc_string(return_value, "default_cert_dir_env", (char *) X509_get_default_cert_dir_env());
add_assoc_string(return_value, "default_private_dir", (char *) X509_get_default_private_dir());
add_assoc_string(return_value, "default_default_cert_area", (char *) X509_get_default_cert_area());
add_assoc_string(return_value, "ini_cafile",
zend_ini_string("openssl.cafile", sizeof("openssl.cafile")-1, 0));
add_assoc_string(return_value, "ini_capath",
zend_ini_string("openssl.capath", sizeof("openssl.capath")-1, 0));
}
|
Safe
|
[
"CWE-326"
] |
php-src
|
0216630ea2815a5789a24279a1211ac398d4de79
|
5.881626850391071e+37
| 15 |
Fix bug #79601 (Wrong ciphertext/tag in AES-CCM encryption for a 12 bytes IV)
| 0 |
static int cacheinfo_cpu_pre_down(unsigned int cpu)
{
if (cpumask_test_and_clear_cpu(cpu, &cache_dev_map))
cpu_cache_sysfs_exit(cpu);
free_cache_attributes(cpu);
return 0;
}
|
Safe
|
[
"CWE-787"
] |
linux
|
aa838896d87af561a33ecefea1caa4c15a68bc47
|
1.0215687909836945e+38
| 8 |
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions
Convert the various sprintf family calls in sysfs device show functions
to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety.
Done with:
$ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 .
And cocci script:
$ cat sysfs_emit_dev.cocci
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- strcpy(buf, chr);
+ sysfs_emit(buf, chr);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
- len += scnprintf(buf + len, PAGE_SIZE - len,
+ len += sysfs_emit_at(buf, len,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
...
- strcpy(buf, chr);
- return strlen(buf);
+ return sysfs_emit(buf, chr);
}
Signed-off-by: Joe Perches <joe@perches.com>
Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| 0 |
static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
{
struct inode *inode = file_inode(file);
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_ioctl_qgroup_assign_args *sa;
struct btrfs_trans_handle *trans;
int ret;
int err;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
ret = mnt_want_write_file(file);
if (ret)
return ret;
sa = memdup_user(arg, sizeof(*sa));
if (IS_ERR(sa)) {
ret = PTR_ERR(sa);
goto drop_write;
}
trans = btrfs_join_transaction(root);
if (IS_ERR(trans)) {
ret = PTR_ERR(trans);
goto out;
}
if (sa->assign) {
ret = btrfs_add_qgroup_relation(trans, sa->src, sa->dst);
} else {
ret = btrfs_del_qgroup_relation(trans, sa->src, sa->dst);
}
/* update qgroup status and info */
err = btrfs_run_qgroups(trans);
if (err < 0)
btrfs_handle_fs_error(fs_info, err,
"failed to update qgroup status and info");
err = btrfs_end_transaction(trans);
if (err && !ret)
ret = err;
out:
kfree(sa);
drop_write:
mnt_drop_write_file(file);
return ret;
}
|
Safe
|
[
"CWE-476",
"CWE-284"
] |
linux
|
09ba3bc9dd150457c506e4661380a6183af651c1
|
1.5333328515110985e+38
| 50 |
btrfs: merge btrfs_find_device and find_device
Both btrfs_find_device() and find_device() do the same thing, except
that the latter does not take the seed device into account in the device
scanning context. We can merge them.
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
| 0 |
static void test_select_direct()
{
int rc;
MYSQL_RES *result;
myheader("test_select_direct");
rc= mysql_autocommit(mysql, TRUE);
myquery(rc);
rc= mysql_query(mysql, "DROP TABLE IF EXISTS test_select");
myquery(rc);
rc= mysql_query(mysql, "CREATE TABLE test_select(id int, id1 tinyint, "
" id2 float, "
" id3 double, "
" name varchar(50))");
myquery(rc);
/* insert a row and commit the transaction */
rc= mysql_query(mysql, "INSERT INTO test_select VALUES(10, 5, 2.3, 4.5, 'venu')");
myquery(rc);
rc= mysql_commit(mysql);
myquery(rc);
rc= mysql_query(mysql, "SELECT * FROM test_select");
myquery(rc);
/* get the result */
result= mysql_store_result(mysql);
mytest(result);
(void) my_process_result_set(result);
mysql_free_result(result);
}
|
Safe
|
[
"CWE-284",
"CWE-295"
] |
mysql-server
|
3bd5589e1a5a93f9c224badf983cd65c45215390
|
3.0790587299748283e+38
| 36 |
WL#6791 : Redefine client --ssl option to imply enforced encryption
# Changed the meaning of the --ssl=1 option of all client binaries
to mean force ssl, not try ssl and fail over to unencrypted
# Added a new MYSQL_OPT_SSL_ENFORCE mysql_options()
option to specify that an ssl connection is required.
# Added a new macro SSL_SET_OPTIONS() to the client
SSL handling headers that sets all the relevant SSL options at
once.
# Revamped all of the current native clients to use the new macro
# Removed some Windows line endings.
# Added proper handling of the new option into the ssl helper
headers.
# If SSL is mandatory assume that the media is secure enough
for the sha256 plugin to do unencrypted password exchange even
before establishing a connection.
# Set the default ssl cipher to DHE-RSA-AES256-SHA if none is
specified.
# updated test cases that require a non-default cipher to spawn
a mysql command line tool binary since mysqltest has no support
for specifying ciphers.
# updated the replication slave connection code to always enforce
SSL if any of the SSL config options is present.
# test cases added and updated.
# added a mysql_get_option() API to return mysql_options()
values. Used the new API inside the sha256 plugin.
# Fixed compilation warnings because of unused variables.
# Fixed test failures (mysql_ssl and bug13115401)
# Fixed whitespace issues.
# Fully implemented the mysql_get_option() function.
# Added a test case for mysql_get_option()
# fixed some trailing whitespace issues
# fixed some uint/int warnings in mysql_client_test.c
# removed shared memory option from non-windows get_options
tests
# moved MYSQL_OPT_LOCAL_INFILE to the uint options
| 0 |
static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
{
unsigned int nr_events = 0;
int iters = 0, ret = 0;
/*
* We disallow the app entering submit/complete with polling, but we
* still need to lock the ring to prevent racing with polled issue
* that got punted to a workqueue.
*/
mutex_lock(&ctx->uring_lock);
do {
/*
* Don't enter poll loop if we already have events pending.
* If we do, we can potentially be spinning for commands that
* already triggered a CQE (eg in error).
*/
if (io_cqring_events(ctx, false))
break;
/*
* If a submit got punted to a workqueue, we can have the
* application entering polling for a command before it gets
* issued. That app will hold the uring_lock for the duration
* of the poll right here, so we need to take a breather every
* now and then to ensure that the issue has a chance to add
* the poll to the issued list. Otherwise we can spin here
* forever, while the workqueue is stuck trying to acquire the
* very same mutex.
*/
if (!(++iters & 7)) {
mutex_unlock(&ctx->uring_lock);
io_run_task_work();
mutex_lock(&ctx->uring_lock);
}
ret = io_iopoll_getevents(ctx, &nr_events, min);
if (ret <= 0)
break;
ret = 0;
} while (min && !nr_events && !need_resched());
mutex_unlock(&ctx->uring_lock);
return ret;
}
|
Safe
|
[] |
linux
|
0f2122045b946241a9e549c2a76cea54fa58a7ff
|
7.584508442622594e+36
| 45 |
io_uring: don't rely on weak ->files references
Grab actual references to the files_struct. To avoid circular references
issues due to this, we add a per-task note that keeps track of what
io_uring contexts a task has used. When the tasks execs or exits its
assigned files, we cancel requests based on this tracking.
With that, we can grab proper references to the files table, and no
longer need to rely on stashing away ring_fd and ring_file to check
if the ring_fd may have been closed.
Cc: stable@vger.kernel.org # v5.5+
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
| 0 |
get_option_arg( char* option,
char** *pargv,
char** argend )
{
if ( option[2] == 0 )
{
char** argv = *pargv;
if ( ++argv >= argend )
usage();
option = argv[0];
*pargv = argv;
}
else
option += 2;
return option;
}
|
Safe
|
[
"CWE-120"
] |
freetype2-demos
|
b995299b73ba4cd259f221f500d4e63095508bec
|
2.7548544421232067e+38
| 19 |
Fix Savannah bug #30054.
* src/ftdiff.c, src/ftgrid.c, src/ftmulti.c, src/ftstring.c,
src/ftview.c: Use precision for `%s' where appropriate to avoid
buffer overflows.
| 0 |
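The fix relies on printf-style precision for %s; a standalone sketch of the idiom, with an illustrative buffer size and input:
#include <stdio.h>
int main(void)
{
    char line[32];
    const char *name = "an-oversized-untrusted-name-from-the-command-line";
    /* "%.20s" caps the copy at 20 bytes, so an arbitrarily long input
     * cannot overflow the fixed-size destination. */
    snprintf(line, sizeof(line), "font: %.20s", name);
    puts(line);
    return 0;
}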
static void sig_complete_alias(GList **list, WINDOW_REC *window,
const char *word, const char *line,
int *want_space)
{
const char *definition;
g_return_if_fail(list != NULL);
g_return_if_fail(word != NULL);
g_return_if_fail(line != NULL);
if (*line != '\0') {
if ((definition = alias_find(line)) != NULL) {
*list = g_list_append(NULL, g_strdup(definition));
signal_stop();
}
} else {
*list = completion_get_aliases(word);
if (*list != NULL) signal_stop();
}
}
|
Safe
|
[
"CWE-416"
] |
irssi
|
36564717c9f701e3a339da362ab46d220d27e0c1
|
2.481574811257658e+38
| 20 |
Merge branch 'security' into 'master'
Security
See merge request irssi/irssi!34
(cherry picked from commit b0d9cb33cd9ef9da7c331409e8b7c57a6f3aef3f)
| 0 |
CAMLprim value caml_update_dummy(value dummy, value newval)
{
mlsize_t size, i;
tag_t tag;
size = Wosize_val(newval);
tag = Tag_val (newval);
Assert (size == Wosize_val(dummy));
Assert (tag < No_scan_tag || tag == Double_array_tag);
Tag_val(dummy) = tag;
if (tag == Double_array_tag){
size = Wosize_val (newval) / Double_wosize;
for (i = 0; i < size; i++){
Store_double_field (dummy, i, Double_field (newval, i));
}
}else{
for (i = 0; i < size; i++){
caml_modify (&Field(dummy, i), Field(newval, i));
}
}
return Val_unit;
}
|
Safe
|
[
"CWE-200"
] |
ocaml
|
659615c7b100a89eafe6253e7a5b9d84d0e8df74
|
2.4663697654007203e+38
| 23 |
fix PR#7003 and a few other bugs caused by misuse of Int_val
git-svn-id: http://caml.inria.fr/svn/ocaml/trunk@16525 f963ae5c-01c2-4b8c-9fe0-0dff7051ff02
| 0 |
SPL_METHOD(SplFileObject, setCsvControl)
{
spl_filesystem_object *intern = (spl_filesystem_object*)zend_object_store_get_object(getThis() TSRMLS_CC);
char delimiter = ',', enclosure = '"', escape='\\';
char *delim = NULL, *enclo = NULL, *esc = NULL;
int d_len = 0, e_len = 0, esc_len = 0;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "|sss", &delim, &d_len, &enclo, &e_len, &esc, &esc_len) == SUCCESS) {
switch(ZEND_NUM_ARGS())
{
case 3:
if (esc_len != 1) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "escape must be a character");
RETURN_FALSE;
}
escape = esc[0];
/* no break */
case 2:
if (e_len != 1) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "enclosure must be a character");
RETURN_FALSE;
}
enclosure = enclo[0];
/* no break */
case 1:
if (d_len != 1) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "delimiter must be a character");
RETURN_FALSE;
}
delimiter = delim[0];
/* no break */
case 0:
break;
}
intern->u.file.delimiter = delimiter;
intern->u.file.enclosure = enclosure;
intern->u.file.escape = escape;
}
}
|
Safe
|
[
"CWE-190"
] |
php-src
|
7245bff300d3fa8bacbef7897ff080a6f1c23eba
|
9.647790184066849e+36
| 39 |
Fix bug #72262 - do not overflow int
| 0 |
static int kdb_defcmd(int argc, const char **argv)
{
kdbtab_t *mp;
if (defcmd_in_progress) {
kdb_printf("kdb: nested defcmd detected, assuming missing "
"endefcmd\n");
kdb_defcmd2("endefcmd", "endefcmd");
}
if (argc == 0) {
kdbtab_t *kp;
struct kdb_macro *kmp;
struct kdb_macro_statement *kms;
list_for_each_entry(kp, &kdb_cmds_head, list_node) {
if (kp->func == kdb_exec_defcmd) {
kdb_printf("defcmd %s \"%s\" \"%s\"\n",
kp->name, kp->usage, kp->help);
kmp = container_of(kp, struct kdb_macro, cmd);
list_for_each_entry(kms, &kmp->statements,
list_node)
kdb_printf("%s", kms->statement);
kdb_printf("endefcmd\n");
}
}
return 0;
}
if (argc != 3)
return KDB_ARGCOUNT;
if (in_dbg_master()) {
kdb_printf("Command only available during kdb_init()\n");
return KDB_NOTIMP;
}
kdb_macro = kzalloc(sizeof(*kdb_macro), GFP_KDB);
if (!kdb_macro)
goto fail_defcmd;
mp = &kdb_macro->cmd;
mp->func = kdb_exec_defcmd;
mp->minlen = 0;
mp->flags = KDB_ENABLE_ALWAYS_SAFE;
mp->name = kdb_strdup(argv[1], GFP_KDB);
if (!mp->name)
goto fail_name;
mp->usage = kdb_strdup(argv[2], GFP_KDB);
if (!mp->usage)
goto fail_usage;
mp->help = kdb_strdup(argv[3], GFP_KDB);
if (!mp->help)
goto fail_help;
if (mp->usage[0] == '"') {
strcpy(mp->usage, argv[2]+1);
mp->usage[strlen(mp->usage)-1] = '\0';
}
if (mp->help[0] == '"') {
strcpy(mp->help, argv[3]+1);
mp->help[strlen(mp->help)-1] = '\0';
}
INIT_LIST_HEAD(&kdb_macro->statements);
defcmd_in_progress = true;
return 0;
fail_help:
kfree(mp->usage);
fail_usage:
kfree(mp->name);
fail_name:
kfree(kdb_macro);
fail_defcmd:
kdb_printf("Could not allocate new kdb_macro entry for %s\n", argv[1]);
return KDB_NOTIMP;
}
|
Safe
|
[
"CWE-787"
] |
linux
|
eadb2f47a3ced5c64b23b90fd2a3463f63726066
|
3.295562498943339e+38
| 72 |
lockdown: also lock down previous kgdb use
KGDB and KDB allow read and write access to kernel memory, and thus
should be restricted during lockdown. An attacker with access to a
serial port (for example, via a hypervisor console, which some cloud
vendors provide over the network) could trigger the debugger so it is
important that the debugger respect the lockdown mode when/if it is
triggered.
Fix this by integrating lockdown into kdb's existing permissions
mechanism. Unfortunately kgdb does not have any permissions mechanism
(although it certainly could be added later) so, for now, kgdb is simply
and brutally disabled by immediately exiting the gdb stub without taking
any action.
For lockdowns established early in the boot (e.g. the normal case) this
should be fine, but on systems where kgdb has set breakpoints before
the lockdown is enacted, "bad things" will happen.
CVE: CVE-2022-21499
Co-developed-by: Stephen Brennan <stephen.s.brennan@oracle.com>
Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| 0 |
void fx_TypedArray_prototype_slice(txMachine* the)
{
mxTypedArrayDeclarations;
txInteger delta = dispatch->value.typedArray.dispatch->size;
txInteger start = (txInteger)fxArgToIndex(the, 0, 0, length);
txInteger end = (txInteger)fxArgToIndex(the, 1, length, length);
txInteger count = (end > start) ? end - start : 0;
txInteger index = 0;
fxCreateTypedArraySpecies(the);
mxPushNumber(count);
mxRunCount(1);
mxPullSlot(mxResult);
{
mxResultTypedArrayDeclarations;
if (resultLength < count)
mxTypeError("insufficient buffer");
if (count) {
length = fxCheckDataViewSize(the, view, buffer, XS_IMMUTABLE);
mxPushUndefined();
while ((start < length) && (start < end)) {
(*dispatch->value.typedArray.dispatch->getter)(the, buffer->value.reference->next, view->value.dataView.offset + (start * delta), the->stack, EndianNative);
(*resultDispatch->value.typedArray.dispatch->coerce)(the, the->stack);
(*resultDispatch->value.typedArray.dispatch->setter)(the, resultBuffer->value.reference->next, resultView->value.dataView.offset + (index << resultDispatch->value.typedArray.dispatch->shift), the->stack, EndianNative);
start++;
index++;
}
while (start < end) {
the->stack->kind = XS_UNDEFINED_KIND;
(*resultDispatch->value.typedArray.dispatch->coerce)(the, the->stack);
(*resultDispatch->value.typedArray.dispatch->setter)(the, resultBuffer->value.reference->next, resultView->value.dataView.offset + (index << resultDispatch->value.typedArray.dispatch->shift), the->stack, EndianNative);
start++;
index++;
}
mxPop();
}
}
}
|
Safe
|
[
"CWE-125"
] |
moddable
|
135aa9a4a6a9b49b60aa730ebc3bcc6247d75c45
|
2.013560560262086e+38
| 37 |
XS: #896
| 0 |
int bnx2x_send_final_clnup(struct bnx2x *bp, u8 clnup_func, u32 poll_cnt)
{
u32 op_gen_command = 0;
u32 comp_addr = BAR_CSTRORM_INTMEM +
CSTORM_FINAL_CLEANUP_COMPLETE_OFFSET(clnup_func);
int ret = 0;
if (REG_RD(bp, comp_addr)) {
BNX2X_ERR("Cleanup complete was not 0 before sending\n");
return 1;
}
op_gen_command |= OP_GEN_PARAM(XSTORM_AGG_INT_FINAL_CLEANUP_INDEX);
op_gen_command |= OP_GEN_TYPE(XSTORM_AGG_INT_FINAL_CLEANUP_COMP_TYPE);
op_gen_command |= OP_GEN_AGG_VECT(clnup_func);
op_gen_command |= 1 << SDM_OP_GEN_AGG_VECT_IDX_VALID_SHIFT;
DP(BNX2X_MSG_SP, "sending FW Final cleanup\n");
REG_WR(bp, XSDM_REG_OPERATION_GEN, op_gen_command);
if (bnx2x_flr_clnup_reg_poll(bp, comp_addr, 1, poll_cnt) != 1) {
BNX2X_ERR("FW final cleanup did not succeed\n");
DP(BNX2X_MSG_SP, "At timeout completion address contained %x\n",
(REG_RD(bp, comp_addr)));
bnx2x_panic();
return 1;
}
/* Zero completion for next FLR */
REG_WR(bp, comp_addr, 0);
return ret;
}
|
Safe
|
[
"CWE-20"
] |
linux
|
8914a595110a6eca69a5e275b323f5d09e18f4f9
|
6.302114179449265e+37
| 32 |
bnx2x: disable GSO where gso_size is too big for hardware
If a bnx2x card is passed a GSO packet with a gso_size larger than
~9700 bytes, it will cause a firmware error that will bring the card
down:
bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f0)]MC assert!
bnx2x: [bnx2x_mc_assert:720(enP24p1s0f0)]XSTORM_ASSERT_LIST_INDEX 0x2
bnx2x: [bnx2x_mc_assert:736(enP24p1s0f0)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e43e47 0x00463e01 0x00010052
bnx2x: [bnx2x_mc_assert:750(enP24p1s0f0)]Chip Revision: everest3, FW Version: 7_13_1
... (dump of values continues) ...
Detect when the mac length of a GSO packet is greater than the maximum
packet size (9700 bytes) and disable GSO.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| 0 |
parse_CONJUNCTION(const char *arg, struct ofpbuf *ofpacts,
enum ofputil_protocol *usable_protocols OVS_UNUSED)
{
uint8_t n_clauses;
uint8_t clause;
uint32_t id;
int n;
if (!ovs_scan(arg, "%"SCNi32" , %"SCNu8" / %"SCNu8" %n",
&id, &clause, &n_clauses, &n) || n != strlen(arg)) {
return xstrdup("\"conjunction\" syntax is \"conjunction(id,i/n)\"");
}
if (n_clauses < 2) {
return xstrdup("conjunction must have at least 2 clauses");
} else if (n_clauses > 64) {
return xstrdup("conjunction must have at most 64 clauses");
} else if (clause < 1) {
return xstrdup("clause index must be positive");
} else if (clause > n_clauses) {
return xstrdup("clause index must be less than or equal to "
"number of clauses");
}
add_conjunction(ofpacts, id, clause - 1, n_clauses);
return NULL;
}
|
Safe
|
[
"CWE-125"
] |
ovs
|
9237a63c47bd314b807cda0bd2216264e82edbe8
|
3.3827334437322028e+38
| 27 |
ofp-actions: Avoid buffer overread in BUNDLE action decoding.
Reported-at: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9052
Signed-off-by: Ben Pfaff <blp@ovn.org>
Acked-by: Justin Pettit <jpettit@ovn.org>
| 0 |
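The ovs_scan() call above uses %n to insist that the whole argument was consumed; the same idiom works with plain sscanf(), sketched here with illustrative names:
#include <stdio.h>
#include <string.h>
/* Returns 0 on success, -1 on syntax error. Comparing the %n byte
 * count against strlen() rejects trailing garbage such as
 * "1,2/3junk". */
static int parse_clause(const char *arg, unsigned *id,
                        unsigned *clause, unsigned *n_clauses)
{
    int n = 0;
    if (sscanf(arg, "%u , %u / %u %n", id, clause, n_clauses, &n) != 3
        || (size_t) n != strlen(arg))
        return -1;
    return 0;
}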
static void child_msg_onlinestatus(int msg_type, struct process_id src,
void *buf, size_t len, void *private_data)
{
TALLOC_CTX *mem_ctx;
const char *message;
struct process_id *sender;
DEBUG(5,("winbind_msg_onlinestatus received.\n"));
if (!buf) {
return;
}
sender = (struct process_id *)buf;
mem_ctx = talloc_init("winbind_msg_onlinestatus");
if (mem_ctx == NULL) {
return;
}
message = collect_onlinestatus(mem_ctx);
if (message == NULL) {
talloc_destroy(mem_ctx);
return;
}
message_send_pid(*sender, MSG_WINBIND_ONLINESTATUS,
message, strlen(message) + 1, True);
talloc_destroy(mem_ctx);
}
|
Safe
|
[] |
samba
|
c93d42969451949566327e7fdbf29bfcee2c8319
|
2.4846511880721546e+38
| 31 |
Back-port of Volkers fix.
Fix a race condition in winbind leading to a crash
When SIGCHLD handling is delayed for some reason, sending a request to a child
can fail early because the child has died already. In this case
async_main_request_sent() directly called the continuation function without
properly removing the malfunctioning child process and the requests in the
queue. The next request would then crash in the DLIST_ADD_END() in
async_request() because the request pending for the child had been
talloc_free()'ed and yet still was referenced in the list.
This one is *old*...
Volker
Jeremy.
| 0 |
static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
{
const struct cred *creds = NULL;
int ret;
if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
creds = override_creds(req->creds);
if (!io_op_defs[req->opcode].audit_skip)
audit_uring_entry(req->opcode);
if (unlikely(!io_assign_file(req, issue_flags)))
return -EBADF;
switch (req->opcode) {
case IORING_OP_NOP:
ret = io_nop(req, issue_flags);
break;
case IORING_OP_READV:
case IORING_OP_READ_FIXED:
case IORING_OP_READ:
ret = io_read(req, issue_flags);
break;
case IORING_OP_WRITEV:
case IORING_OP_WRITE_FIXED:
case IORING_OP_WRITE:
ret = io_write(req, issue_flags);
break;
case IORING_OP_FSYNC:
ret = io_fsync(req, issue_flags);
break;
case IORING_OP_POLL_ADD:
ret = io_poll_add(req, issue_flags);
break;
case IORING_OP_POLL_REMOVE:
ret = io_poll_update(req, issue_flags);
break;
case IORING_OP_SYNC_FILE_RANGE:
ret = io_sync_file_range(req, issue_flags);
break;
case IORING_OP_SENDMSG:
ret = io_sendmsg(req, issue_flags);
break;
case IORING_OP_SEND:
ret = io_send(req, issue_flags);
break;
case IORING_OP_RECVMSG:
ret = io_recvmsg(req, issue_flags);
break;
case IORING_OP_RECV:
ret = io_recv(req, issue_flags);
break;
case IORING_OP_TIMEOUT:
ret = io_timeout(req, issue_flags);
break;
case IORING_OP_TIMEOUT_REMOVE:
ret = io_timeout_remove(req, issue_flags);
break;
case IORING_OP_ACCEPT:
ret = io_accept(req, issue_flags);
break;
case IORING_OP_CONNECT:
ret = io_connect(req, issue_flags);
break;
case IORING_OP_ASYNC_CANCEL:
ret = io_async_cancel(req, issue_flags);
break;
case IORING_OP_FALLOCATE:
ret = io_fallocate(req, issue_flags);
break;
case IORING_OP_OPENAT:
ret = io_openat(req, issue_flags);
break;
case IORING_OP_CLOSE:
ret = io_close(req, issue_flags);
break;
case IORING_OP_FILES_UPDATE:
ret = io_files_update(req, issue_flags);
break;
case IORING_OP_STATX:
ret = io_statx(req, issue_flags);
break;
case IORING_OP_FADVISE:
ret = io_fadvise(req, issue_flags);
break;
case IORING_OP_MADVISE:
ret = io_madvise(req, issue_flags);
break;
case IORING_OP_OPENAT2:
ret = io_openat2(req, issue_flags);
break;
case IORING_OP_EPOLL_CTL:
ret = io_epoll_ctl(req, issue_flags);
break;
case IORING_OP_SPLICE:
ret = io_splice(req, issue_flags);
break;
case IORING_OP_PROVIDE_BUFFERS:
ret = io_provide_buffers(req, issue_flags);
break;
case IORING_OP_REMOVE_BUFFERS:
ret = io_remove_buffers(req, issue_flags);
break;
case IORING_OP_TEE:
ret = io_tee(req, issue_flags);
break;
case IORING_OP_SHUTDOWN:
ret = io_shutdown(req, issue_flags);
break;
case IORING_OP_RENAMEAT:
ret = io_renameat(req, issue_flags);
break;
case IORING_OP_UNLINKAT:
ret = io_unlinkat(req, issue_flags);
break;
case IORING_OP_MKDIRAT:
ret = io_mkdirat(req, issue_flags);
break;
case IORING_OP_SYMLINKAT:
ret = io_symlinkat(req, issue_flags);
break;
case IORING_OP_LINKAT:
ret = io_linkat(req, issue_flags);
break;
case IORING_OP_MSG_RING:
ret = io_msg_ring(req, issue_flags);
break;
default:
ret = -EINVAL;
break;
}
if (!io_op_defs[req->opcode].audit_skip)
audit_uring_exit(!ret, ret);
if (creds)
revert_creds(creds);
if (ret)
return ret;
/* If the op doesn't have a file, we're not polling for it */
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && req->file)
io_iopoll_req_issued(req, issue_flags);
return 0;
}
|
Safe
|
[
"CWE-416"
] |
linux
|
e677edbcabee849bfdd43f1602bccbecf736a646
|
7.256051152996242e+37
| 144 |
io_uring: fix race between timeout flush and removal
io_flush_timeouts() assumes the timeout isn't in progress of triggering
or being removed/canceled, so it unconditionally removes it from the
timeout list and attempts to cancel it.
Leave it on the list and let the normal timeout cancelation take care
of it.
Cc: stable@vger.kernel.org # 5.5+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
| 0 |
BGD_DECLARE(void) gdImageSetStyle (gdImagePtr im, int *style, int noOfPixels)
{
if (im->style) {
gdFree (im->style);
}
if (overflow2(sizeof (int), noOfPixels)) {
return;
}
im->style = (int *) gdMalloc (sizeof (int) * noOfPixels);
if (!im->style) {
return;
}
memcpy (im->style, style, sizeof (int) * noOfPixels);
im->styleLength = noOfPixels;
im->stylePos = 0;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
libgd
|
77f619d48259383628c3ec4654b1ad578e9eb40e
|
2.122581487828288e+38
| 16 |
fix #215 gdImageFillToBorder stack-overflow when invalid color is used
| 0 |
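A minimal sketch of the overflow2()-style guard in standard C, assuming only that overflow2(a, b) answers "would a * b wrap?":
#include <stdint.h>
#include <stdlib.h>
/* Reject the multiplication before it can wrap; otherwise a huge
 * count silently becomes a small allocation that the later memcpy
 * overruns. */
static int *alloc_style(size_t count)
{
    if (count > SIZE_MAX / sizeof(int))
        return NULL;
    return malloc(count * sizeof(int));
}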
bool ValidateInput<Variant>(const Tensor& updates) {
return true;
}
|
Safe
|
[
"CWE-369"
] |
tensorflow
|
4aacb30888638da75023e6601149415b39763d76
|
3.877140163368344e+37
| 3 |
Disallow division by zero FPE in `tf.raw_ops.ResourceScatterDiv`
Had to update a test that was broken.
PiperOrigin-RevId: 388516976
Change-Id: Ic358e6bf0559e011539974d453fc7aa18b427e9c
| 0 |
void SetInput(const std::vector<float>& data) {
QuantizeAndPopulate<int16_t>(input_, data);
}
|
Safe
|
[
"CWE-369"
] |
tensorflow
|
5f7975d09eac0f10ed8a17dbb6f5964977725adc
|
2.0203019278411513e+38
| 3 |
Prevent another div by 0 in optimized pooling implementations TFLite
PiperOrigin-RevId: 370800091
Change-Id: I2119352f57fb5ca4f2051e0e2d749403304a979b
| 0 |
static void mdns_mcast_group_ipv6(struct sockaddr_in6 *ret_sa) {
assert(ret_sa);
memset(ret_sa, 0, sizeof(struct sockaddr_in6));
ret_sa->sin6_family = AF_INET6;
ret_sa->sin6_port = htons(AVAHI_MDNS_PORT);
inet_pton(AF_INET6, AVAHI_IPV6_MCAST_GROUP, &ret_sa->sin6_addr);
}
|
Safe
|
[
"CWE-399"
] |
avahi
|
46109dfec75534fe270c0ab902576f685d5ab3a6
|
2.4340744791847902e+38
| 8 |
socket: Still read corrupt packets from the sockets
Else, we end up with an infinite loop with 100% CPU.
http://www.avahi.org/ticket/325
https://bugzilla.redhat.com/show_bug.cgi?id=667187
| 0 |
static int minix_fill_super(struct super_block *s, void *data, int silent)
{
struct buffer_head *bh;
struct buffer_head **map;
struct minix_super_block *ms;
int i, block;
struct inode *root_inode;
struct minix_sb_info *sbi;
sbi = kmalloc(sizeof(struct minix_sb_info), GFP_KERNEL);
if (!sbi)
return -ENOMEM;
s->s_fs_info = sbi;
memset(sbi, 0, sizeof(struct minix_sb_info));
/* N.B. These should be compile-time tests.
Unfortunately that is impossible. */
if (32 != sizeof (struct minix_inode))
panic("bad V1 i-node size");
if (64 != sizeof(struct minix2_inode))
panic("bad V2 i-node size");
if (!sb_set_blocksize(s, BLOCK_SIZE))
goto out_bad_hblock;
if (!(bh = sb_bread(s, 1)))
goto out_bad_sb;
ms = (struct minix_super_block *) bh->b_data;
sbi->s_ms = ms;
sbi->s_sbh = bh;
sbi->s_mount_state = ms->s_state;
sbi->s_ninodes = ms->s_ninodes;
sbi->s_nzones = ms->s_nzones;
sbi->s_imap_blocks = ms->s_imap_blocks;
sbi->s_zmap_blocks = ms->s_zmap_blocks;
sbi->s_firstdatazone = ms->s_firstdatazone;
sbi->s_log_zone_size = ms->s_log_zone_size;
sbi->s_max_size = ms->s_max_size;
s->s_magic = ms->s_magic;
if (s->s_magic == MINIX_SUPER_MAGIC) {
sbi->s_version = MINIX_V1;
sbi->s_dirsize = 16;
sbi->s_namelen = 14;
sbi->s_link_max = MINIX_LINK_MAX;
} else if (s->s_magic == MINIX_SUPER_MAGIC2) {
sbi->s_version = MINIX_V1;
sbi->s_dirsize = 32;
sbi->s_namelen = 30;
sbi->s_link_max = MINIX_LINK_MAX;
} else if (s->s_magic == MINIX2_SUPER_MAGIC) {
sbi->s_version = MINIX_V2;
sbi->s_nzones = ms->s_zones;
sbi->s_dirsize = 16;
sbi->s_namelen = 14;
sbi->s_link_max = MINIX2_LINK_MAX;
} else if (s->s_magic == MINIX2_SUPER_MAGIC2) {
sbi->s_version = MINIX_V2;
sbi->s_nzones = ms->s_zones;
sbi->s_dirsize = 32;
sbi->s_namelen = 30;
sbi->s_link_max = MINIX2_LINK_MAX;
} else
goto out_no_fs;
/*
* Allocate the buffer map to keep the superblock small.
*/
i = (sbi->s_imap_blocks + sbi->s_zmap_blocks) * sizeof(bh);
map = kmalloc(i, GFP_KERNEL);
if (!map)
goto out_no_map;
memset(map, 0, i);
sbi->s_imap = &map[0];
sbi->s_zmap = &map[sbi->s_imap_blocks];
block=2;
for (i=0 ; i < sbi->s_imap_blocks ; i++) {
if (!(sbi->s_imap[i]=sb_bread(s, block)))
goto out_no_bitmap;
block++;
}
for (i=0 ; i < sbi->s_zmap_blocks ; i++) {
if (!(sbi->s_zmap[i]=sb_bread(s, block)))
goto out_no_bitmap;
block++;
}
minix_set_bit(0,sbi->s_imap[0]->b_data);
minix_set_bit(0,sbi->s_zmap[0]->b_data);
/* set up enough so that it can read an inode */
s->s_op = &minix_sops;
root_inode = iget(s, MINIX_ROOT_INO);
if (!root_inode || is_bad_inode(root_inode))
goto out_no_root;
s->s_root = d_alloc_root(root_inode);
if (!s->s_root)
goto out_iput;
if (!NO_TRUNCATE)
s->s_root->d_op = &minix_dentry_operations;
if (!(s->s_flags & MS_RDONLY)) {
ms->s_state &= ~MINIX_VALID_FS;
mark_buffer_dirty(bh);
}
if (!(sbi->s_mount_state & MINIX_VALID_FS))
printk("MINIX-fs: mounting unchecked file system, "
"running fsck is recommended\n");
else if (sbi->s_mount_state & MINIX_ERROR_FS)
printk("MINIX-fs: mounting file system with errors, "
"running fsck is recommended\n");
return 0;
out_iput:
iput(root_inode);
goto out_freemap;
out_no_root:
if (!silent)
printk("MINIX-fs: get root inode failed\n");
goto out_freemap;
out_no_bitmap:
printk("MINIX-fs: bad superblock or unable to read bitmaps\n");
out_freemap:
for (i = 0; i < sbi->s_imap_blocks; i++)
brelse(sbi->s_imap[i]);
for (i = 0; i < sbi->s_zmap_blocks; i++)
brelse(sbi->s_zmap[i]);
kfree(sbi->s_imap);
goto out_release;
out_no_map:
if (!silent)
printk("MINIX-fs: can't allocate map\n");
goto out_release;
out_no_fs:
if (!silent)
printk("VFS: Can't find a Minix or Minix V2 filesystem "
"on device %s\n", s->s_id);
out_release:
brelse(bh);
goto out;
out_bad_hblock:
printk("MINIX-fs: blocksize too small for device\n");
goto out;
out_bad_sb:
printk("MINIX-fs: unable to read superblock\n");
out:
s->s_fs_info = NULL;
kfree(sbi);
return -EINVAL;
}
|
Vulnerable
|
[
"CWE-189"
] |
linux-2.6
|
f5fb09fa3392ad43fbcfc2f4580752f383ab5996
|
2.8725563134003343e+37
| 159 |
[PATCH] Fix for minix crash
Mounting a (corrupt) minix filesystem with zero s_zmap_blocks
gives a spectacular crash on my 2.6.17.8 system, no doubt
because minix/inode.c does an unconditional
minix_set_bit(0,sbi->s_zmap[0]->b_data);
[akpm@osdl.org: make labels conistent while we're there]
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| 1 |
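A sketch of the validation the vulnerable function lacks; the upper bounds are illustrative, the point being that on-disk counts are untrusted and must be checked before s_imap[0] and s_zmap[0] are dereferenced.
/* Zero bitmap blocks would make the unconditional
 * minix_set_bit(0, sbi->s_imap[0]->b_data) walk off an empty map
 * array, which is exactly the crash described above. */
static int minix_counts_ok(unsigned long imap_blocks,
                           unsigned long zmap_blocks)
{
    return imap_blocks >= 1 && zmap_blocks >= 1 &&
           imap_blocks <= 1024 && zmap_blocks <= 1024;
}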
ram_addr_t qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
bool share, const char *mem_path,
Error **errp)
{
RAMBlock *new_block;
ram_addr_t addr;
Error *local_err = NULL;
if (xen_enabled()) {
error_setg(errp, "-mem-path not supported with Xen");
return -1;
}
if (phys_mem_alloc != qemu_anon_ram_alloc) {
/*
* file_ram_alloc() needs to allocate just like
* phys_mem_alloc, but we haven't bothered to provide
* a hook there.
*/
error_setg(errp,
"-mem-path not supported with this accelerator");
return -1;
}
size = TARGET_PAGE_ALIGN(size);
new_block = g_malloc0(sizeof(*new_block));
new_block->mr = mr;
new_block->used_length = size;
new_block->max_length = size;
new_block->flags = share ? RAM_SHARED : 0;
new_block->host = file_ram_alloc(new_block, size,
mem_path, errp);
if (!new_block->host) {
g_free(new_block);
return -1;
}
addr = ram_block_add(new_block, &local_err);
if (local_err) {
g_free(new_block);
error_propagate(errp, local_err);
return -1;
}
return addr;
}
|
Safe
|
[] |
qemu
|
c3c1bb99d1c11978d9ce94d1bdcf0705378c1459
|
8.59148512514728e+37
| 45 |
exec: Respect as_translate_internal length clamp
address_space_translate_internal will clamp the *plen length argument
based on the size of the memory region being queried. The iommu walker
logic in addresss_space_translate was ignoring this by discarding the
post fn call value of *plen. Fix by just always using *plen as the
length argument throughout the fn, removing the len local variable.
This fixes a bootloader bug when a single elf section spans multiple
QEMU memory regions.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-Id: <1426570554-15940-1-git-send-email-peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| 0 |
int visited_count() const { return visited_count_; }
|
Safe
|
[] |
node
|
fd80a31e0697d6317ce8c2d289575399f4e06d21
|
5.590344687585321e+37
| 1 |
deps: backport 5f836c from v8 upstream
Original commit message:
Fix Hydrogen bounds check elimination
When combining bounds checks, they must all be moved before the first load/store
that they are guarding.
BUG=chromium:344186
LOG=y
R=svenpanne@chromium.org
Review URL: https://codereview.chromium.org/172093002
git-svn-id: https://v8.googlecode.com/svn/branches/bleeding_edge@19475 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
fix #8070
| 0 |
static int check_pbase_path(unsigned hash)
{
int pos = (!done_pbase_paths) ? -1 : done_pbase_path_pos(hash);
if (0 <= pos)
return 1;
pos = -pos - 1;
ALLOC_GROW(done_pbase_paths,
done_pbase_paths_num + 1,
done_pbase_paths_alloc);
done_pbase_paths_num++;
if (pos < done_pbase_paths_num)
memmove(done_pbase_paths + pos + 1,
done_pbase_paths + pos,
(done_pbase_paths_num - pos - 1) * sizeof(unsigned));
done_pbase_paths[pos] = hash;
return 0;
}
|
Safe
|
[
"CWE-119",
"CWE-787"
] |
git
|
de1e67d0703894cb6ea782e36abb63976ab07e60
|
3.039455946094695e+38
| 17 |
list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
| 0 |
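The ALLOC_GROW-plus-memmove pattern above generalizes to any sorted array; a self-contained sketch, assuming the caller has already grown the array to hold num + 1 entries:
#include <stddef.h>
#include <string.h>
static void insert_sorted(unsigned *arr, size_t *num, size_t pos,
                          unsigned value)
{
    /* Shift the tail [pos, num) up one slot, then fill the gap. */
    memmove(arr + pos + 1, arr + pos, (*num - pos) * sizeof(*arr));
    arr[pos] = value;
    (*num)++;
}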
bool PDFDoc::markAnnotations(Object *annotsObj, XRef *xRef, XRef *countRef, unsigned int numOffset, int oldPageNum, int newPageNum, std::set<Dict*> *alreadyMarkedDicts) {
bool modified = false;
Object annots = annotsObj->fetch(getXRef());
if (annots.isArray()) {
Array *array = annots.getArray();
for (int i=array->getLength() - 1; i >= 0; i--) {
Object obj1 = array->get(i);
if (obj1.isDict()) {
Dict *dict = obj1.getDict();
Object type = dict->lookup("Type");
if (type.isName() && strcmp(type.getName(), "Annot") == 0) {
Object obj2 = dict->lookupNF("P");
if (obj2.isRef()) {
if (obj2.getRef().num == oldPageNum) {
Object obj3 = array->getNF(i);
if (obj3.isRef()) {
dict->set("P", Object(newPageNum, 0));
getXRef()->setModifiedObject(&obj1, obj3.getRef());
}
} else if (obj2.getRef().num == newPageNum) {
continue;
} else {
Object page = getXRef()->fetch(obj2.getRef().num, obj2.getRef().gen);
if (page.isDict()) {
Dict *pageDict = page.getDict();
Object pagetype = pageDict->lookup("Type");
if (!pagetype.isName() || strcmp(pagetype.getName(), "Page") != 0) {
continue;
}
}
array->remove(i);
modified = true;
continue;
}
}
}
markPageObjects(dict, xRef, countRef, numOffset, oldPageNum, newPageNum, alreadyMarkedDicts);
}
obj1 = array->getNF(i);
if (obj1.isRef()) {
if (obj1.getRef().num + (int) numOffset >= xRef->getNumObjects() || xRef->getEntry(obj1.getRef().num + numOffset)->type == xrefEntryFree) {
if (getXRef()->getEntry(obj1.getRef().num)->type == xrefEntryFree) {
continue; // already marked as free => should be replaced
}
xRef->add(obj1.getRef().num + numOffset, obj1.getRef().gen, 0, true);
if (getXRef()->getEntry(obj1.getRef().num)->type == xrefEntryCompressed) {
xRef->getEntry(obj1.getRef().num + numOffset)->type = xrefEntryCompressed;
}
}
if (obj1.getRef().num + (int) numOffset >= countRef->getNumObjects() ||
countRef->getEntry(obj1.getRef().num + numOffset)->type == xrefEntryFree)
{
countRef->add(obj1.getRef().num + numOffset, 1, 0, true);
} else {
XRefEntry *entry = countRef->getEntry(obj1.getRef().num + numOffset);
entry->gen++;
}
}
}
}
if (annotsObj->isRef()) {
if (annotsObj->getRef().num + (int) numOffset >= xRef->getNumObjects() || xRef->getEntry(annotsObj->getRef().num + numOffset)->type == xrefEntryFree) {
if (getXRef()->getEntry(annotsObj->getRef().num)->type == xrefEntryFree) {
return modified; // already marked as free => should be replaced
}
xRef->add(annotsObj->getRef().num + numOffset, annotsObj->getRef().gen, 0, true);
if (getXRef()->getEntry(annotsObj->getRef().num)->type == xrefEntryCompressed) {
xRef->getEntry(annotsObj->getRef().num + numOffset)->type = xrefEntryCompressed;
}
}
if (annotsObj->getRef().num + (int) numOffset >= countRef->getNumObjects() ||
countRef->getEntry(annotsObj->getRef().num + numOffset)->type == xrefEntryFree)
{
countRef->add(annotsObj->getRef().num + numOffset, 1, 0, true);
} else {
XRefEntry *entry = countRef->getEntry(annotsObj->getRef().num + numOffset);
entry->gen++;
}
getXRef()->setModifiedObject(&annots, annotsObj->getRef());
}
return modified;
}
|
Safe
|
[
"CWE-20"
] |
poppler
|
9fd5ec0e6e5f763b190f2a55ceb5427cfe851d5f
|
2.9291413103197082e+38
| 82 |
PDFDoc::setup: Fix return value
At that point the xref can have gone wrong, since extractPDFSubtype() can
have caused a reconstruct that broke things, so instead of unconditionally
returning true, return xref->isOk().
Fixes #706
| 0 |
static void lo_readdir(fuse_req_t req, fuse_ino_t ino, size_t size,
off_t offset, struct fuse_file_info *fi)
{
lo_do_readdir(req, ino, size, offset, fi, 0);
}
|
Safe
|
[] |
qemu
|
6084633dff3a05d63176e06d7012c7e15aba15be
|
3.3572795156385228e+38
| 5 |
tools/virtiofsd: xattr name mappings: Add option
Add an option to define mappings of xattr names so that
the client and server filesystems see different views.
This can be used to have different SELinux mappings as
seen by the guest, to run the virtiofsd with less privileges
(e.g. in a case where it can't set trusted/system/security
xattrs but you want the guest to be able to), or to isolate
multiple users of the same name; e.g. trusted attributes
used by stacking overlayfs.
A mapping engine is used with 3 simple rules; the rules can
be combined to allow most useful mapping scenarios.
The ruleset is defined by -o xattrmap='rules...'.
This patch doesn't use the rule maps yet.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20201023165812.36028-2-dgilbert@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
| 0 |
gst_matroska_demux_sink_activate_mode (GstPad * sinkpad, GstObject * parent,
GstPadMode mode, gboolean active)
{
switch (mode) {
case GST_PAD_MODE_PULL:
if (active) {
/* if we have a scheduler we can start the task */
gst_pad_start_task (sinkpad, (GstTaskFunction) gst_matroska_demux_loop,
sinkpad, NULL);
} else {
gst_pad_stop_task (sinkpad);
}
return TRUE;
case GST_PAD_MODE_PUSH:
return TRUE;
default:
return FALSE;
}
}
|
Safe
|
[] |
gst-plugins-good
|
9181191511f9c0be6a89c98b311f49d66bd46dc3
|
2.5960924721017537e+38
| 19 |
matroskademux: Fix extraction of multichannel WavPack
The old code had a couple of issues that all lead to potential memory
safety bugs.
- Use a constant for the Wavpack4Header size instead of using sizeof.
The size is serialized into the data stream rather than taken from the
struct, and compilers may add their own alignment/padding to the struct.
- gst_buffer_set_size() does not realloc the buffer when setting a
bigger size than allocated, it only allows growing up to the maximum
allocated size. Instead use a GstAdapter to collect all the blocks
and take out everything at once in the end.
- Check that enough data is actually available in the input and
otherwise handle it an error in all cases instead of silently
ignoring it.
Among other things this fixes out of bounds writes because the code
assumed gst_buffer_set_size() can grow the buffer and simply wrote after
the end of the buffer.
Thanks to Natalie Silvanovich for reporting.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/859
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/903>
| 0 |
int db_rule_add(struct db_filter *db, const struct db_api_rule_list *rule)
{
int rc = -ENOMEM;
struct db_sys_list *s_new, *s_iter, *s_prev = NULL;
struct db_iter_state state;
bool rm_flag = false;
assert(db != NULL);
/* do all our possible memory allocation up front so we don't have to
* worry about failure once we get to the point where we start updating
* the filter db */
if (db->arch->size == ARCH_SIZE_64)
s_new = _db_rule_gen_64(db->arch, rule);
else if (db->arch->size == ARCH_SIZE_32)
s_new = _db_rule_gen_32(db->arch, rule);
else
return -EFAULT;
if (s_new == NULL)
return -ENOMEM;
/* find a matching syscall/chain or insert a new one */
s_iter = db->syscalls;
while (s_iter != NULL && s_iter->num < rule->syscall) {
s_prev = s_iter;
s_iter = s_iter->next;
}
s_new->priority = _DB_PRI_MASK_CHAIN - s_new->node_cnt;
add_reset:
if (s_iter == NULL || s_iter->num != rule->syscall) {
/* new syscall, add before s_iter */
if (s_prev != NULL) {
s_new->next = s_prev->next;
s_prev->next = s_new;
} else {
s_new->next = db->syscalls;
db->syscalls = s_new;
}
return 0;
} else if (s_iter->chains == NULL) {
if (rm_flag || !s_iter->valid) {
/* we are here because our previous pass cleared the
* entire syscall chain when searching for a subtree
* match or the existing syscall entry is a phantom,
* so either way add the new chain */
s_iter->chains = s_new->chains;
s_iter->action = s_new->action;
s_iter->node_cnt = s_new->node_cnt;
if (s_iter->valid)
s_iter->priority = s_new->priority;
s_iter->valid = true;
free(s_new);
rc = 0;
goto add_priority_update;
} else {
/* syscall exists without any chains - existing filter
* is at least as large as the new entry so cleanup and
* exit */
_db_tree_put(&s_new->chains);
free(s_new);
goto add_free_ok;
}
} else if (s_iter->chains != NULL && s_new->chains == NULL) {
/* syscall exists with chains but the new filter has no chains
* so we need to clear the existing chains and exit */
_db_tree_put(&s_iter->chains);
s_iter->chains = NULL;
s_iter->node_cnt = 0;
s_iter->action = rule->action;
/* cleanup the new tree and return */
_db_tree_put(&s_new->chains);
free(s_new);
goto add_free_ok;
}
/* prune any sub-trees that are no longer required */
memset(&state, 0, sizeof(state));
state.sx = s_iter;
state.action = rule->action;
rc = _db_tree_prune(&s_iter->chains, s_new->chains, &state);
if (rc > 0) {
/* we pruned at least some of the existing tree */
rm_flag = true;
s_iter->node_cnt -= rc;
if (s_iter->chains == NULL)
/* we pruned the entire tree */
goto add_reset;
} else if ((state.flags & _DB_IST_M_REDUNDANT) == _DB_IST_M_REDUNDANT) {
/* the existing tree is "shorter", drop the new one */
_db_tree_put(&s_new->chains);
free(s_new);
goto add_free_ok;
}
/* add the new rule to the existing filter and cleanup */
memset(&state, 0, sizeof(state));
state.sx = s_iter;
rc = _db_tree_add(&s_iter->chains, s_new->chains, &state);
if (rc < 0)
goto add_failure;
s_iter->node_cnt += s_new->node_cnt;
s_iter->node_cnt -= _db_tree_put(&s_new->chains);
free(s_new);
add_free_ok:
rc = 0;
add_priority_update:
/* update the priority */
if (s_iter != NULL) {
s_iter->priority &= (~_DB_PRI_MASK_CHAIN);
s_iter->priority |= (_DB_PRI_MASK_CHAIN - s_iter->node_cnt);
}
return rc;
add_failure:
/* NOTE: another reminder that we don't do any db error recovery here,
* use the transaction mechanism as previously mentioned */
_db_tree_put(&s_new->chains);
free(s_new);
return rc;
}
|
Safe
|
[] |
libseccomp
|
c5bf78de480b32b324e0f511c88ce533ed280b37
|
1.2824292305621455e+38
| 122 |
db: fix 64-bit argument comparisons
Our approach to doing 64-bit comparisons using 32-bit operators was
just plain wrong, leading to a number of potential problems with
filters that used the LT, GT, LE, or GE operators. This patch fixes
this problem and a few other related issues that came to light in
the course of fixing the core problem.
A special thanks to Jann Horn for bringing this problem to our
attention.
Signed-off-by: Paul Moore <paul@paul-moore.com>
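For reference, a correct decomposition of a 64-bit greater-than into 32-bit word comparisons looks like the sketch below (an illustrative helper, not the libseccomp BPF generator): the high words decide the result unless they are equal, in which case the low words decide.
```c
#include <stdbool.h>
#include <stdint.h>

/* A > B over 64 bits using only 32-bit comparisons. */
static bool u64_gt_via_u32(uint64_t a, uint64_t b)
{
    uint32_t a_hi = (uint32_t)(a >> 32), a_lo = (uint32_t)a;
    uint32_t b_hi = (uint32_t)(b >> 32), b_lo = (uint32_t)b;

    if (a_hi != b_hi)
        return a_hi > b_hi;
    return a_lo > b_lo;
}
```
The same hi-then-lo structure extends to LT, LE, and GE; the bug class the commit describes comes from treating the two 32-bit halves as independent comparisons instead of chaining them this way.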
| 0 |
TEST_F(OwnedImplTest, ReserveZeroCommit) {
BufferFragmentImpl frag("", 0, nullptr);
Buffer::OwnedImpl buf;
buf.addBufferFragment(frag);
buf.prepend("bbbbb");
buf.add("");
constexpr uint32_t reserve_slices = 16;
Buffer::RawSlice slices[reserve_slices];
const uint32_t allocated_slices = buf.reserve(1280, slices, reserve_slices);
for (uint32_t i = 0; i < allocated_slices; ++i) {
slices[i].len_ = 0;
}
buf.commit(slices, allocated_slices);
os_fd_t pipe_fds[2] = {0, 0};
auto& os_sys_calls = Api::OsSysCallsSingleton::get();
#ifdef WIN32
ASSERT_EQ(os_sys_calls.socketpair(AF_INET, SOCK_STREAM, 0, pipe_fds).rc_, 0);
#else
ASSERT_EQ(pipe(pipe_fds), 0);
#endif
Network::IoSocketHandleImpl io_handle(pipe_fds[0]);
ASSERT_EQ(os_sys_calls.setsocketblocking(pipe_fds[0], false).rc_, 0);
ASSERT_EQ(os_sys_calls.setsocketblocking(pipe_fds[1], false).rc_, 0);
const uint32_t max_length = 1953;
std::string data(max_length, 'e');
const ssize_t rc = os_sys_calls.write(pipe_fds[1], data.data(), max_length).rc_;
ASSERT_GT(rc, 0);
const uint32_t previous_length = buf.length();
Api::IoCallUint64Result result = buf.read(io_handle, max_length);
ASSERT_EQ(result.rc_, static_cast<uint64_t>(rc));
ASSERT_EQ(os_sys_calls.close(pipe_fds[1]).rc_, 0);
ASSERT_EQ(previous_length, buf.search(data.data(), rc, previous_length));
EXPECT_EQ("bbbbb", buf.toString().substr(0, 5));
expectSlices({{5, 0, 4056}, {1953, 2103, 4056}}, buf);
}
|
Vulnerable
|
[
"CWE-401"
] |
envoy
|
5eba69a1f375413fb93fab4173f9c393ac8c2818
|
2.32541769206316e+37
| 35 |
[buffer] Add on-drain hook to buffer API and use it to avoid fragmentation due to tracking of H2 data and control frames in the output buffer (#144)
Signed-off-by: antonio <avd@google.com>
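As a rough illustration of the on-drain hook idea (a hypothetical C sketch, not Envoy's C++ Buffer API): a callback registered on the buffer fires whenever bytes are drained, so per-frame accounting can be released as data leaves the buffer instead of pinning slices and fragmenting it.
```c
#include <stddef.h>

/* Hypothetical drain hook: invoked with the number of bytes removed. */
typedef void (*drain_hook_fn)(size_t drained, void *ctx);

struct tracked_buffer {
    size_t length;          /* bytes currently buffered */
    drain_hook_fn on_drain; /* optional observer */
    void *on_drain_ctx;
};

/* Drain up to n bytes and notify the registered hook, if any. */
static void buffer_drain(struct tracked_buffer *buf, size_t n)
{
    if (n > buf->length)
        n = buf->length;
    buf->length -= n;
    if (buf->on_drain != NULL && n > 0)
        buf->on_drain(n, buf->on_drain_ctx);
}
```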
| 1 |
gen_B(codegen_scope *s, uint8_t i)
{
emit_B(s, s->pc, i);
s->pc++;
}
|
Safe
|
[
"CWE-415",
"CWE-122"
] |
mruby
|
38b164ace7d6ae1c367883a3d67d7f559783faad
|
2.655910734130095e+38
| 5 |
codegen.c: fix a bug in `gen_values()`.
- Fix limit handling that fails method calls with 15 arguments.
- Fix arguments being packed into arrays too early.
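The 15-argument boundary suggests a calling convention where the argument count is carried in 4 bits, with the maximum value reserved as a "packed into an array" sentinel; whether mruby uses exactly this encoding here is an assumption, and the sketch below only illustrates the kind of boundary check involved.
```c
/* Illustrative only: assumes a 4-bit argc field where the maximum value
 * is a sentinel meaning "arguments were packed into one array". */
#define ARGC_BITS   4
#define ARGC_PACKED ((1 << ARGC_BITS) - 1)   /* 15 */

static int encode_argc(int nargs)
{
    /* A check written as `nargs > ARGC_PACKED` lets a literal 15 through
     * to collide with the sentinel; packing must start at 15 itself. */
    return (nargs >= ARGC_PACKED) ? ARGC_PACKED : nargs;
}
```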
| 0 |