CVE ID | CVE Page | CWE ID | codeLink | commit_id | commit_message | func_after | func_before | lang | project | vul
---|---|---|---|---|---|---|---|---|---|---|
CVE-2011-3045
|
https://www.cvedetails.com/cve/CVE-2011-3045/
|
CWE-189
|
https://github.com/chromium/chromium/commit/4cf106cdb83dd6b35d3b26d06cc67d1d2d99041e
|
4cf106cdb83dd6b35d3b26d06cc67d1d2d99041e
|
Pull follow-up tweak from upstream.
BUG=116162
Review URL: http://codereview.chromium.org/9546033
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@125311 0039d316-1c4b-4281-b951-d872f2087c98
|
png_handle_tIME(png_structp png_ptr, png_infop info_ptr, png_uint_32 length)
{
png_byte buf[7];
png_time mod_time;
png_debug(1, "in png_handle_tIME");
if (!(png_ptr->mode & PNG_HAVE_IHDR))
png_error(png_ptr, "Out of place tIME chunk");
else if (info_ptr != NULL && (info_ptr->valid & PNG_INFO_tIME))
{
png_warning(png_ptr, "Duplicate tIME chunk");
png_crc_finish(png_ptr, length);
return;
}
if (png_ptr->mode & PNG_HAVE_IDAT)
png_ptr->mode |= PNG_AFTER_IDAT;
if (length != 7)
{
png_warning(png_ptr, "Incorrect tIME chunk length");
png_crc_finish(png_ptr, length);
return;
}
png_crc_read(png_ptr, buf, 7);
if (png_crc_finish(png_ptr, 0))
return;
mod_time.second = buf[6];
mod_time.minute = buf[5];
mod_time.hour = buf[4];
mod_time.day = buf[3];
mod_time.month = buf[2];
mod_time.year = png_get_uint_16(buf);
png_set_tIME(png_ptr, info_ptr, &mod_time);
}
|
png_handle_tIME(png_structp png_ptr, png_infop info_ptr, png_uint_32 length)
{
png_byte buf[7];
png_time mod_time;
png_debug(1, "in png_handle_tIME");
if (!(png_ptr->mode & PNG_HAVE_IHDR))
png_error(png_ptr, "Out of place tIME chunk");
else if (info_ptr != NULL && (info_ptr->valid & PNG_INFO_tIME))
{
png_warning(png_ptr, "Duplicate tIME chunk");
png_crc_finish(png_ptr, length);
return;
}
if (png_ptr->mode & PNG_HAVE_IDAT)
png_ptr->mode |= PNG_AFTER_IDAT;
if (length != 7)
{
png_warning(png_ptr, "Incorrect tIME chunk length");
png_crc_finish(png_ptr, length);
return;
}
png_crc_read(png_ptr, buf, 7);
if (png_crc_finish(png_ptr, 0))
return;
mod_time.second = buf[6];
mod_time.minute = buf[5];
mod_time.hour = buf[4];
mod_time.day = buf[3];
mod_time.month = buf[2];
mod_time.year = png_get_uint_16(buf);
png_set_tIME(png_ptr, info_ptr, &mod_time);
}
|
C
|
Chrome
| 0 |
CVE-2015-1280
|
https://www.cvedetails.com/cve/CVE-2015-1280/
|
CWE-119
|
https://github.com/chromium/chromium/commit/bc1f34b9be509f1404f0bb1ba1947614d5f0bcd1
|
bc1f34b9be509f1404f0bb1ba1947614d5f0bcd1
|
media: Support hosting mojo CDM in a standalone service
Currently when mojo CDM is enabled it is hosted in the MediaService
running in the process specified by "mojo_media_host". However, on
some platforms we need to run mojo CDM and other mojo media services in
different processes. For example, on desktop platforms, we want to run
mojo video decoder in the GPU process, but run the mojo CDM in the
utility process.
This CL adds a new build flag "enable_standalone_cdm_service". When
enabled, the mojo CDM service will be hosted in a standalone "cdm"
service running in the utility process. All other mojo media services
will still be hosted in the "media" service running in the process
specified by "mojo_media_host".
BUG=664364
TEST=Encrypted media browser tests using mojo CDM is still working.
Change-Id: I95be6e05adc9ebcff966b26958ef1d7becdfb487
Reviewed-on: https://chromium-review.googlesource.com/567172
Commit-Queue: Xiaohan Wang <xhwang@chromium.org>
Reviewed-by: Daniel Cheng <dcheng@chromium.org>
Reviewed-by: John Abd-El-Malek <jam@chromium.org>
Reviewed-by: Dan Sanders <sandersd@chromium.org>
Cr-Commit-Position: refs/heads/master@{#486947}
|
void ServiceManagerConnectionImpl::RemoveConnectionFilter(int filter_id) {
context_->RemoveConnectionFilter(filter_id);
}
|
void ServiceManagerConnectionImpl::RemoveConnectionFilter(int filter_id) {
context_->RemoveConnectionFilter(filter_id);
}
|
C
|
Chrome
| 0 |
CVE-2013-0281
|
https://www.cvedetails.com/cve/CVE-2013-0281/
|
CWE-399
|
https://github.com/ClusterLabs/pacemaker/commit/564f7cc2a51dcd2f28ab12a13394f31be5aa3c93
|
564f7cc2a51dcd2f28ab12a13394f31be5aa3c93
|
High: core: Internal tls api improvements for reuse with future LRMD tls backend.
|
child_timeout_callback(gpointer p)
{
mainloop_child_t *child = p;
child->timerid = 0;
if (child->timeout) {
crm_crit("%s process (PID %d) will not die!", child->desc, (int)child->pid);
return FALSE;
}
child->timeout = TRUE;
crm_warn("%s process (PID %d) timed out", child->desc, (int)child->pid);
if (kill(child->pid, SIGKILL) < 0) {
if (errno == ESRCH) {
/* Nothing left to do */
return FALSE;
}
crm_perror(LOG_ERR, "kill(%d, KILL) failed", child->pid);
}
child->timerid = g_timeout_add(5000, child_timeout_callback, child);
return FALSE;
}
|
child_timeout_callback(gpointer p)
{
mainloop_child_t *child = p;
child->timerid = 0;
if (child->timeout) {
crm_crit("%s process (PID %d) will not die!", child->desc, (int)child->pid);
return FALSE;
}
child->timeout = TRUE;
crm_warn("%s process (PID %d) timed out", child->desc, (int)child->pid);
if (kill(child->pid, SIGKILL) < 0) {
if (errno == ESRCH) {
/* Nothing left to do */
return FALSE;
}
crm_perror(LOG_ERR, "kill(%d, KILL) failed", child->pid);
}
child->timerid = g_timeout_add(5000, child_timeout_callback, child);
return FALSE;
}
|
C
|
pacemaker
| 0 |
CVE-2011-4324
|
https://www.cvedetails.com/cve/CVE-2011-4324/
| null |
https://github.com/torvalds/linux/commit/dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
|
dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
|
NFSv4: Convert the open and close ops to use fmode
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
u64 nfs_compat_user_ino64(u64 fileid)
{
int ino;
if (enable_ino64)
return fileid;
ino = fileid;
if (sizeof(ino) < sizeof(fileid))
ino ^= fileid >> (sizeof(fileid)-sizeof(ino)) * 8;
return ino;
}
|
u64 nfs_compat_user_ino64(u64 fileid)
{
int ino;
if (enable_ino64)
return fileid;
ino = fileid;
if (sizeof(ino) < sizeof(fileid))
ino ^= fileid >> (sizeof(fileid)-sizeof(ino)) * 8;
return ino;
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/1161a49d663dd395bd639549c2dfe7324f847938
|
1161a49d663dd395bd639549c2dfe7324f847938
|
Don't populate URL data in WebDropData when dragging files.
This is considered a potential security issue as well, since it leaks
filesystem paths.
BUG=332579
Review URL: https://codereview.chromium.org/135633002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@244538 0039d316-1c4b-4281-b951-d872f2087c98
|
bool NewTabButton::ShouldUseNativeFrame() const {
return GetWidget() &&
GetWidget()->GetTopLevelWidget()->ShouldUseNativeFrame();
}
|
bool NewTabButton::ShouldUseNativeFrame() const {
return GetWidget() &&
GetWidget()->GetTopLevelWidget()->ShouldUseNativeFrame();
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/be655fd4fb9ab3291a855a939496111674037a2f
|
be655fd4fb9ab3291a855a939496111674037a2f
|
Always use FrameNavigationDisabler during DocumentLoader detach.
BUG=617495
Review-Url: https://codereview.chromium.org/2079473002
Cr-Commit-Position: refs/heads/master@{#400558}
|
void FrameLoader::setHistoryItemStateForCommit(HistoryCommitType historyCommitType, HistoryNavigationType navigationType)
{
HistoryItem* oldItem = m_currentItem;
if (historyCommitType == BackForwardCommit && m_provisionalItem)
m_currentItem = m_provisionalItem.release();
else
m_currentItem = HistoryItem::create();
m_currentItem->setURL(m_documentLoader->urlForHistory());
m_currentItem->setDocumentState(m_frame->document()->formElementsState());
m_currentItem->setTarget(m_frame->tree().uniqueName());
m_currentItem->setReferrer(SecurityPolicy::generateReferrer(m_documentLoader->request().getReferrerPolicy(), m_currentItem->url(), m_documentLoader->request().httpReferrer()));
m_currentItem->setFormInfoFromRequest(m_documentLoader->request());
if (!oldItem || historyCommitType == BackForwardCommit)
return;
if (navigationType == HistoryNavigationType::DifferentDocument && (historyCommitType != HistoryInertCommit || !equalIgnoringFragmentIdentifier(oldItem->url(), m_currentItem->url())))
return;
m_currentItem->setDocumentSequenceNumber(oldItem->documentSequenceNumber());
m_currentItem->setScrollPoint(oldItem->scrollPoint());
m_currentItem->setVisualViewportScrollPoint(oldItem->visualViewportScrollPoint());
m_currentItem->setPageScaleFactor(oldItem->pageScaleFactor());
m_currentItem->setScrollRestorationType(oldItem->scrollRestorationType());
if (historyCommitType == HistoryInertCommit && (navigationType == HistoryNavigationType::HistoryApi || oldItem->url() == m_currentItem->url())) {
m_currentItem->setStateObject(oldItem->stateObject());
m_currentItem->setItemSequenceNumber(oldItem->itemSequenceNumber());
}
}
|
void FrameLoader::setHistoryItemStateForCommit(HistoryCommitType historyCommitType, HistoryNavigationType navigationType)
{
HistoryItem* oldItem = m_currentItem;
if (historyCommitType == BackForwardCommit && m_provisionalItem)
m_currentItem = m_provisionalItem.release();
else
m_currentItem = HistoryItem::create();
m_currentItem->setURL(m_documentLoader->urlForHistory());
m_currentItem->setDocumentState(m_frame->document()->formElementsState());
m_currentItem->setTarget(m_frame->tree().uniqueName());
m_currentItem->setReferrer(SecurityPolicy::generateReferrer(m_documentLoader->request().getReferrerPolicy(), m_currentItem->url(), m_documentLoader->request().httpReferrer()));
m_currentItem->setFormInfoFromRequest(m_documentLoader->request());
if (!oldItem || historyCommitType == BackForwardCommit)
return;
if (navigationType == HistoryNavigationType::DifferentDocument && (historyCommitType != HistoryInertCommit || !equalIgnoringFragmentIdentifier(oldItem->url(), m_currentItem->url())))
return;
m_currentItem->setDocumentSequenceNumber(oldItem->documentSequenceNumber());
m_currentItem->setScrollPoint(oldItem->scrollPoint());
m_currentItem->setVisualViewportScrollPoint(oldItem->visualViewportScrollPoint());
m_currentItem->setPageScaleFactor(oldItem->pageScaleFactor());
m_currentItem->setScrollRestorationType(oldItem->scrollRestorationType());
if (historyCommitType == HistoryInertCommit && (navigationType == HistoryNavigationType::HistoryApi || oldItem->url() == m_currentItem->url())) {
m_currentItem->setStateObject(oldItem->stateObject());
m_currentItem->setItemSequenceNumber(oldItem->itemSequenceNumber());
}
}
|
C
|
Chrome
| 0 |
CVE-2019-17541
|
https://www.cvedetails.com/cve/CVE-2019-17541/
| null |
https://github.com/ImageMagick/ImageMagick/commit/39f226a9c137f547e12afde972eeba7551124493
|
39f226a9c137f547e12afde972eeba7551124493
|
https://github.com/ImageMagick/ImageMagick/issues/1641
|
static void TerminateDestination(j_compress_ptr cinfo)
{
DestinationManager
*destination;
destination=(DestinationManager *) cinfo->dest;
if ((MagickMinBufferExtent-(int) destination->manager.free_in_buffer) > 0)
{
ssize_t
count;
count=WriteBlob(destination->image,MagickMinBufferExtent-
destination->manager.free_in_buffer,destination->buffer);
if (count != (ssize_t)
(MagickMinBufferExtent-destination->manager.free_in_buffer))
ERREXIT(cinfo,JERR_FILE_WRITE);
}
}
|
static void TerminateDestination(j_compress_ptr cinfo)
{
DestinationManager
*destination;
destination=(DestinationManager *) cinfo->dest;
if ((MagickMinBufferExtent-(int) destination->manager.free_in_buffer) > 0)
{
ssize_t
count;
count=WriteBlob(destination->image,MagickMinBufferExtent-
destination->manager.free_in_buffer,destination->buffer);
if (count != (ssize_t)
(MagickMinBufferExtent-destination->manager.free_in_buffer))
ERREXIT(cinfo,JERR_FILE_WRITE);
}
}
|
C
|
ImageMagick
| 0 |
CVE-2017-9739
|
https://www.cvedetails.com/cve/CVE-2017-9739/
|
CWE-125
|
http://git.ghostscript.com/?p=ghostpdl.git;a=commit;h=c501a58f8d5650c8ba21d447c0d6f07eafcb0f15
|
c501a58f8d5650c8ba21d447c0d6f07eafcb0f15
| null |
static void Ins_SCVTCI( INS_ARG )
{
CUR.GS.control_value_cutin = (TT_F26Dot6)args[0];
}
|
static void Ins_SCVTCI( INS_ARG )
{
CUR.GS.control_value_cutin = (TT_F26Dot6)args[0];
}
|
C
|
ghostscript
| 0 |
CVE-2011-3101
|
https://www.cvedetails.com/cve/CVE-2011-3101/
| null |
https://github.com/chromium/chromium/commit/8f0b86c2fc77fca1508d81314f864011abe25f04
|
8f0b86c2fc77fca1508d81314f864011abe25f04
|
Always write data to new buffer in SimulateAttrib0
This is to work around linux nvidia driver bug.
TEST=asan
BUG=118970
Review URL: http://codereview.chromium.org/10019003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@131538 0039d316-1c4b-4281-b951-d872f2087c98
|
void GLES2DecoderImpl::DoTexParameterf(
GLenum target, GLenum pname, GLfloat param) {
TextureManager::TextureInfo* info = GetTextureInfoForTarget(target);
if (!info) {
SetGLError(GL_INVALID_VALUE, "glTexParameterf: unknown texture");
return;
}
if (!texture_manager()->SetParameter(
info, pname, static_cast<GLint>(param))) {
SetGLError(GL_INVALID_ENUM, "glTexParameterf: param GL_INVALID_ENUM");
return;
}
glTexParameterf(target, pname, param);
}
|
void GLES2DecoderImpl::DoTexParameterf(
GLenum target, GLenum pname, GLfloat param) {
TextureManager::TextureInfo* info = GetTextureInfoForTarget(target);
if (!info) {
SetGLError(GL_INVALID_VALUE, "glTexParameterf: unknown texture");
return;
}
if (!texture_manager()->SetParameter(
info, pname, static_cast<GLint>(param))) {
SetGLError(GL_INVALID_ENUM, "glTexParameterf: param GL_INVALID_ENUM");
return;
}
glTexParameterf(target, pname, param);
}
|
C
|
Chrome
| 0 |
CVE-2016-6720
|
https://www.cvedetails.com/cve/CVE-2016-6720/
|
CWE-200
|
https://android.googlesource.com/platform/frameworks/av/+/7c88b498fda1c2b608a9dd73960a2fd4d7b7e3f7
|
7c88b498fda1c2b608a9dd73960a2fd4d7b7e3f7
|
IOMX: allow configuration after going to loaded state
This was disallowed recently but we still use it as MediaCodcec.stop
only goes to loaded state, and does not free component.
Bug: 31450460
Change-Id: I72e092e4e55c9f23b1baee3e950d76e84a5ef28d
(cherry picked from commit e03b22839d78c841ce0a1a0a1ee1960932188b0b)
|
status_t OMXNodeInstance::sendCommand(
OMX_COMMANDTYPE cmd, OMX_S32 param) {
if (cmd == OMX_CommandStateSet) {
// There are no configurations past first StateSet command.
mSailed = true;
}
const sp<GraphicBufferSource> bufferSource(getGraphicBufferSource());
if (bufferSource != NULL && cmd == OMX_CommandStateSet) {
if (param == OMX_StateIdle) {
bufferSource->omxIdle();
} else if (param == OMX_StateLoaded) {
bufferSource->omxLoaded();
setGraphicBufferSource(NULL);
}
}
Mutex::Autolock autoLock(mLock);
{
Mutex::Autolock _l(mDebugLock);
bumpDebugLevel_l(2 /* numInputBuffers */, 2 /* numOutputBuffers */);
}
const char *paramString =
cmd == OMX_CommandStateSet ? asString((OMX_STATETYPE)param) : portString(param);
CLOG_STATE(sendCommand, "%s(%d), %s(%d)", asString(cmd), cmd, paramString, param);
OMX_ERRORTYPE err = OMX_SendCommand(mHandle, cmd, param, NULL);
CLOG_IF_ERROR(sendCommand, err, "%s(%d), %s(%d)", asString(cmd), cmd, paramString, param);
return StatusFromOMXError(err);
}
|
status_t OMXNodeInstance::sendCommand(
OMX_COMMANDTYPE cmd, OMX_S32 param) {
if (cmd == OMX_CommandStateSet) {
mSailed = true;
}
const sp<GraphicBufferSource> bufferSource(getGraphicBufferSource());
if (bufferSource != NULL && cmd == OMX_CommandStateSet) {
if (param == OMX_StateIdle) {
bufferSource->omxIdle();
} else if (param == OMX_StateLoaded) {
bufferSource->omxLoaded();
setGraphicBufferSource(NULL);
}
}
Mutex::Autolock autoLock(mLock);
{
Mutex::Autolock _l(mDebugLock);
bumpDebugLevel_l(2 /* numInputBuffers */, 2 /* numOutputBuffers */);
}
const char *paramString =
cmd == OMX_CommandStateSet ? asString((OMX_STATETYPE)param) : portString(param);
CLOG_STATE(sendCommand, "%s(%d), %s(%d)", asString(cmd), cmd, paramString, param);
OMX_ERRORTYPE err = OMX_SendCommand(mHandle, cmd, param, NULL);
CLOG_IF_ERROR(sendCommand, err, "%s(%d), %s(%d)", asString(cmd), cmd, paramString, param);
return StatusFromOMXError(err);
}
|
C
|
Android
| 1 |
CVE-2010-2520
|
https://www.cvedetails.com/cve/CVE-2010-2520/
|
CWE-119
|
https://git.savannah.gnu.org/cgit/freetype/freetype2.git/commit/?id=888cd1843e935fe675cf2ac303116d4ed5b9d54b
|
888cd1843e935fe675cf2ac303116d4ed5b9d54b
| null |
Direct_Move( EXEC_OP_ TT_GlyphZone zone,
FT_UShort point,
FT_F26Dot6 distance )
{
FT_F26Dot6 v;
#ifdef TT_CONFIG_OPTION_UNPATENTED_HINTING
FT_ASSERT( !CUR.face->unpatented_hinting );
#endif
v = CUR.GS.freeVector.x;
if ( v != 0 )
{
zone->cur[point].x += TT_MULDIV( distance,
v * 0x10000L,
CUR.F_dot_P );
zone->tags[point] |= FT_CURVE_TAG_TOUCH_X;
}
v = CUR.GS.freeVector.y;
if ( v != 0 )
{
zone->cur[point].y += TT_MULDIV( distance,
v * 0x10000L,
CUR.F_dot_P );
zone->tags[point] |= FT_CURVE_TAG_TOUCH_Y;
}
}
|
Direct_Move( EXEC_OP_ TT_GlyphZone zone,
FT_UShort point,
FT_F26Dot6 distance )
{
FT_F26Dot6 v;
#ifdef TT_CONFIG_OPTION_UNPATENTED_HINTING
FT_ASSERT( !CUR.face->unpatented_hinting );
#endif
v = CUR.GS.freeVector.x;
if ( v != 0 )
{
zone->cur[point].x += TT_MULDIV( distance,
v * 0x10000L,
CUR.F_dot_P );
zone->tags[point] |= FT_CURVE_TAG_TOUCH_X;
}
v = CUR.GS.freeVector.y;
if ( v != 0 )
{
zone->cur[point].y += TT_MULDIV( distance,
v * 0x10000L,
CUR.F_dot_P );
zone->tags[point] |= FT_CURVE_TAG_TOUCH_Y;
}
}
|
C
|
savannah
| 0 |
CVE-2017-14604
|
https://www.cvedetails.com/cve/CVE-2017-14604/
|
CWE-20
|
https://github.com/GNOME/nautilus/commit/1630f53481f445ada0a455e9979236d31a8d3bb0
|
1630f53481f445ada0a455e9979236d31a8d3bb0
|
mime-actions: use file metadata for trusting desktop files
Currently we only trust desktop files that have the executable bit
set, and don't replace the displayed icon or the displayed name until
it's trusted, which prevents for running random programs by a malicious
desktop file.
However, the executable permission is preserved if the desktop file
comes from a compressed file.
To prevent this, add a metadata::trusted metadata to the file once the
user acknowledges the file as trusted. This adds metadata to the file,
which cannot be added unless it has access to the computer.
Also remove the SHEBANG "trusted" content we were putting inside the
desktop file, since that doesn't add more security since it can come
with the file itself.
https://bugzilla.gnome.org/show_bug.cgi?id=777991
|
async_job_end (NautilusDirectory *directory,
const char *job)
{
#ifdef DEBUG_ASYNC_JOBS
char *key;
gpointer table_key, value;
#endif
#ifdef DEBUG_START_STOP
g_message ("stopping %s in %p", job, directory->details->location);
#endif
g_assert (async_job_count > 0);
#ifdef DEBUG_ASYNC_JOBS
{
char *uri;
uri = nautilus_directory_get_uri (directory);
g_assert (async_jobs != NULL);
key = g_strconcat (uri, ": ", job, NULL);
if (!g_hash_table_lookup_extended (async_jobs, key, &table_key, &value))
{
g_warning ("ending job we didn't start: %s in %s",
job, uri);
}
else
{
g_hash_table_remove (async_jobs, key);
g_free (table_key);
}
g_free (uri);
g_free (key);
}
#endif
async_job_count -= 1;
}
|
async_job_end (NautilusDirectory *directory,
const char *job)
{
#ifdef DEBUG_ASYNC_JOBS
char *key;
gpointer table_key, value;
#endif
#ifdef DEBUG_START_STOP
g_message ("stopping %s in %p", job, directory->details->location);
#endif
g_assert (async_job_count > 0);
#ifdef DEBUG_ASYNC_JOBS
{
char *uri;
uri = nautilus_directory_get_uri (directory);
g_assert (async_jobs != NULL);
key = g_strconcat (uri, ": ", job, NULL);
if (!g_hash_table_lookup_extended (async_jobs, key, &table_key, &value))
{
g_warning ("ending job we didn't start: %s in %s",
job, uri);
}
else
{
g_hash_table_remove (async_jobs, key);
g_free (table_key);
}
g_free (uri);
g_free (key);
}
#endif
async_job_count -= 1;
}
|
C
|
nautilus
| 0 |
CVE-2014-1446
|
https://www.cvedetails.com/cve/CVE-2014-1446/
|
CWE-399
|
https://github.com/torvalds/linux/commit/8e3fbf870481eb53b2d3a322d1fc395ad8b367ed
|
8e3fbf870481eb53b2d3a322d1fc395ad8b367ed
|
hamradio/yam: fix info leak in ioctl
The yam_ioctl() code fails to initialise the cmd field
of the struct yamdrv_ioctl_cfg. Add an explicit memset(0)
before filling the structure to avoid the 4-byte info leak.
Signed-off-by: Salva Peiró <speiro@ai2.upv.es>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
static void yam_dotimer(unsigned long dummy)
{
int i;
for (i = 0; i < NR_PORTS; i++) {
struct net_device *dev = yam_devs[i];
if (dev && netif_running(dev))
yam_arbitrate(dev);
}
yam_timer.expires = jiffies + HZ / 100;
add_timer(&yam_timer);
}
|
static void yam_dotimer(unsigned long dummy)
{
int i;
for (i = 0; i < NR_PORTS; i++) {
struct net_device *dev = yam_devs[i];
if (dev && netif_running(dev))
yam_arbitrate(dev);
}
yam_timer.expires = jiffies + HZ / 100;
add_timer(&yam_timer);
}
|
C
|
linux
| 0 |
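The fix described in the commit message above (zero the whole ioctl reply struct before filling it, so the forgotten `cmd` field and compiler padding never carry stale stack bytes out through `copy_to_user()`) is a common pattern for this kind of infoleak. A minimal, self-contained sketch of the idea; the struct and field names below are stand-ins, not the kernel's real `yamdrv_ioctl_cfg` layout:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the reply struct: one field the original code never set,
 * plus a wider field that forces padding on most ABIs. */
struct ioctl_cfg {
    int cmd;        /* the field the patch says was left uninitialised */
    long long cfg;  /* forces internal padding after cmd */
};

static void fill_reply(struct ioctl_cfg *out, long long value)
{
    memset(out, 0, sizeof(*out));  /* the fix: clear every byte first */
    out->cfg = value;              /* then set only the meaningful fields */
}

int main(void)
{
    struct ioctl_cfg reply;
    fill_reply(&reply, 42);
    /* Every byte is now deterministic, so copying the whole struct across
     * a trust boundary (copy_to_user() in the kernel patch) leaks nothing. */
    for (size_t i = 0; i < sizeof reply; i++)
        printf("%02x ", ((unsigned char *)&reply)[i]);
    printf("\n");
    return 0;
}
```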
CVE-2017-14170
|
https://www.cvedetails.com/cve/CVE-2017-14170/
|
CWE-834
|
https://github.com/FFmpeg/FFmpeg/commit/900f39692ca0337a98a7cf047e4e2611071810c2
|
900f39692ca0337a98a7cf047e4e2611071810c2
|
avformat/mxfdec: Fix DoS issues in mxf_read_index_entry_array()
Fixes: 20170829A.mxf
Co-Author: 张洪亮(望初)" <wangchu.zhl@alibaba-inc.com>
Found-by: Xiaohei and Wangchu from Alibaba Security Team
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
|
static int mxf_compute_ptses_fake_index(MXFContext *mxf, MXFIndexTable *index_table)
{
int i, j, x;
int8_t max_temporal_offset = -128;
uint8_t *flags;
/* first compute how many entries we have */
for (i = 0; i < index_table->nb_segments; i++) {
MXFIndexTableSegment *s = index_table->segments[i];
if (!s->nb_index_entries) {
index_table->nb_ptses = 0;
return 0; /* no TemporalOffsets */
}
index_table->nb_ptses += s->index_duration;
}
/* paranoid check */
if (index_table->nb_ptses <= 0)
return 0;
if (!(index_table->ptses = av_calloc(index_table->nb_ptses, sizeof(int64_t))) ||
!(index_table->fake_index = av_calloc(index_table->nb_ptses, sizeof(AVIndexEntry))) ||
!(index_table->offsets = av_calloc(index_table->nb_ptses, sizeof(int8_t))) ||
!(flags = av_calloc(index_table->nb_ptses, sizeof(uint8_t)))) {
av_freep(&index_table->ptses);
av_freep(&index_table->fake_index);
av_freep(&index_table->offsets);
return AVERROR(ENOMEM);
}
/* we may have a few bad TemporalOffsets
* make sure the corresponding PTSes don't have the bogus value 0 */
for (x = 0; x < index_table->nb_ptses; x++)
index_table->ptses[x] = AV_NOPTS_VALUE;
/**
* We have this:
*
* x TemporalOffset
* 0: 0
* 1: 1
* 2: 1
* 3: -2
* 4: 1
* 5: 1
* 6: -2
*
* We want to transform it into this:
*
* x DTS PTS
* 0: -1 0
* 1: 0 3
* 2: 1 1
* 3: 2 2
* 4: 3 6
* 5: 4 4
* 6: 5 5
*
* We do this by bucket sorting x by x+TemporalOffset[x] into mxf->ptses,
* then settings mxf->first_dts = -max(TemporalOffset[x]).
* The latter makes DTS <= PTS.
*/
for (i = x = 0; i < index_table->nb_segments; i++) {
MXFIndexTableSegment *s = index_table->segments[i];
int index_delta = 1;
int n = s->nb_index_entries;
if (s->nb_index_entries == 2 * s->index_duration + 1) {
index_delta = 2; /* Avid index */
/* ignore the last entry - it's the size of the essence container */
n--;
}
for (j = 0; j < n; j += index_delta, x++) {
int offset = s->temporal_offset_entries[j] / index_delta;
int index = x + offset;
if (x >= index_table->nb_ptses) {
av_log(mxf->fc, AV_LOG_ERROR,
"x >= nb_ptses - IndexEntryCount %i < IndexDuration %"PRId64"?\n",
s->nb_index_entries, s->index_duration);
break;
}
flags[x] = !(s->flag_entries[j] & 0x30) ? AVINDEX_KEYFRAME : 0;
if (index < 0 || index >= index_table->nb_ptses) {
av_log(mxf->fc, AV_LOG_ERROR,
"index entry %i + TemporalOffset %i = %i, which is out of bounds\n",
x, offset, index);
continue;
}
index_table->offsets[x] = offset;
index_table->ptses[index] = x;
max_temporal_offset = FFMAX(max_temporal_offset, offset);
}
}
/* calculate the fake index table in display order */
for (x = 0; x < index_table->nb_ptses; x++) {
index_table->fake_index[x].timestamp = x;
if (index_table->ptses[x] != AV_NOPTS_VALUE)
index_table->fake_index[index_table->ptses[x]].flags = flags[x];
}
av_freep(&flags);
index_table->first_dts = -max_temporal_offset;
return 0;
}
|
static int mxf_compute_ptses_fake_index(MXFContext *mxf, MXFIndexTable *index_table)
{
int i, j, x;
int8_t max_temporal_offset = -128;
uint8_t *flags;
/* first compute how many entries we have */
for (i = 0; i < index_table->nb_segments; i++) {
MXFIndexTableSegment *s = index_table->segments[i];
if (!s->nb_index_entries) {
index_table->nb_ptses = 0;
return 0; /* no TemporalOffsets */
}
index_table->nb_ptses += s->index_duration;
}
/* paranoid check */
if (index_table->nb_ptses <= 0)
return 0;
if (!(index_table->ptses = av_calloc(index_table->nb_ptses, sizeof(int64_t))) ||
!(index_table->fake_index = av_calloc(index_table->nb_ptses, sizeof(AVIndexEntry))) ||
!(index_table->offsets = av_calloc(index_table->nb_ptses, sizeof(int8_t))) ||
!(flags = av_calloc(index_table->nb_ptses, sizeof(uint8_t)))) {
av_freep(&index_table->ptses);
av_freep(&index_table->fake_index);
av_freep(&index_table->offsets);
return AVERROR(ENOMEM);
}
/* we may have a few bad TemporalOffsets
* make sure the corresponding PTSes don't have the bogus value 0 */
for (x = 0; x < index_table->nb_ptses; x++)
index_table->ptses[x] = AV_NOPTS_VALUE;
/**
* We have this:
*
* x TemporalOffset
* 0: 0
* 1: 1
* 2: 1
* 3: -2
* 4: 1
* 5: 1
* 6: -2
*
* We want to transform it into this:
*
* x DTS PTS
* 0: -1 0
* 1: 0 3
* 2: 1 1
* 3: 2 2
* 4: 3 6
* 5: 4 4
* 6: 5 5
*
* We do this by bucket sorting x by x+TemporalOffset[x] into mxf->ptses,
* then settings mxf->first_dts = -max(TemporalOffset[x]).
* The latter makes DTS <= PTS.
*/
for (i = x = 0; i < index_table->nb_segments; i++) {
MXFIndexTableSegment *s = index_table->segments[i];
int index_delta = 1;
int n = s->nb_index_entries;
if (s->nb_index_entries == 2 * s->index_duration + 1) {
index_delta = 2; /* Avid index */
/* ignore the last entry - it's the size of the essence container */
n--;
}
for (j = 0; j < n; j += index_delta, x++) {
int offset = s->temporal_offset_entries[j] / index_delta;
int index = x + offset;
if (x >= index_table->nb_ptses) {
av_log(mxf->fc, AV_LOG_ERROR,
"x >= nb_ptses - IndexEntryCount %i < IndexDuration %"PRId64"?\n",
s->nb_index_entries, s->index_duration);
break;
}
flags[x] = !(s->flag_entries[j] & 0x30) ? AVINDEX_KEYFRAME : 0;
if (index < 0 || index >= index_table->nb_ptses) {
av_log(mxf->fc, AV_LOG_ERROR,
"index entry %i + TemporalOffset %i = %i, which is out of bounds\n",
x, offset, index);
continue;
}
index_table->offsets[x] = offset;
index_table->ptses[index] = x;
max_temporal_offset = FFMAX(max_temporal_offset, offset);
}
}
/* calculate the fake index table in display order */
for (x = 0; x < index_table->nb_ptses; x++) {
index_table->fake_index[x].timestamp = x;
if (index_table->ptses[x] != AV_NOPTS_VALUE)
index_table->fake_index[index_table->ptses[x]].flags = flags[x];
}
av_freep(&flags);
index_table->first_dts = -max_temporal_offset;
return 0;
}
|
C
|
FFmpeg
| 0 |
CVE-2015-6780
|
https://www.cvedetails.com/cve/CVE-2015-6780/
| null |
https://github.com/chromium/chromium/commit/f2cba0d13b3a6d76dedede66731e5ca253d3b2af
|
f2cba0d13b3a6d76dedede66731e5ca253d3b2af
|
Fix UAF in Origin Info Bubble and permission settings UI.
In addition to fixing the UAF, this will also fix the problem of the bubble
showing over the previous tab (if the bubble is open when the tab it was opened
for closes).
BUG=490492
TBR=tedchoc
Review URL: https://codereview.chromium.org/1317443002
Cr-Commit-Position: refs/heads/master@{#346023}
|
static jlong Init(JNIEnv* env,
jclass clazz,
jobject obj,
jobject java_web_contents) {
content::WebContents* web_contents =
content::WebContents::FromJavaWebContents(java_web_contents);
return reinterpret_cast<intptr_t>(
new ConnectionInfoPopupAndroid(env, obj, web_contents));
}
|
static jlong Init(JNIEnv* env,
jclass clazz,
jobject obj,
jobject java_web_contents) {
content::WebContents* web_contents =
content::WebContents::FromJavaWebContents(java_web_contents);
return reinterpret_cast<intptr_t>(
new ConnectionInfoPopupAndroid(env, obj, web_contents));
}
|
C
|
Chrome
| 0 |
CVE-2017-12154
|
https://www.cvedetails.com/cve/CVE-2017-12154/
| null |
https://github.com/torvalds/linux/commit/51aa68e7d57e3217192d88ce90fd5b8ef29ec94f
|
51aa68e7d57e3217192d88ce90fd5b8ef29ec94f
|
kvm: nVMX: Don't allow L2 to access the hardware CR8
If L1 does not specify the "use TPR shadow" VM-execution control in
vmcs12, then L0 must specify the "CR8-load exiting" and "CR8-store
exiting" VM-execution controls in vmcs02. Failure to do so will give
the L2 VM unrestricted read/write access to the hardware CR8.
This fixes CVE-2017-12154.
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
{
struct vmcs12 *vmcs12;
struct vcpu_vmx *vmx = to_vmx(vcpu);
u32 interrupt_shadow = vmx_get_interrupt_shadow(vcpu);
u32 exit_qual;
int ret;
if (!nested_vmx_check_permission(vcpu))
return 1;
if (!nested_vmx_check_vmcs12(vcpu))
goto out;
vmcs12 = get_vmcs12(vcpu);
if (enable_shadow_vmcs)
copy_shadow_to_vmcs12(vmx);
/*
* The nested entry process starts with enforcing various prerequisites
* on vmcs12 as required by the Intel SDM, and act appropriately when
* they fail: As the SDM explains, some conditions should cause the
* instruction to fail, while others will cause the instruction to seem
* to succeed, but return an EXIT_REASON_INVALID_STATE.
* To speed up the normal (success) code path, we should avoid checking
* for misconfigurations which will anyway be caught by the processor
* when using the merged vmcs02.
*/
if (interrupt_shadow & KVM_X86_SHADOW_INT_MOV_SS) {
nested_vmx_failValid(vcpu,
VMXERR_ENTRY_EVENTS_BLOCKED_BY_MOV_SS);
goto out;
}
if (vmcs12->launch_state == launch) {
nested_vmx_failValid(vcpu,
launch ? VMXERR_VMLAUNCH_NONCLEAR_VMCS
: VMXERR_VMRESUME_NONLAUNCHED_VMCS);
goto out;
}
ret = check_vmentry_prereqs(vcpu, vmcs12);
if (ret) {
nested_vmx_failValid(vcpu, ret);
goto out;
}
/*
* After this point, the trap flag no longer triggers a singlestep trap
* on the vm entry instructions; don't call kvm_skip_emulated_instruction.
* This is not 100% correct; for performance reasons, we delegate most
* of the checks on host state to the processor. If those fail,
* the singlestep trap is missed.
*/
skip_emulated_instruction(vcpu);
ret = check_vmentry_postreqs(vcpu, vmcs12, &exit_qual);
if (ret) {
nested_vmx_entry_failure(vcpu, vmcs12,
EXIT_REASON_INVALID_STATE, exit_qual);
return 1;
}
/*
* We're finally done with prerequisite checking, and can start with
* the nested entry.
*/
ret = enter_vmx_non_root_mode(vcpu, true);
if (ret)
return ret;
if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT)
return kvm_vcpu_halt(vcpu);
vmx->nested.nested_run_pending = 1;
return 1;
out:
return kvm_skip_emulated_instruction(vcpu);
}
|
static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
{
struct vmcs12 *vmcs12;
struct vcpu_vmx *vmx = to_vmx(vcpu);
u32 interrupt_shadow = vmx_get_interrupt_shadow(vcpu);
u32 exit_qual;
int ret;
if (!nested_vmx_check_permission(vcpu))
return 1;
if (!nested_vmx_check_vmcs12(vcpu))
goto out;
vmcs12 = get_vmcs12(vcpu);
if (enable_shadow_vmcs)
copy_shadow_to_vmcs12(vmx);
/*
* The nested entry process starts with enforcing various prerequisites
* on vmcs12 as required by the Intel SDM, and act appropriately when
* they fail: As the SDM explains, some conditions should cause the
* instruction to fail, while others will cause the instruction to seem
* to succeed, but return an EXIT_REASON_INVALID_STATE.
* To speed up the normal (success) code path, we should avoid checking
* for misconfigurations which will anyway be caught by the processor
* when using the merged vmcs02.
*/
if (interrupt_shadow & KVM_X86_SHADOW_INT_MOV_SS) {
nested_vmx_failValid(vcpu,
VMXERR_ENTRY_EVENTS_BLOCKED_BY_MOV_SS);
goto out;
}
if (vmcs12->launch_state == launch) {
nested_vmx_failValid(vcpu,
launch ? VMXERR_VMLAUNCH_NONCLEAR_VMCS
: VMXERR_VMRESUME_NONLAUNCHED_VMCS);
goto out;
}
ret = check_vmentry_prereqs(vcpu, vmcs12);
if (ret) {
nested_vmx_failValid(vcpu, ret);
goto out;
}
/*
* After this point, the trap flag no longer triggers a singlestep trap
* on the vm entry instructions; don't call kvm_skip_emulated_instruction.
* This is not 100% correct; for performance reasons, we delegate most
* of the checks on host state to the processor. If those fail,
* the singlestep trap is missed.
*/
skip_emulated_instruction(vcpu);
ret = check_vmentry_postreqs(vcpu, vmcs12, &exit_qual);
if (ret) {
nested_vmx_entry_failure(vcpu, vmcs12,
EXIT_REASON_INVALID_STATE, exit_qual);
return 1;
}
/*
* We're finally done with prerequisite checking, and can start with
* the nested entry.
*/
ret = enter_vmx_non_root_mode(vcpu, true);
if (ret)
return ret;
if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT)
return kvm_vcpu_halt(vcpu);
vmx->nested.nested_run_pending = 1;
return 1;
out:
return kvm_skip_emulated_instruction(vcpu);
}
|
C
|
linux
| 0 |
CVE-2017-5548
|
https://www.cvedetails.com/cve/CVE-2017-5548/
|
CWE-119
|
https://github.com/torvalds/linux/commit/05a974efa4bdf6e2a150e3f27dc6fcf0a9ad5655
|
05a974efa4bdf6e2a150e3f27dc6fcf0a9ad5655
|
ieee802154: atusb: do not use the stack for buffers to make them DMA able
From 4.9 we should really avoid using the stack here as this will not be DMA
able on various platforms. This changes the buffers already being present in
time of 4.9 being released. This should go into stable as well.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: Stefan Schmidt <stefan@osg.samsung.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
static void atusb_tx_done(struct atusb *atusb, uint8_t seq)
{
struct usb_device *usb_dev = atusb->usb_dev;
uint8_t expect = atusb->tx_ack_seq;
dev_dbg(&usb_dev->dev, "atusb_tx_done (0x%02x/0x%02x)\n", seq, expect);
if (seq == expect) {
/* TODO check for ifs handling in firmware */
ieee802154_xmit_complete(atusb->hw, atusb->tx_skb, false);
} else {
/* TODO I experience this case when atusb has a tx complete
* irq before probing, we should fix the firmware it's an
* unlikely case now that seq == expect is then true, but can
* happen and fail with a tx_skb = NULL;
*/
ieee802154_wake_queue(atusb->hw);
if (atusb->tx_skb)
dev_kfree_skb_irq(atusb->tx_skb);
}
}
|
static void atusb_tx_done(struct atusb *atusb, uint8_t seq)
{
struct usb_device *usb_dev = atusb->usb_dev;
uint8_t expect = atusb->tx_ack_seq;
dev_dbg(&usb_dev->dev, "atusb_tx_done (0x%02x/0x%02x)\n", seq, expect);
if (seq == expect) {
/* TODO check for ifs handling in firmware */
ieee802154_xmit_complete(atusb->hw, atusb->tx_skb, false);
} else {
/* TODO I experience this case when atusb has a tx complete
* irq before probing, we should fix the firmware it's an
* unlikely case now that seq == expect is then true, but can
* happen and fail with a tx_skb = NULL;
*/
ieee802154_wake_queue(atusb->hw);
if (atusb->tx_skb)
dev_kfree_skb_irq(atusb->tx_skb);
}
}
|
C
|
linux
| 0 |
CVE-2013-0311
|
https://www.cvedetails.com/cve/CVE-2013-0311/
| null |
https://github.com/torvalds/linux/commit/bd97120fc3d1a11f3124c7c9ba1d91f51829eb85
|
bd97120fc3d1a11f3124c7c9ba1d91f51829eb85
|
vhost: fix length for cross region descriptor
If a single descriptor crosses a region, the
second chunk length should be decremented
by size translated so far, instead it includes
the full descriptor length.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
static int vhost_attach_cgroups(struct vhost_dev *dev)
{
struct vhost_attach_cgroups_struct attach;
attach.owner = current;
vhost_work_init(&attach.work, vhost_attach_cgroups_work);
vhost_work_queue(dev, &attach.work);
vhost_work_flush(dev, &attach.work);
return attach.ret;
}
|
static int vhost_attach_cgroups(struct vhost_dev *dev)
{
struct vhost_attach_cgroups_struct attach;
attach.owner = current;
vhost_work_init(&attach.work, vhost_attach_cgroups_work);
vhost_work_queue(dev, &attach.work);
vhost_work_flush(dev, &attach.work);
return attach.ret;
}
|
C
|
linux
| 0 |
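The bookkeeping the commit message above describes (a descriptor that crosses a region must be split into chunks, and each chunk is capped by the bytes *remaining* to translate, not by the full descriptor length) can be sketched in isolation. The region layout, sizes, and names below are invented for illustration and are not vhost's actual data structures:

```c
#include <stdio.h>
#include <stdint.h>

struct region { uint64_t start, size; };   /* hypothetical memory regions */

static void translate(uint64_t addr, uint64_t len,
                      const struct region *regions, int nregions)
{
    uint64_t remaining = len;
    for (int i = 0; i < nregions && remaining > 0; i++) {
        const struct region *r = &regions[i];
        if (addr < r->start || addr >= r->start + r->size)
            continue;
        /* chunk is capped by both the region and the remaining bytes;
         * the bug was effectively using the full length for the second chunk */
        uint64_t chunk = r->start + r->size - addr;
        if (chunk > remaining)
            chunk = remaining;
        printf("chunk: addr=%llu len=%llu\n",
               (unsigned long long)addr, (unsigned long long)chunk);
        addr += chunk;
        remaining -= chunk;
    }
}

int main(void)
{
    struct region regions[] = { { 0, 4096 }, { 4096, 4096 } };
    translate(3000, 2000, regions, 2);   /* crosses the 4096 boundary */
    return 0;
}
```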
CVE-2016-5770
|
https://www.cvedetails.com/cve/CVE-2016-5770/
|
CWE-190
|
https://github.com/php/php-src/commit/7245bff300d3fa8bacbef7897ff080a6f1c23eba?w=1
|
7245bff300d3fa8bacbef7897ff080a6f1c23eba?w=1
|
Fix bug #72262 - do not overflow int
|
PHP_MINIT_FUNCTION(spl_directory)
{
REGISTER_SPL_STD_CLASS_EX(SplFileInfo, spl_filesystem_object_new, spl_SplFileInfo_functions);
memcpy(&spl_filesystem_object_handlers, zend_get_std_object_handlers(), sizeof(zend_object_handlers));
spl_filesystem_object_handlers.clone_obj = spl_filesystem_object_clone;
spl_filesystem_object_handlers.cast_object = spl_filesystem_object_cast;
spl_filesystem_object_handlers.get_debug_info = spl_filesystem_object_get_debug_info;
spl_ce_SplFileInfo->serialize = zend_class_serialize_deny;
spl_ce_SplFileInfo->unserialize = zend_class_unserialize_deny;
REGISTER_SPL_SUB_CLASS_EX(DirectoryIterator, SplFileInfo, spl_filesystem_object_new, spl_DirectoryIterator_functions);
zend_class_implements(spl_ce_DirectoryIterator TSRMLS_CC, 1, zend_ce_iterator);
REGISTER_SPL_IMPLEMENTS(DirectoryIterator, SeekableIterator);
spl_ce_DirectoryIterator->get_iterator = spl_filesystem_dir_get_iterator;
REGISTER_SPL_SUB_CLASS_EX(FilesystemIterator, DirectoryIterator, spl_filesystem_object_new, spl_FilesystemIterator_functions);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_MODE_MASK", SPL_FILE_DIR_CURRENT_MODE_MASK);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_AS_PATHNAME", SPL_FILE_DIR_CURRENT_AS_PATHNAME);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_AS_FILEINFO", SPL_FILE_DIR_CURRENT_AS_FILEINFO);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_AS_SELF", SPL_FILE_DIR_CURRENT_AS_SELF);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "KEY_MODE_MASK", SPL_FILE_DIR_KEY_MODE_MASK);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "KEY_AS_PATHNAME", SPL_FILE_DIR_KEY_AS_PATHNAME);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "FOLLOW_SYMLINKS", SPL_FILE_DIR_FOLLOW_SYMLINKS);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "KEY_AS_FILENAME", SPL_FILE_DIR_KEY_AS_FILENAME);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "NEW_CURRENT_AND_KEY", SPL_FILE_DIR_KEY_AS_FILENAME|SPL_FILE_DIR_CURRENT_AS_FILEINFO);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "OTHER_MODE_MASK", SPL_FILE_DIR_OTHERS_MASK);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "SKIP_DOTS", SPL_FILE_DIR_SKIPDOTS);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "UNIX_PATHS", SPL_FILE_DIR_UNIXPATHS);
spl_ce_FilesystemIterator->get_iterator = spl_filesystem_tree_get_iterator;
REGISTER_SPL_SUB_CLASS_EX(RecursiveDirectoryIterator, FilesystemIterator, spl_filesystem_object_new, spl_RecursiveDirectoryIterator_functions);
REGISTER_SPL_IMPLEMENTS(RecursiveDirectoryIterator, RecursiveIterator);
memcpy(&spl_filesystem_object_check_handlers, &spl_filesystem_object_handlers, sizeof(zend_object_handlers));
spl_filesystem_object_check_handlers.get_method = spl_filesystem_object_get_method_check;
#ifdef HAVE_GLOB
REGISTER_SPL_SUB_CLASS_EX(GlobIterator, FilesystemIterator, spl_filesystem_object_new_check, spl_GlobIterator_functions);
REGISTER_SPL_IMPLEMENTS(GlobIterator, Countable);
#endif
REGISTER_SPL_SUB_CLASS_EX(SplFileObject, SplFileInfo, spl_filesystem_object_new_check, spl_SplFileObject_functions);
REGISTER_SPL_IMPLEMENTS(SplFileObject, RecursiveIterator);
REGISTER_SPL_IMPLEMENTS(SplFileObject, SeekableIterator);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "DROP_NEW_LINE", SPL_FILE_OBJECT_DROP_NEW_LINE);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "READ_AHEAD", SPL_FILE_OBJECT_READ_AHEAD);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "SKIP_EMPTY", SPL_FILE_OBJECT_SKIP_EMPTY);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "READ_CSV", SPL_FILE_OBJECT_READ_CSV);
REGISTER_SPL_SUB_CLASS_EX(SplTempFileObject, SplFileObject, spl_filesystem_object_new_check, spl_SplTempFileObject_functions);
return SUCCESS;
}
|
PHP_MINIT_FUNCTION(spl_directory)
{
REGISTER_SPL_STD_CLASS_EX(SplFileInfo, spl_filesystem_object_new, spl_SplFileInfo_functions);
memcpy(&spl_filesystem_object_handlers, zend_get_std_object_handlers(), sizeof(zend_object_handlers));
spl_filesystem_object_handlers.clone_obj = spl_filesystem_object_clone;
spl_filesystem_object_handlers.cast_object = spl_filesystem_object_cast;
spl_filesystem_object_handlers.get_debug_info = spl_filesystem_object_get_debug_info;
spl_ce_SplFileInfo->serialize = zend_class_serialize_deny;
spl_ce_SplFileInfo->unserialize = zend_class_unserialize_deny;
REGISTER_SPL_SUB_CLASS_EX(DirectoryIterator, SplFileInfo, spl_filesystem_object_new, spl_DirectoryIterator_functions);
zend_class_implements(spl_ce_DirectoryIterator TSRMLS_CC, 1, zend_ce_iterator);
REGISTER_SPL_IMPLEMENTS(DirectoryIterator, SeekableIterator);
spl_ce_DirectoryIterator->get_iterator = spl_filesystem_dir_get_iterator;
REGISTER_SPL_SUB_CLASS_EX(FilesystemIterator, DirectoryIterator, spl_filesystem_object_new, spl_FilesystemIterator_functions);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_MODE_MASK", SPL_FILE_DIR_CURRENT_MODE_MASK);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_AS_PATHNAME", SPL_FILE_DIR_CURRENT_AS_PATHNAME);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_AS_FILEINFO", SPL_FILE_DIR_CURRENT_AS_FILEINFO);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "CURRENT_AS_SELF", SPL_FILE_DIR_CURRENT_AS_SELF);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "KEY_MODE_MASK", SPL_FILE_DIR_KEY_MODE_MASK);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "KEY_AS_PATHNAME", SPL_FILE_DIR_KEY_AS_PATHNAME);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "FOLLOW_SYMLINKS", SPL_FILE_DIR_FOLLOW_SYMLINKS);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "KEY_AS_FILENAME", SPL_FILE_DIR_KEY_AS_FILENAME);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "NEW_CURRENT_AND_KEY", SPL_FILE_DIR_KEY_AS_FILENAME|SPL_FILE_DIR_CURRENT_AS_FILEINFO);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "OTHER_MODE_MASK", SPL_FILE_DIR_OTHERS_MASK);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "SKIP_DOTS", SPL_FILE_DIR_SKIPDOTS);
REGISTER_SPL_CLASS_CONST_LONG(FilesystemIterator, "UNIX_PATHS", SPL_FILE_DIR_UNIXPATHS);
spl_ce_FilesystemIterator->get_iterator = spl_filesystem_tree_get_iterator;
REGISTER_SPL_SUB_CLASS_EX(RecursiveDirectoryIterator, FilesystemIterator, spl_filesystem_object_new, spl_RecursiveDirectoryIterator_functions);
REGISTER_SPL_IMPLEMENTS(RecursiveDirectoryIterator, RecursiveIterator);
memcpy(&spl_filesystem_object_check_handlers, &spl_filesystem_object_handlers, sizeof(zend_object_handlers));
spl_filesystem_object_check_handlers.get_method = spl_filesystem_object_get_method_check;
#ifdef HAVE_GLOB
REGISTER_SPL_SUB_CLASS_EX(GlobIterator, FilesystemIterator, spl_filesystem_object_new_check, spl_GlobIterator_functions);
REGISTER_SPL_IMPLEMENTS(GlobIterator, Countable);
#endif
REGISTER_SPL_SUB_CLASS_EX(SplFileObject, SplFileInfo, spl_filesystem_object_new_check, spl_SplFileObject_functions);
REGISTER_SPL_IMPLEMENTS(SplFileObject, RecursiveIterator);
REGISTER_SPL_IMPLEMENTS(SplFileObject, SeekableIterator);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "DROP_NEW_LINE", SPL_FILE_OBJECT_DROP_NEW_LINE);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "READ_AHEAD", SPL_FILE_OBJECT_READ_AHEAD);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "SKIP_EMPTY", SPL_FILE_OBJECT_SKIP_EMPTY);
REGISTER_SPL_CLASS_CONST_LONG(SplFileObject, "READ_CSV", SPL_FILE_OBJECT_READ_CSV);
REGISTER_SPL_SUB_CLASS_EX(SplTempFileObject, SplFileObject, spl_filesystem_object_new_check, spl_SplTempFileObject_functions);
return SUCCESS;
}
|
C
|
php-src
| 1 |
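The commit message above only says "do not overflow int" (CWE-190); the module-init function shown in this row is not where the overflow itself lives. As a generic, hedged illustration of the guard such fixes rely on (checking before the addition, or carrying lengths in `size_t` end to end), with made-up lengths rather than the actual SPL patch:

```c
#include <stdio.h>
#include <stdint.h>

/* Refuse an addition that would wrap instead of silently overflowing,
 * so later allocations and bounds checks never see a truncated size. */
static int checked_add(size_t a, size_t b, size_t *out)
{
    if (a > SIZE_MAX - b)
        return -1;          /* would wrap: reject */
    *out = a + b;
    return 0;
}

int main(void)
{
    size_t total;
    if (checked_add(SIZE_MAX - 8, 16, &total) != 0)
        printf("length overflow rejected\n");
    if (checked_add(1024, 16, &total) == 0)
        printf("combined length: %zu\n", total);
    return 0;
}
```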
CVE-2017-12183
|
https://www.cvedetails.com/cve/CVE-2017-12183/
|
CWE-20
|
https://cgit.freedesktop.org/xorg/xserver/commit/?id=55caa8b08c84af2b50fbc936cf334a5a93dd7db5
|
55caa8b08c84af2b50fbc936cf334a5a93dd7db5
| null |
SProcXFixesHideCursor(ClientPtr client)
{
REQUEST(xXFixesHideCursorReq);
swaps(&stuff->length);
REQUEST_SIZE_MATCH(xXFixesHideCursorReq);
swapl(&stuff->window);
return (*ProcXFixesVector[stuff->xfixesReqType]) (client);
}
|
SProcXFixesHideCursor(ClientPtr client)
{
REQUEST(xXFixesHideCursorReq);
swaps(&stuff->length);
REQUEST_SIZE_MATCH(xXFixesHideCursorReq);
swapl(&stuff->window);
return (*ProcXFixesVector[stuff->xfixesReqType]) (client);
}
|
C
|
xserver
| 0 |
CVE-2011-2486
|
https://www.cvedetails.com/cve/CVE-2011-2486/
|
CWE-264
|
https://github.com/davidben/nspluginwrapper/commit/7e4ab8e1189846041f955e6c83f72bc1624e7a98
|
7e4ab8e1189846041f955e6c83f72bc1624e7a98
|
Support all the new variables added
|
g_NPN_GetStringIdentifiers(const NPUTF8 **names, int32_t nameCount, NPIdentifier *identifiers)
{
if (!thread_check()) {
npw_printf("WARNING: NPN_GetStringIdentifiers not called from the main thread\n");
return;
}
if (names == NULL)
return;
if (identifiers == NULL)
return;
D(bugiI("NPN_GetStringIdentifiers names=%p\n", names));
cached_NPN_GetStringIdentifiers(names, nameCount, identifiers);
D(bugiD("NPN_GetStringIdentifiers done\n"));
}
|
g_NPN_GetStringIdentifiers(const NPUTF8 **names, int32_t nameCount, NPIdentifier *identifiers)
{
if (!thread_check()) {
npw_printf("WARNING: NPN_GetStringIdentifiers not called from the main thread\n");
return;
}
if (names == NULL)
return;
if (identifiers == NULL)
return;
D(bugiI("NPN_GetStringIdentifiers names=%p\n", names));
cached_NPN_GetStringIdentifiers(names, nameCount, identifiers);
D(bugiD("NPN_GetStringIdentifiers done\n"));
}
|
C
|
nspluginwrapper
| 0 |
CVE-2018-13006
|
https://www.cvedetails.com/cve/CVE-2018-13006/
|
CWE-125
|
https://github.com/gpac/gpac/commit/bceb03fd2be95097a7b409ea59914f332fb6bc86
|
bceb03fd2be95097a7b409ea59914f332fb6bc86
|
fixed 2 possible heap overflows (inc. #1088)
|
GF_Err blnk_dump(GF_Box *a, FILE * trace)
{
GF_TextBlinkBox*p = (GF_TextBlinkBox*)a;
gf_isom_box_dump_start(a, "TextBlinkBox", trace);
fprintf(trace, "start_charoffset=\"%d\" end_charoffset=\"%d\">\n", p->startcharoffset, p->endcharoffset);
gf_isom_box_dump_done("TextBlinkBox", a, trace);
return GF_OK;
}
|
GF_Err blnk_dump(GF_Box *a, FILE * trace)
{
GF_TextBlinkBox*p = (GF_TextBlinkBox*)a;
gf_isom_box_dump_start(a, "TextBlinkBox", trace);
fprintf(trace, "start_charoffset=\"%d\" end_charoffset=\"%d\">\n", p->startcharoffset, p->endcharoffset);
gf_isom_box_dump_done("TextBlinkBox", a, trace);
return GF_OK;
}
|
C
|
gpac
| 0 |
CVE-2016-3764
|
https://www.cvedetails.com/cve/CVE-2016-3764/
|
CWE-20
|
https://android.googlesource.com/platform/frameworks/av/+/daef4327fe0c75b0a90bb8627458feec7a301e1f
|
daef4327fe0c75b0a90bb8627458feec7a301e1f
|
Clear unused pointer field when sending across binder
Bug: 28377502
Change-Id: Iad5ebfb0a9ef89f09755bb332579dbd3534f9c98
|
status_t MetadataRetrieverClient::setDataSource(
const sp<IDataSource>& source)
{
ALOGV("setDataSource(IDataSource)");
Mutex::Autolock lock(mLock);
sp<DataSource> dataSource = DataSource::CreateFromIDataSource(source);
player_type playerType =
MediaPlayerFactory::getPlayerType(NULL /* client */, dataSource);
ALOGV("player type = %d", playerType);
sp<MediaMetadataRetrieverBase> p = createRetriever(playerType);
if (p == NULL) return NO_INIT;
status_t ret = p->setDataSource(dataSource);
if (ret == NO_ERROR) mRetriever = p;
return ret;
}
|
status_t MetadataRetrieverClient::setDataSource(
const sp<IDataSource>& source)
{
ALOGV("setDataSource(IDataSource)");
Mutex::Autolock lock(mLock);
sp<DataSource> dataSource = DataSource::CreateFromIDataSource(source);
player_type playerType =
MediaPlayerFactory::getPlayerType(NULL /* client */, dataSource);
ALOGV("player type = %d", playerType);
sp<MediaMetadataRetrieverBase> p = createRetriever(playerType);
if (p == NULL) return NO_INIT;
status_t ret = p->setDataSource(dataSource);
if (ret == NO_ERROR) mRetriever = p;
return ret;
}
|
C
|
Android
| 0 |
CVE-2014-0143
|
https://www.cvedetails.com/cve/CVE-2014-0143/
|
CWE-190
|
https://git.qemu.org/?p=qemu.git;a=commit;h=6a83f8b5bec6f59e56cc49bd49e4c3f8f805d56f
|
6a83f8b5bec6f59e56cc49bd49e4c3f8f805d56f
| null |
static int qcow2_refresh_limits(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
bs->bl.write_zeroes_alignment = s->cluster_sectors;
return 0;
}
|
static int qcow2_refresh_limits(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
bs->bl.write_zeroes_alignment = s->cluster_sectors;
return 0;
}
|
C
|
qemu
| 0 |
CVE-2014-3690
|
https://www.cvedetails.com/cve/CVE-2014-3690/
|
CWE-399
|
https://github.com/torvalds/linux/commit/d974baa398f34393db76be45f7d4d04fbdbb4a0a
|
d974baa398f34393db76be45f7d4d04fbdbb4a0a
|
x86,kvm,vmx: Preserve CR4 across VM entry
CR4 isn't constant; at least the TSD and PCE bits can vary.
TBH, treating CR0 and CR3 as constant scares me a bit, too, but it looks
like it's correct.
This adds a branch and a read from cr4 to each vm entry. Because it is
extremely likely that consecutive entries into the same vcpu will have
the same host cr4 value, this fixes up the vmcs instead of restoring cr4
after the fact. A subsequent patch will add a kernel-wide cr4 shadow,
reducing the overhead in the common case to just two memory reads and a
branch.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: stable@vger.kernel.org
Cc: Petr Matousek <pmatouse@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
static bool tr_valid(struct kvm_vcpu *vcpu)
{
struct kvm_segment tr;
vmx_get_segment(vcpu, &tr, VCPU_SREG_TR);
if (tr.unusable)
return false;
if (tr.selector & SELECTOR_TI_MASK) /* TI = 1 */
return false;
if (tr.type != 3 && tr.type != 11) /* TODO: Check if guest is in IA32e mode */
return false;
if (!tr.present)
return false;
return true;
}
|
static bool tr_valid(struct kvm_vcpu *vcpu)
{
struct kvm_segment tr;
vmx_get_segment(vcpu, &tr, VCPU_SREG_TR);
if (tr.unusable)
return false;
if (tr.selector & SELECTOR_TI_MASK) /* TI = 1 */
return false;
if (tr.type != 3 && tr.type != 11) /* TODO: Check if guest is in IA32e mode */
return false;
if (!tr.present)
return false;
return true;
}
|
C
|
linux
| 0 |
CVE-2018-6053
|
https://www.cvedetails.com/cve/CVE-2018-6053/
|
CWE-200
|
https://github.com/chromium/chromium/commit/6c6888565ff1fde9ef21ef17c27ad4c8304643d2
|
6c6888565ff1fde9ef21ef17c27ad4c8304643d2
|
TopSites: Clear thumbnails from the cache when their URLs get removed
We already cleared the thumbnails from persistent storage, but they
remained in the in-memory cache, so they remained accessible (until the
next Chrome restart) even after all browsing data was cleared.
Bug: 758169
Change-Id: Id916d22358430a82e6d5043ac04fa463a32f824f
Reviewed-on: https://chromium-review.googlesource.com/758640
Commit-Queue: Marc Treib <treib@chromium.org>
Reviewed-by: Sylvain Defresne <sdefresne@chromium.org>
Cr-Commit-Position: refs/heads/master@{#514861}
|
bool TopSitesImpl::AddPrepopulatedPages(MostVisitedURLList* urls,
size_t num_forced_urls) const {
bool added = false;
for (const auto& prepopulated_page : prepopulated_pages_) {
if (urls->size() - num_forced_urls < kNonForcedTopSitesNumber &&
IndexOf(*urls, prepopulated_page.most_visited.url) == -1) {
urls->push_back(prepopulated_page.most_visited);
added = true;
}
}
return added;
}
|
bool TopSitesImpl::AddPrepopulatedPages(MostVisitedURLList* urls,
size_t num_forced_urls) const {
bool added = false;
for (const auto& prepopulated_page : prepopulated_pages_) {
if (urls->size() - num_forced_urls < kNonForcedTopSitesNumber &&
IndexOf(*urls, prepopulated_page.most_visited.url) == -1) {
urls->push_back(prepopulated_page.most_visited);
added = true;
}
}
return added;
}
|
C
|
Chrome
| 0 |
CVE-2012-2880
|
https://www.cvedetails.com/cve/CVE-2012-2880/
|
CWE-362
|
https://github.com/chromium/chromium/commit/fcd3a7a671ecf2d5f46ea34787d27507a914d2f5
|
fcd3a7a671ecf2d5f46ea34787d27507a914d2f5
|
[Sync] Cleanup all tab sync enabling logic now that its on by default.
BUG=none
TEST=
Review URL: https://chromiumcodereview.appspot.com/10443046
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@139462 0039d316-1c4b-4281-b951-d872f2087c98
|
void SyncBackendHost::Shutdown(bool sync_disabled) {
if (sync_thread_.IsRunning()) {
sync_thread_.message_loop()->PostTask(FROM_HERE,
base::Bind(&SyncBackendHost::Core::DoShutdown, core_.get(),
sync_disabled));
}
base::Time stop_thread_start_time = base::Time::Now();
{
base::ThreadRestrictions::ScopedAllowIO allow_io;
sync_thread_.Stop();
}
base::TimeDelta stop_sync_thread_time = base::Time::Now() -
stop_thread_start_time;
UMA_HISTOGRAM_TIMES("Sync.Shutdown.StopSyncThreadTime",
stop_sync_thread_time);
registrar_.reset();
frontend_ = NULL;
core_ = NULL; // Releases reference to core_.
}
|
void SyncBackendHost::Shutdown(bool sync_disabled) {
if (sync_thread_.IsRunning()) {
sync_thread_.message_loop()->PostTask(FROM_HERE,
base::Bind(&SyncBackendHost::Core::DoShutdown, core_.get(),
sync_disabled));
}
base::Time stop_thread_start_time = base::Time::Now();
{
base::ThreadRestrictions::ScopedAllowIO allow_io;
sync_thread_.Stop();
}
base::TimeDelta stop_sync_thread_time = base::Time::Now() -
stop_thread_start_time;
UMA_HISTOGRAM_TIMES("Sync.Shutdown.StopSyncThreadTime",
stop_sync_thread_time);
registrar_.reset();
frontend_ = NULL;
core_ = NULL; // Releases reference to core_.
}
|
C
|
Chrome
| 0 |
CVE-2018-7191
|
https://www.cvedetails.com/cve/CVE-2018-7191/
|
CWE-476
|
https://github.com/torvalds/linux/commit/0ad646c81b2182f7fa67ec0c8c825e0ee165696d
|
0ad646c81b2182f7fa67ec0c8c825e0ee165696d
|
tun: call dev_get_valid_name() before register_netdevice()
register_netdevice() could fail early when we have an invalid
dev name, in which case ->ndo_uninit() is not called. For tun
device, this is a problem because a timer etc. are already
initialized and it expects ->ndo_uninit() to clean them up.
We could move these initializations into a ->ndo_init() so
that register_netdevice() knows better, however this is still
complicated due to the logic in tun_detach().
Therefore, I choose to just call dev_get_valid_name() before
register_netdevice(), which is quicker and much easier to audit.
And for this specific case, it is already enough.
Fixes: 96442e42429e ("tuntap: choose the txq based on rxq")
Reported-by: Dmitry Alexeev <avekceeb@gmail.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
static inline bool tun_legacy_is_little_endian(struct tun_struct *tun)
{
return virtio_legacy_is_little_endian();
}
|
static inline bool tun_legacy_is_little_endian(struct tun_struct *tun)
{
return virtio_legacy_is_little_endian();
}
|
C
|
linux
| 0 |
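The ordering change the commit message above describes (validate the device name before any per-device state is initialised, so an early failure has nothing to tear down) is a general pattern. The sketch below illustrates that principle only; the names and the trivial validity rule are invented, and this is not the tun driver code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int name_is_valid(const char *name)
{
    return name && *name && !strchr(name, '/');   /* made-up rule */
}

static int create_device(const char *name)
{
    if (!name_is_valid(name))          /* check first: no cleanup needed */
        return -1;

    char *timer_state = malloc(64);    /* stand-in for timers etc. */
    if (!timer_state)
        return -1;
    memset(timer_state, 0, 64);

    printf("registered %s\n", name);
    free(timer_state);                 /* normal teardown path */
    return 0;
}

int main(void)
{
    create_device("tun0");
    if (create_device("bad/name") != 0)
        printf("rejected before any resources were set up\n");
    return 0;
}
```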
CVE-2017-5122
|
https://www.cvedetails.com/cve/CVE-2017-5122/
|
CWE-119
|
https://github.com/chromium/chromium/commit/f8675cbb337440a11bf9afb10ea11bae42bb92cb
|
f8675cbb337440a11bf9afb10ea11bae42bb92cb
|
cros: Enable some tests in //ash/wm in ash_unittests --mash
For the ones that fail, disable them via filter file instead of in the
code, per our disablement policy.
Bug: 698085, 695556, 698878, 698888, 698093, 698894
Test: ash_unittests --mash
Change-Id: Ic145ab6a95508968d6884d14fac2a3ca08888d26
Reviewed-on: https://chromium-review.googlesource.com/752423
Commit-Queue: James Cook <jamescook@chromium.org>
Reviewed-by: Steven Bennetts <stevenjb@chromium.org>
Cr-Commit-Position: refs/heads/master@{#513836}
|
void DestroyTabletModeWindowManager() {
Shell::Get()->tablet_mode_controller()->EnableTabletModeWindowManager(
false);
EXPECT_FALSE(tablet_mode_window_manager());
}
|
void DestroyTabletModeWindowManager() {
Shell::Get()->tablet_mode_controller()->EnableTabletModeWindowManager(
false);
EXPECT_FALSE(tablet_mode_window_manager());
}
|
C
|
Chrome
| 0 |
CVE-2011-2861
|
https://www.cvedetails.com/cve/CVE-2011-2861/
|
CWE-20
|
https://github.com/chromium/chromium/commit/8262245d384be025f13e2a5b3a03b7e5c98374ce
|
8262245d384be025f13e2a5b3a03b7e5c98374ce
|
DevTools: move DevToolsAgent/Client into content.
BUG=84078
TEST=
Review URL: http://codereview.chromium.org/7461019
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@93596 0039d316-1c4b-4281-b951-d872f2087c98
|
void RenderView::StartPluginIme() {
IPC::Message* msg = new ViewHostMsg_StartPluginIme(routing_id());
msg->set_unblock(true);
Send(msg);
}
|
void RenderView::StartPluginIme() {
IPC::Message* msg = new ViewHostMsg_StartPluginIme(routing_id());
msg->set_unblock(true);
Send(msg);
}
|
C
|
Chrome
| 0 |
CVE-2012-2816
|
https://www.cvedetails.com/cve/CVE-2012-2816/
| null |
https://github.com/chromium/chromium/commit/cd0bd79d6ebdb72183e6f0833673464cc10b3600
|
cd0bd79d6ebdb72183e6f0833673464cc10b3600
|
Convert plugin and GPU process to brokered handle duplication.
BUG=119250
Review URL: https://chromiumcodereview.appspot.com/9958034
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@132303 0039d316-1c4b-4281-b951-d872f2087c98
|
void GpuChannelHost::MessageFilter::RemoveRoute(int route_id) {
DCHECK(parent_->factory_->IsIOThread());
ListenerMap::iterator it = listeners_.find(route_id);
if (it != listeners_.end())
listeners_.erase(it);
}
|
void GpuChannelHost::MessageFilter::RemoveRoute(int route_id) {
DCHECK(parent_->factory_->IsIOThread());
ListenerMap::iterator it = listeners_.find(route_id);
if (it != listeners_.end())
listeners_.erase(it);
}
|
C
|
Chrome
| 0 |
CVE-2016-4486
|
https://www.cvedetails.com/cve/CVE-2016-4486/
|
CWE-200
|
https://github.com/torvalds/linux/commit/5f8e44741f9f216e33736ea4ec65ca9ac03036e6
|
5f8e44741f9f216e33736ea4ec65ca9ac03036e6
|
net: fix infoleak in rtnetlink
The stack object “map” has a total size of 32 bytes. Its last 4
bytes are padding generated by compiler. These padding bytes are
not initialized and sent out via “nla_put”.
Signed-off-by: Kangjie Lu <kjlu@gatech.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
int rtnetlink_put_metrics(struct sk_buff *skb, u32 *metrics)
{
struct nlattr *mx;
int i, valid = 0;
mx = nla_nest_start(skb, RTA_METRICS);
if (mx == NULL)
return -ENOBUFS;
for (i = 0; i < RTAX_MAX; i++) {
if (metrics[i]) {
if (i == RTAX_CC_ALGO - 1) {
char tmp[TCP_CA_NAME_MAX], *name;
name = tcp_ca_get_name_by_key(metrics[i], tmp);
if (!name)
continue;
if (nla_put_string(skb, i + 1, name))
goto nla_put_failure;
} else if (i == RTAX_FEATURES - 1) {
u32 user_features = metrics[i] & RTAX_FEATURE_MASK;
BUILD_BUG_ON(RTAX_FEATURE_MASK & DST_FEATURE_MASK);
if (nla_put_u32(skb, i + 1, user_features))
goto nla_put_failure;
} else {
if (nla_put_u32(skb, i + 1, metrics[i]))
goto nla_put_failure;
}
valid++;
}
}
if (!valid) {
nla_nest_cancel(skb, mx);
return 0;
}
return nla_nest_end(skb, mx);
nla_put_failure:
nla_nest_cancel(skb, mx);
return -EMSGSIZE;
}
|
int rtnetlink_put_metrics(struct sk_buff *skb, u32 *metrics)
{
struct nlattr *mx;
int i, valid = 0;
mx = nla_nest_start(skb, RTA_METRICS);
if (mx == NULL)
return -ENOBUFS;
for (i = 0; i < RTAX_MAX; i++) {
if (metrics[i]) {
if (i == RTAX_CC_ALGO - 1) {
char tmp[TCP_CA_NAME_MAX], *name;
name = tcp_ca_get_name_by_key(metrics[i], tmp);
if (!name)
continue;
if (nla_put_string(skb, i + 1, name))
goto nla_put_failure;
} else if (i == RTAX_FEATURES - 1) {
u32 user_features = metrics[i] & RTAX_FEATURE_MASK;
BUILD_BUG_ON(RTAX_FEATURE_MASK & DST_FEATURE_MASK);
if (nla_put_u32(skb, i + 1, user_features))
goto nla_put_failure;
} else {
if (nla_put_u32(skb, i + 1, metrics[i]))
goto nla_put_failure;
}
valid++;
}
}
if (!valid) {
nla_nest_cancel(skb, mx);
return 0;
}
return nla_nest_end(skb, mx);
nla_put_failure:
nla_nest_cancel(skb, mx);
return -EMSGSIZE;
}
|
C
|
linux
| 0 |
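The rtnetlink entry above is about compiler-inserted struct padding being sent out uninitialized. A minimal standalone sketch of that pattern, using a hypothetical if_map_msg struct rather than the kernel's real rtnl_link_ifmap: zeroing the whole object before filling its fields keeps the padding bytes defined before any byte-wise copy.

#include <stdio.h>
#include <string.h>

/* Hypothetical 32-byte message: on typical LP64 ABIs the three
 * unsigned longs plus the int leave 4 trailing padding bytes. */
struct if_map_msg {
    unsigned long mem_start;
    unsigned long mem_end;
    unsigned long base_addr;
    int irq;                   /* followed by 4 padding bytes on LP64 */
};

/* Unsafe: field-by-field assignment leaves the padding bytes holding
 * whatever stale data happened to be on the stack. */
static void fill_unsafe(struct if_map_msg *m)
{
    m->mem_start = 0x1000;
    m->mem_end   = 0x2000;
    m->base_addr = 0x3000;
    m->irq       = 11;
}

/* Safe: memset() the whole object first, so a later byte-wise copy
 * (e.g. into a netlink attribute) cannot leak stale stack bytes. */
static void fill_safe(struct if_map_msg *m)
{
    memset(m, 0, sizeof(*m));
    m->mem_start = 0x1000;
    m->mem_end   = 0x2000;
    m->base_addr = 0x3000;
    m->irq       = 11;
}

int main(void)
{
    struct if_map_msg a, b;

    fill_unsafe(&a);
    fill_safe(&b);

    /* Copying sizeof(a) raw bytes of 'a' would also copy its undefined
     * padding; 'b' is fully defined. */
    printf("sizeof(struct if_map_msg) = %zu\n", sizeof(a));
    (void)b;
    return 0;
}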
CVE-2017-17862
|
https://www.cvedetails.com/cve/CVE-2017-17862/
|
CWE-20
|
https://github.com/torvalds/linux/commit/c131187db2d3fa2f8bf32fdf4e9a4ef805168467
|
c131187db2d3fa2f8bf32fdf4e9a4ef805168467
|
bpf: fix branch pruning logic
when the verifier detects that register contains a runtime constant
and it's compared with another constant it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such path through the program will not be explored
by the verifier. It won't be taken at run-time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code, since it doesn't do as much data flow
analysis as the verifier does.
Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
static void init_reg_state(struct bpf_verifier_env *env,
struct bpf_reg_state *regs)
{
int i;
for (i = 0; i < MAX_BPF_REG; i++) {
mark_reg_not_init(env, regs, i);
regs[i].live = REG_LIVE_NONE;
}
/* frame pointer */
regs[BPF_REG_FP].type = PTR_TO_STACK;
mark_reg_known_zero(env, regs, BPF_REG_FP);
/* 1st arg to a function */
regs[BPF_REG_1].type = PTR_TO_CTX;
mark_reg_known_zero(env, regs, BPF_REG_1);
}
|
static void init_reg_state(struct bpf_verifier_env *env,
struct bpf_reg_state *regs)
{
int i;
for (i = 0; i < MAX_BPF_REG; i++) {
mark_reg_not_init(env, regs, i);
regs[i].live = REG_LIVE_NONE;
}
/* frame pointer */
regs[BPF_REG_FP].type = PTR_TO_STACK;
mark_reg_known_zero(env, regs, BPF_REG_FP);
/* 1st arg to a function */
regs[BPF_REG_1].type = PTR_TO_CTX;
mark_reg_known_zero(env, regs, BPF_REG_1);
}
|
C
|
linux
| 0 |
CVE-2014-4344
|
https://www.cvedetails.com/cve/CVE-2014-4344/
|
CWE-476
|
https://github.com/krb5/krb5/commit/a7886f0ed1277c69142b14a2c6629175a6331edc
|
a7886f0ed1277c69142b14a2c6629175a6331edc
|
Fix null deref in SPNEGO acceptor [CVE-2014-4344]
When processing a continuation token, acc_ctx_cont was dereferencing
the initial byte of the token without checking the length. This could
result in a null dereference.
CVE-2014-4344:
In MIT krb5 1.5 and newer, an unauthenticated or partially
authenticated remote attacker can cause a NULL dereference and
application crash during a SPNEGO negotiation by sending an empty
token as the second or later context token from initiator to acceptor.
The attacker must provide at least one valid context token in the
security context negotiation before sending the empty token. This can
be done by an unauthenticated attacker by forcing SPNEGO to
renegotiate the underlying mechanism, or by using IAKERB to wrap an
unauthenticated AS-REQ as the first token.
CVSSv2 Vector: AV:N/AC:L/Au:N/C:N/I:N/A:C/E:POC/RL:OF/RC:C
[kaduk@mit.edu: CVE summary, CVSSv2 vector]
(cherry picked from commit 524688ce87a15fc75f87efc8c039ba4c7d5c197b)
ticket: 7970
version_fixed: 1.12.2
status: resolved
|
g_token_size(gss_OID_const mech, unsigned int body_size)
{
int hdrsize;
/*
* Initialize the header size to the
* MECH_OID byte + the bytes needed to indicate the
* length of the OID + the OID itself.
*
* 0x06 [MECHLENFIELD] MECHDATA
*/
hdrsize = 1 + gssint_der_length_size(mech->length) + mech->length;
/*
* Now add the bytes needed for the initial header
* token bytes:
* 0x60 + [DER_LEN] + HDRSIZE
*/
hdrsize += 1 + gssint_der_length_size(body_size + hdrsize);
return (hdrsize + body_size);
}
|
g_token_size(gss_OID_const mech, unsigned int body_size)
{
int hdrsize;
/*
* Initialize the header size to the
* MECH_OID byte + the bytes needed to indicate the
* length of the OID + the OID itself.
*
* 0x06 [MECHLENFIELD] MECHDATA
*/
hdrsize = 1 + gssint_der_length_size(mech->length) + mech->length;
/*
* Now add the bytes needed for the initial header
* token bytes:
* 0x60 + [DER_LEN] + HDRSIZE
*/
hdrsize += 1 + gssint_der_length_size(body_size + hdrsize);
return (hdrsize + body_size);
}
|
C
|
krb5
| 0 |
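The krb5 entry above comes down to checking a token's length before reading its first byte. A hedged sketch of that guard, using a hypothetical struct token instead of the real GSS buffer types:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical buffer type standing in for a GSS-style token. */
struct token {
    size_t length;
    const unsigned char *value;
};

/* Returns 0 on success, -1 if the token is absent or empty. The
 * empty-token case is what triggered the crash: code inspected
 * token->value[0] without first checking token->length. */
static int parse_first_byte(const struct token *tok, unsigned char *out)
{
    if (tok == NULL || tok->value == NULL || tok->length == 0)
        return -1;              /* reject before dereferencing */

    *out = tok->value[0];
    return 0;
}

int main(void)
{
    struct token empty = { 0, NULL };
    unsigned char b;

    if (parse_first_byte(&empty, &b) != 0)
        printf("empty token rejected\n");
    return 0;
}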
CVE-2016-9934
|
https://www.cvedetails.com/cve/CVE-2016-9934/
|
CWE-476
|
https://github.com/php/php-src/commit/6045de69c7dedcba3eadf7c4bba424b19c81d00d
|
6045de69c7dedcba3eadf7c4bba424b19c81d00d
|
Fix bug #73331 - do not try to serialize/unserialize objects wddx can not handle
Proper solution would be to call serialize/unserialize and deal with the result,
but this requires more work that should be done by the wddx maintainer (not me).
|
static zend_class_entry *row_get_ce(const zval *object TSRMLS_DC)
{
return pdo_row_ce;
}
|
static zend_class_entry *row_get_ce(const zval *object TSRMLS_DC)
{
return pdo_row_ce;
}
|
C
|
php-src
| 0 |
CVE-2013-0217
|
https://www.cvedetails.com/cve/CVE-2013-0217/
|
CWE-399
|
https://github.com/torvalds/linux/commit/7d5145d8eb2b9791533ffe4dc003b129b9696c48
|
7d5145d8eb2b9791533ffe4dc003b129b9696c48
|
xen/netback: don't leak pages on failure in xen_netbk_tx_check_gop.
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
int xen_netbk_rx_ring_full(struct xenvif *vif)
{
RING_IDX peek = vif->rx_req_cons_peek;
RING_IDX needed = max_required_rx_slots(vif);
return ((vif->rx.sring->req_prod - peek) < needed) ||
((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
}
|
int xen_netbk_rx_ring_full(struct xenvif *vif)
{
RING_IDX peek = vif->rx_req_cons_peek;
RING_IDX needed = max_required_rx_slots(vif);
return ((vif->rx.sring->req_prod - peek) < needed) ||
((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
}
|
C
|
linux
| 0 |
CVE-2018-18021
|
https://www.cvedetails.com/cve/CVE-2018-18021/
|
CWE-20
|
https://github.com/torvalds/linux/commit/d26c25a9d19b5976b319af528886f89cf455692d
|
d26c25a9d19b5976b319af528886f89cf455692d
|
arm64: KVM: Tighten guest core register access from userspace
We currently allow userspace to access the core register file
in about any possible way, including straddling multiple
registers and doing unaligned accesses.
This is not the expected use of the ABI, and nobody is actually
using it that way. Let's tighten it by explicitly checking
the size and alignment for each field of the register file.
Cc: <stable@vger.kernel.org>
Fixes: 2f4a07c5f9fe ("arm64: KVM: guest one-reg interface")
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
[maz: rewrote Dave's initial patch to be more easily backported]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
static u64 core_reg_offset_from_id(u64 id)
{
return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
}
|
static u64 core_reg_offset_from_id(u64 id)
{
return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
}
|
C
|
linux
| 0 |
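The arm64 KVM entry above tightens userspace register access by rejecting out-of-bounds, straddling, or unaligned offsets. A generic sketch of that style of validation against a made-up register-file struct (not the real struct kvm_regs layout):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical guest register file. */
struct regfile {
    uint64_t gpr[8];
    uint32_t flags;
    uint32_t mode;
};

/* Allow an access only if it is a sane power-of-two size, naturally
 * aligned, and lies entirely inside the register file. This mirrors
 * the "no straddling, no unaligned access" rule from the commit. */
static bool access_ok(size_t offset, size_t size)
{
    if (size != 4 && size != 8)
        return false;
    if (offset % size != 0)                     /* natural alignment */
        return false;
    if (offset > sizeof(struct regfile) ||
        size > sizeof(struct regfile) - offset) /* overflow-safe bound */
        return false;
    return true;
}

int main(void)
{
    printf("%d\n", access_ok(0, 8));                       /* 1: first gpr */
    printf("%d\n", access_ok(sizeof(struct regfile), 4));  /* 0: past end */
    printf("%d\n", access_ok(3, 4));                       /* 0: unaligned */
    return 0;
}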
CVE-2016-7425
|
https://www.cvedetails.com/cve/CVE-2016-7425/
|
CWE-119
|
https://github.com/torvalds/linux/commit/7bc2b55a5c030685b399bb65b6baa9ccc3d1f167
|
7bc2b55a5c030685b399bb65b6baa9ccc3d1f167
|
scsi: arcmsr: Buffer overflow in arcmsr_iop_message_xfer()
We need to put an upper bound on "user_len" so the memcpy() doesn't
overflow.
Cc: <stable@vger.kernel.org>
Reported-by: Marco Grassi <marco.gra@gmail.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
static uint8_t arcmsr_hbaA_abort_allcmd(struct AdapterControlBlock *acb)
{
struct MessageUnit_A __iomem *reg = acb->pmuA;
writel(ARCMSR_INBOUND_MESG0_ABORT_CMD, &reg->inbound_msgaddr0);
if (!arcmsr_hbaA_wait_msgint_ready(acb)) {
printk(KERN_NOTICE
"arcmsr%d: wait 'abort all outstanding command' timeout\n"
, acb->host->host_no);
return false;
}
return true;
}
|
static uint8_t arcmsr_hbaA_abort_allcmd(struct AdapterControlBlock *acb)
{
struct MessageUnit_A __iomem *reg = acb->pmuA;
writel(ARCMSR_INBOUND_MESG0_ABORT_CMD, &reg->inbound_msgaddr0);
if (!arcmsr_hbaA_wait_msgint_ready(acb)) {
printk(KERN_NOTICE
"arcmsr%d: wait 'abort all outstanding command' timeout\n"
, acb->host->host_no);
return false;
}
return true;
}
|
C
|
linux
| 0 |
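The arcmsr entry above puts an upper bound on a user-supplied length before a memcpy into a fixed buffer. A minimal illustration of the clamp, with an invented buffer size and function name:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MSG_BUF_SIZE 128  /* hypothetical fixed-size destination buffer */

/* Copy at most MSG_BUF_SIZE bytes, regardless of what the caller (or a
 * remote peer) claims the length is. Returns the bytes actually copied. */
static size_t copy_user_message(uint8_t *dst, const uint8_t *src, size_t user_len)
{
    size_t n = user_len;

    if (n > MSG_BUF_SIZE)   /* the clamp that was missing in the bug */
        n = MSG_BUF_SIZE;

    memcpy(dst, src, n);
    return n;
}

int main(void)
{
    uint8_t dst[MSG_BUF_SIZE];
    uint8_t src[1024] = {0};

    /* A malicious length of 1024 is clamped to 128. */
    size_t copied = copy_user_message(dst, src, sizeof(src));
    printf("copied %zu bytes\n", copied);
    return 0;
}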
CVE-2014-5077
|
https://www.cvedetails.com/cve/CVE-2014-5077/
| null |
https://github.com/torvalds/linux/commit/1be9a950c646c9092fb3618197f7b6bfb50e82aa
|
1be9a950c646c9092fb3618197f7b6bfb50e82aa
|
net: sctp: inherit auth_capable on INIT collisions
Jason reported an oops caused by SCTP on his ARM machine with
SCTP authentication enabled:
Internal error: Oops: 17 [#1] ARM
CPU: 0 PID: 104 Comm: sctp-test Not tainted 3.13.0-68744-g3632f30c9b20-dirty #1
task: c6eefa40 ti: c6f52000 task.ti: c6f52000
PC is at sctp_auth_calculate_hmac+0xc4/0x10c
LR is at sg_init_table+0x20/0x38
pc : [<c024bb80>] lr : [<c00f32dc>] psr: 40000013
sp : c6f538e8 ip : 00000000 fp : c6f53924
r10: c6f50d80 r9 : 00000000 r8 : 00010000
r7 : 00000000 r6 : c7be4000 r5 : 00000000 r4 : c6f56254
r3 : c00c8170 r2 : 00000001 r1 : 00000008 r0 : c6f1e660
Flags: nZcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 0005397f Table: 06f28000 DAC: 00000015
Process sctp-test (pid: 104, stack limit = 0xc6f521c0)
Stack: (0xc6f538e8 to 0xc6f54000)
[...]
Backtrace:
[<c024babc>] (sctp_auth_calculate_hmac+0x0/0x10c) from [<c0249af8>] (sctp_packet_transmit+0x33c/0x5c8)
[<c02497bc>] (sctp_packet_transmit+0x0/0x5c8) from [<c023e96c>] (sctp_outq_flush+0x7fc/0x844)
[<c023e170>] (sctp_outq_flush+0x0/0x844) from [<c023ef78>] (sctp_outq_uncork+0x24/0x28)
[<c023ef54>] (sctp_outq_uncork+0x0/0x28) from [<c0234364>] (sctp_side_effects+0x1134/0x1220)
[<c0233230>] (sctp_side_effects+0x0/0x1220) from [<c02330b0>] (sctp_do_sm+0xac/0xd4)
[<c0233004>] (sctp_do_sm+0x0/0xd4) from [<c023675c>] (sctp_assoc_bh_rcv+0x118/0x160)
[<c0236644>] (sctp_assoc_bh_rcv+0x0/0x160) from [<c023d5bc>] (sctp_inq_push+0x6c/0x74)
[<c023d550>] (sctp_inq_push+0x0/0x74) from [<c024a6b0>] (sctp_rcv+0x7d8/0x888)
While we already had various kind of bugs in that area
ec0223ec48a9 ("net: sctp: fix sctp_sf_do_5_1D_ce to verify if
we/peer is AUTH capable") and b14878ccb7fa ("net: sctp: cache
auth_enable per endpoint"), this one is a bit of a different
kind.
Giving a bit more background on why SCTP authentication is
needed can be found in RFC4895:
SCTP uses 32-bit verification tags to protect itself against
blind attackers. These values are not changed during the
lifetime of an SCTP association.
Looking at new SCTP extensions, there is the need to have a
method of proving that an SCTP chunk(s) was really sent by
the original peer that started the association and not by a
malicious attacker.
To cause this bug, we're triggering an INIT collision between
peers; normal SCTP handshake where both sides intent to
authenticate packets contains RANDOM; CHUNKS; HMAC-ALGO
parameters that are being negotiated among peers:
---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
<------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
-------------------- COOKIE-ECHO -------------------->
<-------------------- COOKIE-ACK ---------------------
RFC4895 says that each endpoint therefore knows its own random
number and the peer's random number *after* the association
has been established. The local and peer's random number along
with the shared key are then part of the secret used for
calculating the HMAC in the AUTH chunk.
Now, in our scenario, we have 2 threads with 1 non-blocking
SEQ_PACKET socket each, setting up common shared SCTP_AUTH_KEY
and SCTP_AUTH_ACTIVE_KEY properly, and each of them calling
sctp_bindx(3), listen(2) and connect(2) against each other,
thus the handshake looks similar to this, e.g.:
---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
<------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
<--------- INIT[RANDOM; CHUNKS; HMAC-ALGO] -----------
-------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] -------->
...
Since such collisions can also happen with verification tags,
the RFC4895 for AUTH rather vaguely says under section 6.1:
In case of INIT collision, the rules governing the handling
of this Random Number follow the same pattern as those for
the Verification Tag, as explained in Section 5.2.4 of
RFC 2960 [5]. Therefore, each endpoint knows its own Random
Number and the peer's Random Number after the association
has been established.
In RFC2960, section 5.2.4, we're eventually hitting Action B:
B) In this case, both sides may be attempting to start an
association at about the same time but the peer endpoint
started its INIT after responding to the local endpoint's
INIT. Thus it may have picked a new Verification Tag not
being aware of the previous Tag it had sent this endpoint.
The endpoint should stay in or enter the ESTABLISHED
state but it MUST update its peer's Verification Tag from
the State Cookie, stop any init or cookie timers that may
running and send a COOKIE ACK.
In other words, the handling of the Random parameter is the
same as behavior for the Verification Tag as described in
Action B of section 5.2.4.
Looking at the code, we exactly hit the sctp_sf_do_dupcook_b()
case which triggers an SCTP_CMD_UPDATE_ASSOC command to the
side effect interpreter, and in fact it properly copies over
peer_{random, hmacs, chunks} parameters from the newly created
association to update the existing one.
Also, the old asoc_shared_key is being released and based on
the new params, sctp_auth_asoc_init_active_key() updated.
However, the issue observed in this case is that the previous
asoc->peer.auth_capable was 0, and has *not* been updated, so
that instead of creating a new secret, we're doing an early
return from the function sctp_auth_asoc_init_active_key()
leaving asoc->asoc_shared_key as NULL. However, we now have to
authenticate chunks from the updated chunk list (e.g. COOKIE-ACK).
That in fact causes the server side when responding with ...
<------------------ AUTH; COOKIE-ACK -----------------
... to trigger a NULL pointer dereference, since in
sctp_packet_transmit(), it discovers that an AUTH chunk is
being queued for xmit, and thus it calls sctp_auth_calculate_hmac().
Since the asoc->active_key_id is still inherited from the
endpoint, and the same as encoded into the chunk, it uses
asoc->asoc_shared_key, which is still NULL, as an asoc_key
and dereferences it in ...
crypto_hash_setkey(desc.tfm, &asoc_key->data[0], asoc_key->len)
... causing an oops. All this happens because sctp_make_cookie_ack()
called with the *new* association has the peer.auth_capable=1
and therefore marks the chunk with auth=1 after checking
sctp_auth_send_cid(), but it is *actually* sent later on over
the then *updated* association's transport that didn't initialize
its shared key due to peer.auth_capable=0. Since control chunks
in that case are not sent by the temporary association which
are scheduled for deletion, they are issued for xmit via
SCTP_CMD_REPLY in the interpreter with the context of the
*updated* association. peer.auth_capable was 0 in the updated
association (which went from COOKIE_WAIT into ESTABLISHED state),
since all previous processing that performed sctp_process_init()
was being done on temporary associations, that we eventually
throw away each time.
The correct fix is to update to the new peer.auth_capable
value as well in the collision case via sctp_assoc_update(),
so that in case the collision migrated from 0 -> 1,
sctp_auth_asoc_init_active_key() can properly recalculate
the secret. This therefore fixes the observed server panic.
Fixes: 730fc3d05cd4 ("[SCTP]: Implete SCTP-AUTH parameter processing")
Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: Vlad Yasevich <vyasevich@gmail.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
int sctp_assoc_lookup_laddr(struct sctp_association *asoc,
const union sctp_addr *laddr)
{
int found = 0;
if ((asoc->base.bind_addr.port == ntohs(laddr->v4.sin_port)) &&
sctp_bind_addr_match(&asoc->base.bind_addr, laddr,
sctp_sk(asoc->base.sk)))
found = 1;
return found;
}
|
int sctp_assoc_lookup_laddr(struct sctp_association *asoc,
const union sctp_addr *laddr)
{
int found = 0;
if ((asoc->base.bind_addr.port == ntohs(laddr->v4.sin_port)) &&
sctp_bind_addr_match(&asoc->base.bind_addr, laddr,
sctp_sk(asoc->base.sk)))
found = 1;
return found;
}
|
C
|
linux
| 0 |
CVE-2018-6111
|
https://www.cvedetails.com/cve/CVE-2018-6111/
|
CWE-20
|
https://github.com/chromium/chromium/commit/3c8e4852477d5b1e2da877808c998dc57db9460f
|
3c8e4852477d5b1e2da877808c998dc57db9460f
|
DevTools: speculative fix for crash in NetworkHandler::Disable
This keeps BrowserContext* and StoragePartition* instead of
RenderProcessHost* in an attemp to resolve UAF of RenderProcessHost
upon closure of DevTools front-end.
Bug: 801117, 783067, 780694
Change-Id: I6c2cca60cc0c29f0949d189cf918769059f80c1b
Reviewed-on: https://chromium-review.googlesource.com/876657
Commit-Queue: Andrey Kosyakov <caseq@chromium.org>
Reviewed-by: Dmitry Gozman <dgozman@chromium.org>
Cr-Commit-Position: refs/heads/master@{#531157}
|
void RenderFrameDevToolsAgentHost::DisconnectWebContents() {
frame_tree_node_ = nullptr;
navigation_handles_.clear();
WebContentsObserver::Observe(nullptr);
scoped_refptr<RenderFrameDevToolsAgentHost> protect(this);
UpdateFrameHost(nullptr);
for (DevToolsSession* session : sessions())
session->ResumeSendingMessagesToAgent();
}
|
void RenderFrameDevToolsAgentHost::DisconnectWebContents() {
frame_tree_node_ = nullptr;
navigation_handles_.clear();
WebContentsObserver::Observe(nullptr);
scoped_refptr<RenderFrameDevToolsAgentHost> protect(this);
UpdateFrameHost(nullptr);
for (DevToolsSession* session : sessions())
session->ResumeSendingMessagesToAgent();
}
|
C
|
Chrome
| 0 |
CVE-2017-9250
|
https://www.cvedetails.com/cve/CVE-2017-9250/
|
CWE-476
|
https://github.com/zherczeg/jerryscript/commit/03a8c630f015f63268639d3ed3bf82cff6fa77d8
|
03a8c630f015f63268639d3ed3bf82cff6fa77d8
|
Do not allocate memory for zero length strings.
Fixes #1821.
JerryScript-DCO-1.0-Signed-off-by: Zoltan Herczeg zherczeg.u-szeged@partner.samsung.com
|
lexer_expect_identifier (parser_context_t *context_p, /**< context */
uint8_t literal_type) /**< literal type */
{
JERRY_ASSERT (literal_type == LEXER_STRING_LITERAL
|| literal_type == LEXER_IDENT_LITERAL);
skip_spaces (context_p);
context_p->token.line = context_p->line;
context_p->token.column = context_p->column;
if (context_p->source_p < context_p->source_end_p
&& (lit_char_is_identifier_start (context_p->source_p) || context_p->source_p[0] == LIT_CHAR_BACKSLASH))
{
lexer_parse_identifier (context_p, literal_type != LEXER_STRING_LITERAL);
if (context_p->token.type == LEXER_LITERAL)
{
lexer_construct_literal_object (context_p,
&context_p->token.lit_location,
literal_type);
if (literal_type == LEXER_IDENT_LITERAL
&& (context_p->status_flags & PARSER_IS_STRICT)
&& context_p->lit_object.type != LEXER_LITERAL_OBJECT_ANY)
{
parser_error_t error;
if (context_p->lit_object.type == LEXER_LITERAL_OBJECT_EVAL)
{
error = PARSER_ERR_EVAL_NOT_ALLOWED;
}
else
{
JERRY_ASSERT (context_p->lit_object.type == LEXER_LITERAL_OBJECT_ARGUMENTS);
error = PARSER_ERR_ARGUMENTS_NOT_ALLOWED;
}
parser_raise_error (context_p, error);
}
context_p->token.lit_location.type = literal_type;
return;
}
}
parser_raise_error (context_p, PARSER_ERR_IDENTIFIER_EXPECTED);
} /* lexer_expect_identifier */
|
lexer_expect_identifier (parser_context_t *context_p, /**< context */
uint8_t literal_type) /**< literal type */
{
JERRY_ASSERT (literal_type == LEXER_STRING_LITERAL
|| literal_type == LEXER_IDENT_LITERAL);
skip_spaces (context_p);
context_p->token.line = context_p->line;
context_p->token.column = context_p->column;
if (context_p->source_p < context_p->source_end_p
&& (lit_char_is_identifier_start (context_p->source_p) || context_p->source_p[0] == LIT_CHAR_BACKSLASH))
{
lexer_parse_identifier (context_p, literal_type != LEXER_STRING_LITERAL);
if (context_p->token.type == LEXER_LITERAL)
{
lexer_construct_literal_object (context_p,
&context_p->token.lit_location,
literal_type);
if (literal_type == LEXER_IDENT_LITERAL
&& (context_p->status_flags & PARSER_IS_STRICT)
&& context_p->lit_object.type != LEXER_LITERAL_OBJECT_ANY)
{
parser_error_t error;
if (context_p->lit_object.type == LEXER_LITERAL_OBJECT_EVAL)
{
error = PARSER_ERR_EVAL_NOT_ALLOWED;
}
else
{
JERRY_ASSERT (context_p->lit_object.type == LEXER_LITERAL_OBJECT_ARGUMENTS);
error = PARSER_ERR_ARGUMENTS_NOT_ALLOWED;
}
parser_raise_error (context_p, error);
}
context_p->token.lit_location.type = literal_type;
return;
}
}
parser_raise_error (context_p, PARSER_ERR_IDENTIFIER_EXPECTED);
} /* lexer_expect_identifier */
|
C
|
jerryscript
| 0 |
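The jerryscript entry above stops allocating buffers for zero-length strings. The general guard, sketched independently of the engine's allocator (names are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a copy of (src, len), or NULL when len == 0. Skipping the
 * allocation for empty strings avoids handing around a pointer that
 * nothing may dereference and that malloc(0) may legally return as NULL. */
static unsigned char *copy_literal(const unsigned char *src, size_t len)
{
    unsigned char *p;

    if (len == 0)
        return NULL;            /* no allocation for zero-length data */

    p = malloc(len);
    if (p != NULL)
        memcpy(p, src, len);
    return p;
}

int main(void)
{
    const unsigned char lit[] = "hi";
    unsigned char *copy = copy_literal(lit, 2);

    if (copy != NULL) {
        printf("%c%c\n", copy[0], copy[1]);
        free(copy);
    }
    printf("empty -> %p\n", (void *)copy_literal(lit, 0));
    return 0;
}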
CVE-2013-2884
|
https://www.cvedetails.com/cve/CVE-2013-2884/
|
CWE-399
|
https://github.com/chromium/chromium/commit/4ac8bc08e3306f38a5ab3e551aef6ad43753579c
|
4ac8bc08e3306f38a5ab3e551aef6ad43753579c
|
Set Attr.ownerDocument in Element#setAttributeNode()
Attr objects can move across documents by setAttributeNode().
So It needs to reset ownerDocument through TreeScopeAdoptr::adoptIfNeeded().
BUG=248950
TEST=set-attribute-node-from-iframe.html
Review URL: https://chromiumcodereview.appspot.com/17583003
git-svn-id: svn://svn.chromium.org/blink/trunk@152938 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
const AtomicString& Element::imageSourceURL() const
{
return getAttribute(srcAttr);
}
|
const AtomicString& Element::imageSourceURL() const
{
return getAttribute(srcAttr);
}
|
C
|
Chrome
| 0 |
CVE-2013-0842
|
https://www.cvedetails.com/cve/CVE-2013-0842/
| null |
https://github.com/chromium/chromium/commit/10cbaf017570ba6454174c55b844647aa6a9b3b4
|
10cbaf017570ba6454174c55b844647aa6a9b3b4
|
Validate that paths don't contain embedded NULLs at deserialization.
BUG=166867
Review URL: https://chromiumcodereview.appspot.com/11743009
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@174935 0039d316-1c4b-4281-b951-d872f2087c98
|
void ParamTraits<unsigned short>::Log(const param_type& p, std::string* l) {
l->append(base::UintToString(p));
}
|
void ParamTraits<unsigned short>::Log(const param_type& p, std::string* l) {
l->append(base::UintToString(p));
}
|
C
|
Chrome
| 0 |
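The Chromium entry above validates that deserialized paths contain no embedded NULs. The same check in plain C over a length-prefixed buffer, with illustrative names rather than the Chromium API:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* A deserialized string arrives as (pointer, length); an embedded '\0'
 * means the logical string and the C-string views of it disagree, which
 * is how later path checks get bypassed. */
static bool path_is_well_formed(const char *data, size_t len)
{
    return data != NULL && memchr(data, '\0', len) == NULL;
}

int main(void)
{
    const char good[] = { 'e', 't', 'c' };
    const char bad[]  = { 'e', 't', 'c', '\0', 'x' };

    printf("good: %d\n", path_is_well_formed(good, sizeof(good)));  /* 1 */
    printf("bad:  %d\n", path_is_well_formed(bad, sizeof(bad)));    /* 0 */
    return 0;
}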
CVE-2014-8324
|
https://www.cvedetails.com/cve/CVE-2014-8324/
|
CWE-20
|
https://github.com/aircrack-ng/aircrack-ng/commit/88702a3ce4c28a973bf69023cd0312f412f6193e
|
88702a3ce4c28a973bf69023cd0312f412f6193e
|
OSdep: Fixed segmentation fault that happens with a malicious server sending a negative length (Closes #16 on GitHub).
git-svn-id: http://svn.aircrack-ng.org/trunk@2419 28c6078b-6c39-48e3-add9-af49d547ecab
|
static int net_write(struct wif *wi, unsigned char *h80211, int len,
struct tx_info *ti)
{
struct priv_net *pn = wi_priv(wi);
int sz = sizeof(*ti);
unsigned char buf[2048];
unsigned char *ptr = buf;
/* XXX */
if (ti)
memcpy(ptr, ti, sz);
else
memset(ptr, 0, sizeof(*ti));
ptr += sz;
memcpy(ptr, h80211, len);
sz += len;
return net_cmd(pn, NET_WRITE, buf, sz);
}
|
static int net_write(struct wif *wi, unsigned char *h80211, int len,
struct tx_info *ti)
{
struct priv_net *pn = wi_priv(wi);
int sz = sizeof(*ti);
unsigned char buf[2048];
unsigned char *ptr = buf;
/* XXX */
if (ti)
memcpy(ptr, ti, sz);
else
memset(ptr, 0, sizeof(*ti));
ptr += sz;
memcpy(ptr, h80211, len);
sz += len;
return net_cmd(pn, NET_WRITE, buf, sz);
}
|
C
|
aircrack-ng
| 0 |
CVE-2016-2324
|
https://www.cvedetails.com/cve/CVE-2016-2324/
|
CWE-119
|
https://github.com/git/git/commit/de1e67d0703894cb6ea782e36abb63976ab07e60
|
de1e67d0703894cb6ea782e36abb63976ab07e60
|
list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
static int check_pack_inflate(struct packed_git *p,
struct pack_window **w_curs,
off_t offset,
off_t len,
unsigned long expect)
{
git_zstream stream;
unsigned char fakebuf[4096], *in;
int st;
memset(&stream, 0, sizeof(stream));
git_inflate_init(&stream);
do {
in = use_pack(p, w_curs, offset, &stream.avail_in);
stream.next_in = in;
stream.next_out = fakebuf;
stream.avail_out = sizeof(fakebuf);
st = git_inflate(&stream, Z_FINISH);
offset += stream.next_in - in;
} while (st == Z_OK || st == Z_BUF_ERROR);
git_inflate_end(&stream);
return (st == Z_STREAM_END &&
stream.total_out == expect &&
stream.total_in == len) ? 0 : -1;
}
|
static int check_pack_inflate(struct packed_git *p,
struct pack_window **w_curs,
off_t offset,
off_t len,
unsigned long expect)
{
git_zstream stream;
unsigned char fakebuf[4096], *in;
int st;
memset(&stream, 0, sizeof(stream));
git_inflate_init(&stream);
do {
in = use_pack(p, w_curs, offset, &stream.avail_in);
stream.next_in = in;
stream.next_out = fakebuf;
stream.avail_out = sizeof(fakebuf);
st = git_inflate(&stream, Z_FINISH);
offset += stream.next_in - in;
} while (st == Z_OK || st == Z_BUF_ERROR);
git_inflate_end(&stream);
return (st == Z_STREAM_END &&
stream.total_out == expect &&
stream.total_in == len) ? 0 : -1;
}
|
C
|
git
| 0 |
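The git entry above describes appending the final path component to an existing buffer, handing the full path to the callback, and then rolling the length back instead of concatenating into a fresh string. A tiny self-contained buffer makes the idea concrete; this is a sketch, not git's actual strbuf API:

#include <stdio.h>
#include <string.h>

/* Minimal growable-string stand-in; fixed capacity keeps the sketch short. */
struct buf {
    char   data[256];
    size_t len;
};

static void buf_append(struct buf *b, const char *s)
{
    size_t n = strlen(s);
    if (b->len + n < sizeof(b->data) - 1) {
        memcpy(b->data + b->len, s, n);
        b->len += n;
        b->data[b->len] = '\0';
    }
}

static void show_object(const char *full_path)
{
    printf("callback sees: %s\n", full_path);
}

int main(void)
{
    struct buf path = { "a/b/", 4 };
    size_t saved = path.len;       /* remember length before the component */

    buf_append(&path, "c");        /* temporarily extend to "a/b/c" */
    show_object(path.data);        /* callback gets the full pathname */

    path.len = saved;              /* roll back: no copy, no free */
    path.data[path.len] = '\0';
    printf("restored prefix: %s\n", path.data);
    return 0;
}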
CVE-2016-3839
|
https://www.cvedetails.com/cve/CVE-2016-3839/
|
CWE-284
|
https://android.googlesource.com/platform/system/bt/+/472271b153c5dc53c28beac55480a8d8434b2d5c
|
472271b153c5dc53c28beac55480a8d8434b2d5c
|
DO NOT MERGE Fix potential DoS caused by delivering signal to BT process
Bug: 28885210
Change-Id: I63866d894bfca47464d6e42e3fb0357c4f94d360
Conflicts:
btif/co/bta_hh_co.c
btif/src/btif_core.c
Merge conflict resolution of ag/1161415 (referencing ag/1164670)
- Directly into mnc-mr2-release
|
static section_t *section_find(const config_t *config, const char *section) {
for (const list_node_t *node = list_begin(config->sections); node != list_end(config->sections); node = list_next(node)) {
section_t *sec = list_node(node);
if (!strcmp(sec->name, section))
return sec;
}
return NULL;
}
|
static section_t *section_find(const config_t *config, const char *section) {
for (const list_node_t *node = list_begin(config->sections); node != list_end(config->sections); node = list_next(node)) {
section_t *sec = list_node(node);
if (!strcmp(sec->name, section))
return sec;
}
return NULL;
}
|
C
|
Android
| 0 |
CVE-2019-5828
|
https://www.cvedetails.com/cve/CVE-2019-5828/
|
CWE-416
|
https://github.com/chromium/chromium/commit/761d65ebcac0cdb730fd27b87e207201ac38e3b4
|
761d65ebcac0cdb730fd27b87e207201ac38e3b4
|
[Payment Handler] Don't wait for response from closed payment app.
Before this patch, tapping the back button on top of the payment handler
window on desktop would not affect the |response_helper_|, which would
continue waiting for a response from the payment app. The service worker
of the closed payment app could timeout after 5 minutes and invoke the
|response_helper_|. Depending on what else the user did afterwards, in
the best case scenario, the payment sheet would display a "Transaction
failed" error message. In the worst case scenario, the
|response_helper_| would be used after free.
This patch clears the |response_helper_| in the PaymentRequestState and
in the ServiceWorkerPaymentInstrument after the payment app is closed.
After this patch, the cancelled payment app does not show "Transaction
failed" and does not use memory after it was freed.
Bug: 956597
Change-Id: I64134b911a4f8c154cb56d537a8243a68a806394
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1588682
Reviewed-by: anthonyvd <anthonyvd@chromium.org>
Commit-Queue: Rouslan Solomakhin <rouslan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#654995}
|
void ServiceWorkerPaymentInstrument::OnCanMakePaymentEventResponded(
ValidateCanMakePaymentCallback callback,
bool result) {
if (base::FeatureList::IsEnabled(
::features::kPaymentRequestHasEnrolledInstrument)) {
can_make_payment_result_ = true;
has_enrolled_instrument_result_ = result;
} else {
can_make_payment_result_ = result;
has_enrolled_instrument_result_ = result;
}
base::ThreadTaskRunnerHandle::Get()->PostTask(
FROM_HERE,
base::BindOnce(std::move(callback), this, can_make_payment_result_));
}
|
void ServiceWorkerPaymentInstrument::OnCanMakePaymentEventResponded(
ValidateCanMakePaymentCallback callback,
bool result) {
if (base::FeatureList::IsEnabled(
::features::kPaymentRequestHasEnrolledInstrument)) {
can_make_payment_result_ = true;
has_enrolled_instrument_result_ = result;
} else {
can_make_payment_result_ = result;
has_enrolled_instrument_result_ = result;
}
base::ThreadTaskRunnerHandle::Get()->PostTask(
FROM_HERE,
base::BindOnce(std::move(callback), this, can_make_payment_result_));
}
|
C
|
Chrome
| 0 |
CVE-2012-6075
|
https://www.cvedetails.com/cve/CVE-2012-6075/
|
CWE-119
|
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=b0d9ffcd0251161c7c92f94804dcf599dfa3edeb
|
b0d9ffcd0251161c7c92f94804dcf599dfa3edeb
| null |
set_eecd(E1000State *s, int index, uint32_t val)
{
uint32_t oldval = s->eecd_state.old_eecd;
s->eecd_state.old_eecd = val & (E1000_EECD_SK | E1000_EECD_CS |
E1000_EECD_DI|E1000_EECD_FWE_MASK|E1000_EECD_REQ);
if (!(E1000_EECD_CS & val)) // CS inactive; nothing to do
return;
if (E1000_EECD_CS & (val ^ oldval)) { // CS rise edge; reset state
s->eecd_state.val_in = 0;
s->eecd_state.bitnum_in = 0;
s->eecd_state.bitnum_out = 0;
s->eecd_state.reading = 0;
}
if (!(E1000_EECD_SK & (val ^ oldval))) // no clock edge
return;
if (!(E1000_EECD_SK & val)) { // falling edge
s->eecd_state.bitnum_out++;
return;
}
s->eecd_state.val_in <<= 1;
if (val & E1000_EECD_DI)
s->eecd_state.val_in |= 1;
if (++s->eecd_state.bitnum_in == 9 && !s->eecd_state.reading) {
s->eecd_state.bitnum_out = ((s->eecd_state.val_in & 0x3f)<<4)-1;
s->eecd_state.reading = (((s->eecd_state.val_in >> 6) & 7) ==
EEPROM_READ_OPCODE_MICROWIRE);
}
DBGOUT(EEPROM, "eeprom bitnum in %d out %d, reading %d\n",
s->eecd_state.bitnum_in, s->eecd_state.bitnum_out,
s->eecd_state.reading);
}
|
set_eecd(E1000State *s, int index, uint32_t val)
{
uint32_t oldval = s->eecd_state.old_eecd;
s->eecd_state.old_eecd = val & (E1000_EECD_SK | E1000_EECD_CS |
E1000_EECD_DI|E1000_EECD_FWE_MASK|E1000_EECD_REQ);
if (!(E1000_EECD_CS & val)) // CS inactive; nothing to do
return;
if (E1000_EECD_CS & (val ^ oldval)) { // CS rise edge; reset state
s->eecd_state.val_in = 0;
s->eecd_state.bitnum_in = 0;
s->eecd_state.bitnum_out = 0;
s->eecd_state.reading = 0;
}
if (!(E1000_EECD_SK & (val ^ oldval))) // no clock edge
return;
if (!(E1000_EECD_SK & val)) { // falling edge
s->eecd_state.bitnum_out++;
return;
}
s->eecd_state.val_in <<= 1;
if (val & E1000_EECD_DI)
s->eecd_state.val_in |= 1;
if (++s->eecd_state.bitnum_in == 9 && !s->eecd_state.reading) {
s->eecd_state.bitnum_out = ((s->eecd_state.val_in & 0x3f)<<4)-1;
s->eecd_state.reading = (((s->eecd_state.val_in >> 6) & 7) ==
EEPROM_READ_OPCODE_MICROWIRE);
}
DBGOUT(EEPROM, "eeprom bitnum in %d out %d, reading %d\n",
s->eecd_state.bitnum_in, s->eecd_state.bitnum_out,
s->eecd_state.reading);
}
|
C
|
qemu
| 0 |
CVE-2013-0842
|
https://www.cvedetails.com/cve/CVE-2013-0842/
| null |
https://github.com/chromium/chromium/commit/10cbaf017570ba6454174c55b844647aa6a9b3b4
|
10cbaf017570ba6454174c55b844647aa6a9b3b4
|
Validate that paths don't contain embedded NULLs at deserialization.
BUG=166867
Review URL: https://chromiumcodereview.appspot.com/11743009
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@174935 0039d316-1c4b-4281-b951-d872f2087c98
|
void ParamTraits<float>::Log(const param_type& p, std::string* l) {
l->append(StringPrintf("%e", p));
}
|
void ParamTraits<float>::Log(const param_type& p, std::string* l) {
l->append(StringPrintf("%e", p));
}
|
C
|
Chrome
| 0 |
CVE-2018-6033
|
https://www.cvedetails.com/cve/CVE-2018-6033/
|
CWE-20
|
https://github.com/chromium/chromium/commit/a8d6ae61d266d8bc44c3dd2d08bda32db701e359
|
a8d6ae61d266d8bc44c3dd2d08bda32db701e359
|
Downloads : Fixed an issue of opening incorrect download file
When one download overwrites another completed download, calling download.open in the old download causes the new download to open, which could be dangerous and undesirable. In this CL, we are trying to avoid this by blocking the opening of the old download.
Bug: 793620
Change-Id: Ic948175756700ad7c08489c3cc347330daedb6f8
Reviewed-on: https://chromium-review.googlesource.com/826477
Reviewed-by: David Trainor <dtrainor@chromium.org>
Reviewed-by: Xing Liu <xingliu@chromium.org>
Reviewed-by: John Abd-El-Malek <jam@chromium.org>
Commit-Queue: Shakti Sahu <shaktisahu@chromium.org>
Cr-Commit-Position: refs/heads/master@{#525810}
|
void DownloadManagerImpl::InterceptNavigationOnChecksComplete(
ResourceRequestInfo::WebContentsGetter web_contents_getter,
std::unique_ptr<ResourceRequest> resource_request,
std::vector<GURL> url_chain,
scoped_refptr<ResourceResponse> response,
net::CertStatus cert_status,
mojom::URLLoaderClientEndpointsPtr url_loader_client_endpoints,
bool is_download_allowed) {
if (!is_download_allowed)
return;
BrowserThread::PostTask(
BrowserThread::IO, FROM_HERE,
base::BindOnce(&DownloadManagerImpl::CreateDownloadHandlerForNavigation,
weak_factory_.GetWeakPtr(), web_contents_getter,
std::move(resource_request), std::move(url_chain),
std::move(response), std::move(cert_status),
std::move(url_loader_client_endpoints)));
}
|
void DownloadManagerImpl::InterceptNavigationOnChecksComplete(
ResourceRequestInfo::WebContentsGetter web_contents_getter,
std::unique_ptr<ResourceRequest> resource_request,
std::vector<GURL> url_chain,
scoped_refptr<ResourceResponse> response,
net::CertStatus cert_status,
mojom::URLLoaderClientEndpointsPtr url_loader_client_endpoints,
bool is_download_allowed) {
if (!is_download_allowed)
return;
BrowserThread::PostTask(
BrowserThread::IO, FROM_HERE,
base::BindOnce(&DownloadManagerImpl::CreateDownloadHandlerForNavigation,
weak_factory_.GetWeakPtr(), web_contents_getter,
std::move(resource_request), std::move(url_chain),
std::move(response), std::move(cert_status),
std::move(url_loader_client_endpoints)));
}
|
C
|
Chrome
| 0 |
CVE-2017-6903
|
https://www.cvedetails.com/cve/CVE-2017-6903/
|
CWE-269
|
https://github.com/ioquake/ioq3/commit/b173ac05993f634a42be3d3535e1b158de0c3372
|
b173ac05993f634a42be3d3535e1b158de0c3372
|
Merge some file writing extension checks from OpenJK.
Thanks Ensiform.
https://github.com/JACoders/OpenJK/commit/05928a57f9e4aae15a3bd0
https://github.com/JACoders/OpenJK/commit/ef124fd0fc48af164581176
|
void Com_InitZoneMemory( void ) {
cvar_t *cv;
cv = Cvar_Get( "com_zoneMegs", DEF_COMZONEMEGS_S, CVAR_LATCH | CVAR_ARCHIVE );
if ( cv->integer < DEF_COMZONEMEGS ) {
s_zoneTotal = 1024 * 1024 * DEF_COMZONEMEGS;
} else {
s_zoneTotal = cv->integer * 1024 * 1024;
}
mainzone = calloc( s_zoneTotal, 1 );
if ( !mainzone ) {
Com_Error( ERR_FATAL, "Zone data failed to allocate %i megs", s_zoneTotal / (1024*1024) );
}
Z_ClearZone( mainzone, s_zoneTotal );
}
|
void Com_InitZoneMemory( void ) {
cvar_t *cv;
cv = Cvar_Get( "com_zoneMegs", DEF_COMZONEMEGS_S, CVAR_LATCH | CVAR_ARCHIVE );
if ( cv->integer < DEF_COMZONEMEGS ) {
s_zoneTotal = 1024 * 1024 * DEF_COMZONEMEGS;
} else {
s_zoneTotal = cv->integer * 1024 * 1024;
}
mainzone = calloc( s_zoneTotal, 1 );
if ( !mainzone ) {
Com_Error( ERR_FATAL, "Zone data failed to allocate %i megs", s_zoneTotal / (1024*1024) );
}
Z_ClearZone( mainzone, s_zoneTotal );
}
|
C
|
OpenJK
| 0 |
CVE-2013-0922
|
https://www.cvedetails.com/cve/CVE-2013-0922/
|
CWE-264
|
https://github.com/chromium/chromium/commit/28aaa72a03df96fa1934876b0efbbc7e6b4b38af
|
28aaa72a03df96fa1934876b0efbbc7e6b4b38af
|
Revert cross-origin auth prompt blocking.
BUG=174129
Review URL: https://chromiumcodereview.appspot.com/12183030
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@181113 0039d316-1c4b-4281-b951-d872f2087c98
|
AuthInfo(const std::string username,
const std::string password)
: username_(username), password_(password) {}
|
AuthInfo(const std::string username,
const std::string password)
: username_(username), password_(password) {}
|
C
|
Chrome
| 0 |
CVE-2016-3698
|
https://www.cvedetails.com/cve/CVE-2016-3698/
|
CWE-284
|
https://github.com/jpirko/libndp/commit/a4892df306e0532487f1634ba6d4c6d4bb381c7f
|
a4892df306e0532487f1634ba6d4c6d4bb381c7f
|
libndp: validate the IPv6 hop limit
None of the NDP messages should ever come from a non-local network; as
stated in RFC4861's 6.1.1 (RS), 6.1.2 (RA), 7.1.1 (NS), 7.1.2 (NA),
and 8.1. (redirect):
- The IP Hop Limit field has a value of 255, i.e., the packet
could not possibly have been forwarded by a router.
This fixes CVE-2016-3698.
Reported by: Julien BERNARD <julien.bernard@viagenie.ca>
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
|
uint32_t ndp_msg_opt_prefix_valid_time(struct ndp_msg *msg, int offset)
{
struct nd_opt_prefix_info *pi =
ndp_msg_payload_opts_offset(msg, offset);
return ntohl(pi->nd_opt_pi_valid_time);
}
|
uint32_t ndp_msg_opt_prefix_valid_time(struct ndp_msg *msg, int offset)
{
struct nd_opt_prefix_info *pi =
ndp_msg_payload_opts_offset(msg, offset);
return ntohl(pi->nd_opt_pi_valid_time);
}
|
C
|
libndp
| 0 |
CVE-2016-7916
|
https://www.cvedetails.com/cve/CVE-2016-7916/
|
CWE-362
|
https://github.com/torvalds/linux/commit/8148a73c9901a8794a50f950083c00ccf97d43b3
|
8148a73c9901a8794a50f950083c00ccf97d43b3
|
proc: prevent accessing /proc/<PID>/environ until it's ready
If /proc/<PID>/environ gets read before the envp[] array is fully set up
in create_{aout,elf,elf_fdpic,flat}_tables(), we might end up trying to
read more bytes than are actually written, as env_start will already be
set but env_end will still be zero, making the range calculation
underflow, allowing to read beyond the end of what has been written.
Fix this as it is done for /proc/<PID>/cmdline by testing env_end for
zero. It is, apparently, intentionally set last in create_*_tables().
This bug was found by the PaX size_overflow plugin that detected the
arithmetic underflow of 'this_len = env_end - (env_start + src)' when
env_end is still zero.
The expected consequence is that userland trying to access
/proc/<PID>/environ of a not yet fully set up process may get
inconsistent data as we're in the middle of copying in the environment
variables.
Fixes: https://forums.grsecurity.net/viewtopic.php?f=3&t=4363
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=116461
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Pax Team <pageexec@freemail.hu>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Mateusz Guzik <mguzik@redhat.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
static int map_files_d_revalidate(struct dentry *dentry, unsigned int flags)
{
unsigned long vm_start, vm_end;
bool exact_vma_exists = false;
struct mm_struct *mm = NULL;
struct task_struct *task;
const struct cred *cred;
struct inode *inode;
int status = 0;
if (flags & LOOKUP_RCU)
return -ECHILD;
inode = d_inode(dentry);
task = get_proc_task(inode);
if (!task)
goto out_notask;
mm = mm_access(task, PTRACE_MODE_READ_FSCREDS);
if (IS_ERR_OR_NULL(mm))
goto out;
if (!dname_to_vma_addr(dentry, &vm_start, &vm_end)) {
down_read(&mm->mmap_sem);
exact_vma_exists = !!find_exact_vma(mm, vm_start, vm_end);
up_read(&mm->mmap_sem);
}
mmput(mm);
if (exact_vma_exists) {
if (task_dumpable(task)) {
rcu_read_lock();
cred = __task_cred(task);
inode->i_uid = cred->euid;
inode->i_gid = cred->egid;
rcu_read_unlock();
} else {
inode->i_uid = GLOBAL_ROOT_UID;
inode->i_gid = GLOBAL_ROOT_GID;
}
security_task_to_inode(task, inode);
status = 1;
}
out:
put_task_struct(task);
out_notask:
return status;
}
|
static int map_files_d_revalidate(struct dentry *dentry, unsigned int flags)
{
unsigned long vm_start, vm_end;
bool exact_vma_exists = false;
struct mm_struct *mm = NULL;
struct task_struct *task;
const struct cred *cred;
struct inode *inode;
int status = 0;
if (flags & LOOKUP_RCU)
return -ECHILD;
inode = d_inode(dentry);
task = get_proc_task(inode);
if (!task)
goto out_notask;
mm = mm_access(task, PTRACE_MODE_READ_FSCREDS);
if (IS_ERR_OR_NULL(mm))
goto out;
if (!dname_to_vma_addr(dentry, &vm_start, &vm_end)) {
down_read(&mm->mmap_sem);
exact_vma_exists = !!find_exact_vma(mm, vm_start, vm_end);
up_read(&mm->mmap_sem);
}
mmput(mm);
if (exact_vma_exists) {
if (task_dumpable(task)) {
rcu_read_lock();
cred = __task_cred(task);
inode->i_uid = cred->euid;
inode->i_gid = cred->egid;
rcu_read_unlock();
} else {
inode->i_uid = GLOBAL_ROOT_UID;
inode->i_gid = GLOBAL_ROOT_GID;
}
security_task_to_inode(task, inode);
status = 1;
}
out:
put_task_struct(task);
out_notask:
return status;
}
|
C
|
linux
| 0 |
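The proc entry above guards against an unsigned range calculation that underflows while env_end is still zero. A small userspace sketch of the same arithmetic and the guard (variable names mirror the commit message, but the function itself is hypothetical):

#include <stdio.h>

/* When 'env_end' is still 0 (the target process is not fully set up),
 * the unsigned subtraction env_end - (env_start + src) wraps around to a
 * huge value instead of going negative. Checking env_end first avoids
 * computing the bogus length at all. */
static unsigned long readable_len(unsigned long env_start,
                                  unsigned long env_end,
                                  unsigned long src)
{
    if (env_end == 0 || env_start + src >= env_end)
        return 0;                      /* nothing safely readable yet */
    return env_end - (env_start + src);
}

int main(void)
{
    /* Fully initialized range 4096..8192 with offset 100 -> 3996 bytes. */
    printf("%lu\n", readable_len(4096, 8192, 100));
    /* env_end still zero: without the guard this would wrap to ~2^64. */
    printf("%lu\n", readable_len(4096, 0, 100));
    return 0;
}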
CVE-2018-16435
|
https://www.cvedetails.com/cve/CVE-2018-16435/
|
CWE-190
|
https://github.com/mm2/Little-CMS/commit/768f70ca405cd3159d990e962d54456773bb8cf8
|
768f70ca405cd3159d990e962d54456773bb8cf8
|
Upgrade Visual studio 2017 15.8
- Upgrade to 15.8
- Add check on CGATS memory allocation (thanks to Quang Nguyen for
pointing out this)
|
cmsBool CMSEXPORT cmsIT8SetDataDbl(cmsHANDLE hIT8, const char* cPatch,
const char* cSample,
cmsFloat64Number Val)
{
cmsIT8* it8 = (cmsIT8*) hIT8;
char Buff[256];
_cmsAssert(hIT8 != NULL);
snprintf(Buff, 255, it8->DoubleFormatter, Val);
return cmsIT8SetData(hIT8, cPatch, cSample, Buff);
}
|
cmsBool CMSEXPORT cmsIT8SetDataDbl(cmsHANDLE hIT8, const char* cPatch,
const char* cSample,
cmsFloat64Number Val)
{
cmsIT8* it8 = (cmsIT8*) hIT8;
char Buff[256];
_cmsAssert(hIT8 != NULL);
snprintf(Buff, 255, it8->DoubleFormatter, Val);
return cmsIT8SetData(hIT8, cPatch, cSample, Buff);
}
|
C
|
Little-CMS
| 0 |
CVE-2019-5794
|
https://www.cvedetails.com/cve/CVE-2019-5794/
|
CWE-20
|
https://github.com/chromium/chromium/commit/56b512399a5c2221ba4812f5170f3f8dc352cd74
|
56b512399a5c2221ba4812f5170f3f8dc352cd74
|
Show an error page if a URL redirects to a javascript: URL.
BUG=935175
Change-Id: Id4a9198d5dff823bc3d324b9de9bff2ee86dc499
Reviewed-on: https://chromium-review.googlesource.com/c/1488152
Commit-Queue: Charlie Reis <creis@chromium.org>
Reviewed-by: Arthur Sonzogni <arthursonzogni@chromium.org>
Cr-Commit-Position: refs/heads/master@{#635848}
|
void NavigationRequest::CommitErrorPage(
const base::Optional<std::string>& error_page_content) {
UpdateCommitNavigationParamsHistory();
frame_tree_node_->TransferNavigationRequestOwnership(render_frame_host_);
commit_params_.origin_to_commit.reset();
if (IsPerNavigationMojoInterfaceEnabled() && request_navigation_client_ &&
request_navigation_client_.is_bound()) {
if (associated_site_instance_id_ ==
render_frame_host_->GetSiteInstance()->GetId()) {
commit_navigation_client_ = std::move(request_navigation_client_);
} else {
IgnoreInterfaceDisconnection();
}
associated_site_instance_id_.reset();
}
navigation_handle_->ReadyToCommitNavigation(true);
render_frame_host_->FailedNavigation(this, common_params_, commit_params_,
has_stale_copy_in_cache_, net_error_,
error_page_content);
}
|
void NavigationRequest::CommitErrorPage(
const base::Optional<std::string>& error_page_content) {
UpdateCommitNavigationParamsHistory();
frame_tree_node_->TransferNavigationRequestOwnership(render_frame_host_);
commit_params_.origin_to_commit.reset();
if (IsPerNavigationMojoInterfaceEnabled() && request_navigation_client_ &&
request_navigation_client_.is_bound()) {
if (associated_site_instance_id_ ==
render_frame_host_->GetSiteInstance()->GetId()) {
commit_navigation_client_ = std::move(request_navigation_client_);
} else {
IgnoreInterfaceDisconnection();
}
associated_site_instance_id_.reset();
}
navigation_handle_->ReadyToCommitNavigation(true);
render_frame_host_->FailedNavigation(this, common_params_, commit_params_,
has_stale_copy_in_cache_, net_error_,
error_page_content);
}
|
C
|
Chrome
| 0 |
CVE-2019-5827
|
https://www.cvedetails.com/cve/CVE-2019-5827/
|
CWE-190
|
https://github.com/chromium/chromium/commit/517ac71c9ee27f856f9becde8abea7d1604af9d4
|
517ac71c9ee27f856f9becde8abea7d1604af9d4
|
sqlite: backport bugfixes for dbfuzz2
Bug: 952406
Change-Id: Icbec429742048d6674828726c96d8e265c41b595
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1568152
Reviewed-by: Chris Mumford <cmumford@google.com>
Commit-Queue: Darwin Huang <huangdarwin@chromium.org>
Cr-Commit-Position: refs/heads/master@{#651030}
|
static const sqlite3_io_methods *autolockIoFinderImpl(
const char *filePath, /* name of the database file */
unixFile *pNew /* open file object for the database file */
){
static const struct Mapping {
const char *zFilesystem; /* Filesystem type name */
const sqlite3_io_methods *pMethods; /* Appropriate locking method */
} aMap[] = {
{ "hfs", &posixIoMethods },
{ "ufs", &posixIoMethods },
{ "afpfs", &afpIoMethods },
{ "smbfs", &afpIoMethods },
{ "webdav", &nolockIoMethods },
{ 0, 0 }
};
int i;
struct statfs fsInfo;
struct flock lockInfo;
if( !filePath ){
/* If filePath==NULL that means we are dealing with a transient file
** that does not need to be locked. */
return &nolockIoMethods;
}
if( statfs(filePath, &fsInfo) != -1 ){
if( fsInfo.f_flags & MNT_RDONLY ){
return &nolockIoMethods;
}
for(i=0; aMap[i].zFilesystem; i++){
if( strcmp(fsInfo.f_fstypename, aMap[i].zFilesystem)==0 ){
return aMap[i].pMethods;
}
}
}
/* Default case. Handles, amongst others, "nfs".
** Test byte-range lock using fcntl(). If the call succeeds,
** assume that the file-system supports POSIX style locks.
*/
lockInfo.l_len = 1;
lockInfo.l_start = 0;
lockInfo.l_whence = SEEK_SET;
lockInfo.l_type = F_RDLCK;
if( osFcntl(pNew->h, F_GETLK, &lockInfo)!=-1 ) {
if( strcmp(fsInfo.f_fstypename, "nfs")==0 ){
return &nfsIoMethods;
} else {
return &posixIoMethods;
}
}else{
return &dotlockIoMethods;
}
}
|
static const sqlite3_io_methods *autolockIoFinderImpl(
const char *filePath, /* name of the database file */
unixFile *pNew /* open file object for the database file */
){
static const struct Mapping {
const char *zFilesystem; /* Filesystem type name */
const sqlite3_io_methods *pMethods; /* Appropriate locking method */
} aMap[] = {
{ "hfs", &posixIoMethods },
{ "ufs", &posixIoMethods },
{ "afpfs", &afpIoMethods },
{ "smbfs", &afpIoMethods },
{ "webdav", &nolockIoMethods },
{ 0, 0 }
};
int i;
struct statfs fsInfo;
struct flock lockInfo;
if( !filePath ){
/* If filePath==NULL that means we are dealing with a transient file
** that does not need to be locked. */
return &nolockIoMethods;
}
if( statfs(filePath, &fsInfo) != -1 ){
if( fsInfo.f_flags & MNT_RDONLY ){
return &nolockIoMethods;
}
for(i=0; aMap[i].zFilesystem; i++){
if( strcmp(fsInfo.f_fstypename, aMap[i].zFilesystem)==0 ){
return aMap[i].pMethods;
}
}
}
/* Default case. Handles, amongst others, "nfs".
** Test byte-range lock using fcntl(). If the call succeeds,
** assume that the file-system supports POSIX style locks.
*/
lockInfo.l_len = 1;
lockInfo.l_start = 0;
lockInfo.l_whence = SEEK_SET;
lockInfo.l_type = F_RDLCK;
if( osFcntl(pNew->h, F_GETLK, &lockInfo)!=-1 ) {
if( strcmp(fsInfo.f_fstypename, "nfs")==0 ){
return &nfsIoMethods;
} else {
return &posixIoMethods;
}
}else{
return &dotlockIoMethods;
}
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/8353baf8d1504dbdd4ad7584ff2466de657521cd
|
8353baf8d1504dbdd4ad7584ff2466de657521cd
|
Remove WebFrame::canHaveSecureChild
To simplify the public API, ServiceWorkerNetworkProvider can do the
parent walk itself.
Follow-up to https://crrev.com/ad1850962644e19.
BUG=607543
Review-Url: https://codereview.chromium.org/2082493002
Cr-Commit-Position: refs/heads/master@{#400896}
|
bool Frame::canHaveSecureChild() const
|
bool Frame::canHaveSecureChild() const
{
for (const Frame* parent = this; parent; parent = parent->tree().parent()) {
if (!parent->securityContext()->getSecurityOrigin()->isPotentiallyTrustworthy())
return false;
}
return true;
}
|
C
|
Chrome
| 1 |
CVE-2018-6074
|
https://www.cvedetails.com/cve/CVE-2018-6074/
|
CWE-20
|
https://github.com/chromium/chromium/commit/c59ad14fc61393a50b2ca3e89c7ecaba7028c4c4
|
c59ad14fc61393a50b2ca3e89c7ecaba7028c4c4
|
DevTools: allow styling the page number element when printing over the protocol.
Bug: none
Change-Id: I13e6afbd86a7c6bcdedbf0645183194b9de7cfb4
Reviewed-on: https://chromium-review.googlesource.com/809759
Commit-Queue: Pavel Feldman <pfeldman@chromium.org>
Reviewed-by: Lei Zhang <thestig@chromium.org>
Reviewed-by: Tom Sepez <tsepez@chromium.org>
Reviewed-by: Jianzhou Feng <jzfeng@chromium.org>
Cr-Commit-Position: refs/heads/master@{#523966}
|
void OnGetDefaultPrintSettings(IPC::Message* reply_msg) {
manager->OnGetDefaultPrintSettings(reply_msg);
}
|
void OnGetDefaultPrintSettings(IPC::Message* reply_msg) {
manager->OnGetDefaultPrintSettings(reply_msg);
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/6a13a6c2fbae0b3269743e6a141fdfe0d9ec9793
|
6a13a6c2fbae0b3269743e6a141fdfe0d9ec9793
|
Don't delete the current NavigationEntry when leaving an interstitial page.
BUG=107182
TEST=See bug
Review URL: http://codereview.chromium.org/8976014
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@115189 0039d316-1c4b-4281-b951-d872f2087c98
|
void SafeBrowsingBlockingPage::PopulatePhishingStringDictionary(
DictionaryValue* strings) {
std::string proceed_link = base::StringPrintf(
kPLinkHtml,
l10n_util::GetStringUTF8(
IDS_SAFE_BROWSING_PHISHING_PROCEED_LINK).c_str());
string16 description3 = l10n_util::GetStringFUTF16(
IDS_SAFE_BROWSING_PHISHING_DESCRIPTION3,
UTF8ToUTF16(proceed_link));
PopulateStringDictionary(
strings,
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_TITLE),
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_HEADLINE),
l10n_util::GetStringFUTF16(IDS_SAFE_BROWSING_PHISHING_DESCRIPTION1,
UTF8ToUTF16(url().host())),
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_DESCRIPTION2),
description3);
strings->SetString("back_button",
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_BACK_BUTTON));
strings->SetString("report_error",
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_REPORT_ERROR));
strings->SetString("textdirection", base::i18n::IsRTL() ? "rtl" : "ltr");
}
|
void SafeBrowsingBlockingPage::PopulatePhishingStringDictionary(
DictionaryValue* strings) {
std::string proceed_link = base::StringPrintf(
kPLinkHtml,
l10n_util::GetStringUTF8(
IDS_SAFE_BROWSING_PHISHING_PROCEED_LINK).c_str());
string16 description3 = l10n_util::GetStringFUTF16(
IDS_SAFE_BROWSING_PHISHING_DESCRIPTION3,
UTF8ToUTF16(proceed_link));
PopulateStringDictionary(
strings,
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_TITLE),
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_HEADLINE),
l10n_util::GetStringFUTF16(IDS_SAFE_BROWSING_PHISHING_DESCRIPTION1,
UTF8ToUTF16(url().host())),
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_DESCRIPTION2),
description3);
strings->SetString("back_button",
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_BACK_BUTTON));
strings->SetString("report_error",
l10n_util::GetStringUTF16(IDS_SAFE_BROWSING_PHISHING_REPORT_ERROR));
strings->SetString("textdirection", base::i18n::IsRTL() ? "rtl" : "ltr");
}
|
C
|
Chrome
| 0 |
CVE-2018-13006
|
https://www.cvedetails.com/cve/CVE-2018-13006/
|
CWE-125
|
https://github.com/gpac/gpac/commit/bceb03fd2be95097a7b409ea59914f332fb6bc86
|
bceb03fd2be95097a7b409ea59914f332fb6bc86
|
fixed 2 possible heap overflows (inc. #1088)
|
GF_Err elng_Write(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
GF_ExtendedLanguageBox *ptr = (GF_ExtendedLanguageBox *)s;
e = gf_isom_full_box_write(s, bs);
if (e) return e;
if (ptr->extended_language) {
gf_bs_write_data(bs, ptr->extended_language, (u32)(strlen(ptr->extended_language)+1));
}
return GF_OK;
}
|
GF_Err elng_Write(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
GF_ExtendedLanguageBox *ptr = (GF_ExtendedLanguageBox *)s;
e = gf_isom_full_box_write(s, bs);
if (e) return e;
if (ptr->extended_language) {
gf_bs_write_data(bs, ptr->extended_language, (u32)(strlen(ptr->extended_language)+1));
}
return GF_OK;
}
|
C
|
gpac
| 0 |
CVE-2015-6779
|
https://www.cvedetails.com/cve/CVE-2015-6779/
|
CWE-264
|
https://github.com/chromium/chromium/commit/1eefa26e1795192c5a347a1e1e7a99e88c47f9c4
|
1eefa26e1795192c5a347a1e1e7a99e88c47f9c4
|
This patch implements a mechanism for more granular link URL permissions (filtering on scheme/host). This fixes the bug that allowed PDFs to have working links to any "chrome://" URLs.
BUG=528505,226927
Review URL: https://codereview.chromium.org/1362433002
Cr-Commit-Position: refs/heads/master@{#351705}
|
SecurityState()
: enabled_bindings_(0),
can_read_raw_cookies_(false),
can_send_midi_sysex_(false) { }
|
SecurityState()
: enabled_bindings_(0),
can_read_raw_cookies_(false),
can_send_midi_sysex_(false) { }
|
C
|
Chrome
| 0 |
CVE-2013-2858
|
https://www.cvedetails.com/cve/CVE-2013-2858/
|
CWE-416
|
https://github.com/chromium/chromium/commit/828eab2216a765dea92575c290421c115b8ad028
|
828eab2216a765dea92575c290421c115b8ad028
|
Added daily UMA for non-data-reduction-proxy data usage when the proxy is enabled.
BUG=325325
Review URL: https://codereview.chromium.org/106113002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@239897 0039d316-1c4b-4281-b951-d872f2087c98
|
void IOThread::UpdateDnsClientEnabled() {
globals()->host_resolver->SetDnsClientEnabled(*dns_client_enabled_);
}
|
void IOThread::UpdateDnsClientEnabled() {
globals()->host_resolver->SetDnsClientEnabled(*dns_client_enabled_);
}
|
C
|
Chrome
| 0 |
CVE-2018-6138
|
https://www.cvedetails.com/cve/CVE-2018-6138/
|
CWE-20
|
https://github.com/chromium/chromium/commit/0aca6bc05a263ea9eafee515fc6ba14da94c1964
|
0aca6bc05a263ea9eafee515fc6ba14da94c1964
|
[Extensions] Restrict tabs.captureVisibleTab()
Modify the permissions for tabs.captureVisibleTab(). Instead of just
checking for <all_urls> and assuming it's safe, do the following:
- If the page is a "normal" web page (e.g., http/https), allow the
capture if the extension has activeTab granted or <all_urls>.
- If the page is a file page (file:///), allow the capture if the
extension has file access *and* either of the <all_urls> or
activeTab permissions.
- If the page is a chrome:// page, allow the capture only if the
extension has activeTab granted.
Bug: 810220
Change-Id: I1e2f71281e2f331d641ba0e435df10d66d721304
Reviewed-on: https://chromium-review.googlesource.com/981195
Commit-Queue: Devlin <rdevlin.cronin@chromium.org>
Reviewed-by: Karan Bhatia <karandeepb@chromium.org>
Cr-Commit-Position: refs/heads/master@{#548891}
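The three rules in the message translate directly into a small decision function. The sketch below is a simplified model of that logic, not the extension system's real code; the enum, parameter names and helper are invented for illustration, on the assumption that the caller has already classified the page and looked up the extension's grants.

#include <stdbool.h>

enum page_kind { PAGE_WEB, PAGE_FILE, PAGE_CHROME };

/* Mirrors the policy listed in the commit message. */
static bool may_capture_visible_tab(enum page_kind kind,
                                    bool has_all_urls,
                                    bool active_tab_granted,
                                    bool has_file_access)
{
    switch (kind) {
    case PAGE_WEB:    /* http/https */
        return has_all_urls || active_tab_granted;
    case PAGE_FILE:   /* file:/// needs file access on top of a host grant */
        return has_file_access && (has_all_urls || active_tab_granted);
    case PAGE_CHROME: /* chrome:// pages require an explicit activeTab grant */
        return active_tab_granted;
    }
    return false;
}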
|
ScriptExecutor* ExecuteCodeInTabFunction::GetScriptExecutor() {
Browser* browser = NULL;
content::WebContents* contents = NULL;
bool success =
GetTabById(execute_tab_id_, browser_context(), include_incognito(),
&browser, nullptr, &contents, nullptr, &error_) &&
contents && browser;
if (!success)
return NULL;
return TabHelper::FromWebContents(contents)->script_executor();
}
|
ScriptExecutor* ExecuteCodeInTabFunction::GetScriptExecutor() {
Browser* browser = NULL;
content::WebContents* contents = NULL;
bool success =
GetTabById(execute_tab_id_, browser_context(), include_incognito(),
&browser, nullptr, &contents, nullptr, &error_) &&
contents && browser;
if (!success)
return NULL;
return TabHelper::FromWebContents(contents)->script_executor();
}
|
C
|
Chrome
| 0 |
CVE-2013-6626
|
https://www.cvedetails.com/cve/CVE-2013-6626/
| null |
https://github.com/chromium/chromium/commit/90fb08ed0146c9beacfd4dde98a20fc45419fff3
|
90fb08ed0146c9beacfd4dde98a20fc45419fff3
|
Cancel JavaScript dialogs when an interstitial appears.
BUG=295695
TEST=See bug for repro steps.
Review URL: https://chromiumcodereview.appspot.com/24360011
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@225026 0039d316-1c4b-4281-b951-d872f2087c98
|
BrowserPluginEmbedder* WebContentsImpl::GetBrowserPluginEmbedder() const {
return browser_plugin_embedder_.get();
}
|
BrowserPluginEmbedder* WebContentsImpl::GetBrowserPluginEmbedder() const {
return browser_plugin_embedder_.get();
}
|
C
|
Chrome
| 0 |
CVE-2015-3417
|
https://www.cvedetails.com/cve/CVE-2015-3417/
| null |
https://github.com/FFmpeg/FFmpeg/commit/e8714f6f93d1a32f4e4655209960afcf4c185214
|
e8714f6f93d1a32f4e4655209960afcf4c185214
|
avcodec/h264: Clear delayed_pic on deallocation
Fixes use of freed memory
Fixes: case5_av_frame_copy_props.mp4
Found-by: Michal Zalewski <lcamtuf@coredump.cx>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
|
void ff_h264_flush_change(H264Context *h)
{
int i, j;
h->outputed_poc = h->next_outputed_poc = INT_MIN;
h->prev_interlaced_frame = 1;
idr(h);
h->prev_frame_num = -1;
if (h->cur_pic_ptr) {
h->cur_pic_ptr->reference = 0;
for (j=i=0; h->delayed_pic[i]; i++)
if (h->delayed_pic[i] != h->cur_pic_ptr)
h->delayed_pic[j++] = h->delayed_pic[i];
h->delayed_pic[j] = NULL;
}
h->first_field = 0;
memset(h->ref_list[0], 0, sizeof(h->ref_list[0]));
memset(h->ref_list[1], 0, sizeof(h->ref_list[1]));
memset(h->default_ref_list[0], 0, sizeof(h->default_ref_list[0]));
memset(h->default_ref_list[1], 0, sizeof(h->default_ref_list[1]));
ff_h264_reset_sei(h);
h->recovery_frame = -1;
h->frame_recovered = 0;
h->list_count = 0;
h->current_slice = 0;
h->mmco_reset = 1;
}
|
void ff_h264_flush_change(H264Context *h)
{
int i, j;
h->outputed_poc = h->next_outputed_poc = INT_MIN;
h->prev_interlaced_frame = 1;
idr(h);
h->prev_frame_num = -1;
if (h->cur_pic_ptr) {
h->cur_pic_ptr->reference = 0;
for (j=i=0; h->delayed_pic[i]; i++)
if (h->delayed_pic[i] != h->cur_pic_ptr)
h->delayed_pic[j++] = h->delayed_pic[i];
h->delayed_pic[j] = NULL;
}
h->first_field = 0;
memset(h->ref_list[0], 0, sizeof(h->ref_list[0]));
memset(h->ref_list[1], 0, sizeof(h->ref_list[1]));
memset(h->default_ref_list[0], 0, sizeof(h->default_ref_list[0]));
memset(h->default_ref_list[1], 0, sizeof(h->default_ref_list[1]));
ff_h264_reset_sei(h);
h->recovery_frame = -1;
h->frame_recovered = 0;
h->list_count = 0;
h->current_slice = 0;
h->mmco_reset = 1;
}
|
C
|
FFmpeg
| 0 |
CVE-2018-13006
|
https://www.cvedetails.com/cve/CVE-2018-13006/
|
CWE-125
|
https://github.com/gpac/gpac/commit/bceb03fd2be95097a7b409ea59914f332fb6bc86
|
bceb03fd2be95097a7b409ea59914f332fb6bc86
|
fixed 2 possible heap overflows (inc. #1088)
|
GF_Err lsr1_Read(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
GF_LASeRSampleEntryBox *ptr = (GF_LASeRSampleEntryBox*)s;
e = gf_isom_base_sample_entry_read((GF_SampleEntryBox *)ptr, bs);
if (e) return e;
ISOM_DECREASE_SIZE(ptr, 8);
return gf_isom_box_array_read(s, bs, lsr1_AddBox);
}
|
GF_Err lsr1_Read(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
GF_LASeRSampleEntryBox *ptr = (GF_LASeRSampleEntryBox*)s;
e = gf_isom_base_sample_entry_read((GF_SampleEntryBox *)ptr, bs);
if (e) return e;
ISOM_DECREASE_SIZE(ptr, 8);
return gf_isom_box_array_read(s, bs, lsr1_AddBox);
}
|
C
|
gpac
| 0 |
CVE-2015-1335
|
https://www.cvedetails.com/cve/CVE-2015-1335/
|
CWE-59
|
https://github.com/lxc/lxc/commit/592fd47a6245508b79fe6ac819fe6d3b2c1289be
|
592fd47a6245508b79fe6ac819fe6d3b2c1289be
|
CVE-2015-1335: Protect container mounts against symlinks
When a container starts up, lxc sets up the container's inital fstree
by doing a bunch of mounting, guided by the container configuration
file. The container config is owned by the admin or user on the host,
so we do not try to guard against bad entries. However, since the
mount target is in the container, it's possible that the container admin
could divert the mount with symbolic links. This could bypass proper
container startup (i.e. confinement of a root-owned container by the
restrictive apparmor policy, by diverting the required write to
/proc/self/attr/current), or bypass the (path-based) apparmor policy
by diverting, say, /proc to /mnt in the container.
To prevent this,
1. do not allow mounts to paths containing symbolic links
2. do not allow bind mounts from relative paths containing symbolic
links.
Details:
Define safe_mount which ensures that the container has not inserted any
symbolic links into any mount targets for mounts to be done during
container setup.
The host's mount path may contain symbolic links. As it is under the
control of the administrator, that's ok. So safe_mount begins the check
for symbolic links after the rootfs->mount, by opening that directory.
It opens each directory along the path using openat() relative to the
parent directory using O_NOFOLLOW. When the target is reached, it
mounts onto /proc/self/fd/<targetfd>.
Use safe_mount() in mount_entry(), when mounting container proc,
and when needed. In particular, safe_mount() need not be used in
any case where:
1. the mount is done in the container's namespace
2. the mount is for the container's rootfs
3. the mount is relative to a tmpfs or proc/sysfs which we have
just safe_mount()ed ourselves
Since we were using proc/net as a temporary placeholder for /proc/sys/net
during container startup, and proc/net is a symbolic link, use proc/tty
instead.
Update the lxc.container.conf manpage with details about the new
restrictions.
Finally, add a testcase to test some symbolic link possibilities.
Reported-by: Roman Fiedler
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
Acked-by: Stéphane Graber <stgraber@ubuntu.com>
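The safe_mount() behaviour described above can be condensed into a short openat()-based walk: open each path component relative to its parent with O_NOFOLLOW so a symlink aborts the traversal, then mount onto /proc/self/fd/<fd> so the resolved target cannot be swapped afterwards. The sketch below is an approximation of that idea, not the actual lxc code; error reporting and the real function's extra arguments are omitted.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>
#include <unistd.h>

static int mount_nofollow(const char *src, int rootfd, char *relpath,
                          const char *fstype, unsigned long flags)
{
    char proc_target[64];
    char *comp, *save = NULL;
    int fd = dup(rootfd);
    int nextfd;

    if (fd < 0)
        return -1;
    for (comp = strtok_r(relpath, "/", &save); comp != NULL;
         comp = strtok_r(NULL, "/", &save)) {
        /* O_NOFOLLOW makes the open fail if this component is a symlink,
         * which is exactly the case being rejected. */
        nextfd = openat(fd, comp, O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
        close(fd);
        if (nextfd < 0)
            return -1;
        fd = nextfd;
    }
    /* Mount through the held descriptor so the target cannot be replaced
     * by a symlink between resolution and mount(2). */
    snprintf(proc_target, sizeof(proc_target), "/proc/self/fd/%d", fd);
    if (mount(src, proc_target, fstype, flags, NULL) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}

The key design point is that the final target is named through an already-opened descriptor, so nothing the container does to the path after the walk can redirect the mount.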
|
static bool cgm_unfreeze(void *hdata)
{
struct cgm_data *d = hdata;
bool ret = true;
if (!d || !d->cgroup_path)
return false;
if (!cgm_dbus_connect()) {
ERROR("Error connecting to cgroup manager");
return false;
}
if (cgmanager_set_value_sync(NULL, cgroup_manager, "freezer", d->cgroup_path,
"freezer.state", "THAWED") != 0) {
NihError *nerr;
nerr = nih_error_get();
ERROR("call to cgmanager_set_value_sync failed: %s", nerr->message);
nih_free(nerr);
ERROR("Error unfreezing %s", d->cgroup_path);
ret = false;
}
cgm_dbus_disconnect();
return ret;
}
|
static bool cgm_unfreeze(void *hdata)
{
struct cgm_data *d = hdata;
bool ret = true;
if (!d || !d->cgroup_path)
return false;
if (!cgm_dbus_connect()) {
ERROR("Error connecting to cgroup manager");
return false;
}
if (cgmanager_set_value_sync(NULL, cgroup_manager, "freezer", d->cgroup_path,
"freezer.state", "THAWED") != 0) {
NihError *nerr;
nerr = nih_error_get();
ERROR("call to cgmanager_set_value_sync failed: %s", nerr->message);
nih_free(nerr);
ERROR("Error unfreezing %s", d->cgroup_path);
ret = false;
}
cgm_dbus_disconnect();
return ret;
}
|
C
|
lxc
| 0 |
CVE-2011-3045
|
https://www.cvedetails.com/cve/CVE-2011-3045/
|
CWE-189
|
https://github.com/chromium/chromium/commit/4cf106cdb83dd6b35d3b26d06cc67d1d2d99041e
|
4cf106cdb83dd6b35d3b26d06cc67d1d2d99041e
|
Pull follow-up tweak from upstream.
BUG=116162
Review URL: http://codereview.chromium.org/9546033
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@125311 0039d316-1c4b-4281-b951-d872f2087c98
|
png_read_finish_row(png_structp png_ptr)
{
#ifdef PNG_READ_INTERLACING_SUPPORTED
/* Arrays to facilitate easy interlacing - use pass (0 - 6) as index */
/* Start of interlace block */
PNG_CONST int png_pass_start[7] = {0, 4, 0, 2, 0, 1, 0};
/* Offset to next interlace block */
PNG_CONST int png_pass_inc[7] = {8, 8, 4, 4, 2, 2, 1};
/* Start of interlace block in the y direction */
PNG_CONST int png_pass_ystart[7] = {0, 0, 4, 0, 2, 0, 1};
/* Offset to next interlace block in the y direction */
PNG_CONST int png_pass_yinc[7] = {8, 8, 8, 4, 4, 2, 2};
#endif /* PNG_READ_INTERLACING_SUPPORTED */
png_debug(1, "in png_read_finish_row");
png_ptr->row_number++;
if (png_ptr->row_number < png_ptr->num_rows)
return;
#ifdef PNG_READ_INTERLACING_SUPPORTED
if (png_ptr->interlaced)
{
png_ptr->row_number = 0;
png_memset_check(png_ptr, png_ptr->prev_row, 0,
png_ptr->rowbytes + 1);
do
{
png_ptr->pass++;
if (png_ptr->pass >= 7)
break;
png_ptr->iwidth = (png_ptr->width +
png_pass_inc[png_ptr->pass] - 1 -
png_pass_start[png_ptr->pass]) /
png_pass_inc[png_ptr->pass];
if (!(png_ptr->transformations & PNG_INTERLACE))
{
png_ptr->num_rows = (png_ptr->height +
png_pass_yinc[png_ptr->pass] - 1 -
png_pass_ystart[png_ptr->pass]) /
png_pass_yinc[png_ptr->pass];
if (!(png_ptr->num_rows))
continue;
}
else /* if (png_ptr->transformations & PNG_INTERLACE) */
break;
} while (png_ptr->iwidth == 0);
if (png_ptr->pass < 7)
return;
}
#endif /* PNG_READ_INTERLACING_SUPPORTED */
if (!(png_ptr->flags & PNG_FLAG_ZLIB_FINISHED))
{
#ifdef PNG_USE_LOCAL_ARRAYS
PNG_CONST PNG_IDAT;
#endif
char extra;
int ret;
png_ptr->zstream.next_out = (Byte *)&extra;
png_ptr->zstream.avail_out = (uInt)1;
for (;;)
{
if (!(png_ptr->zstream.avail_in))
{
while (!png_ptr->idat_size)
{
png_byte chunk_length[4];
png_crc_finish(png_ptr, 0);
png_read_data(png_ptr, chunk_length, 4);
png_ptr->idat_size = png_get_uint_31(png_ptr, chunk_length);
png_reset_crc(png_ptr);
png_crc_read(png_ptr, png_ptr->chunk_name, 4);
if (png_memcmp(png_ptr->chunk_name, png_IDAT, 4))
png_error(png_ptr, "Not enough image data");
}
png_ptr->zstream.avail_in = (uInt)png_ptr->zbuf_size;
png_ptr->zstream.next_in = png_ptr->zbuf;
if (png_ptr->zbuf_size > png_ptr->idat_size)
png_ptr->zstream.avail_in = (uInt)png_ptr->idat_size;
png_crc_read(png_ptr, png_ptr->zbuf, png_ptr->zstream.avail_in);
png_ptr->idat_size -= png_ptr->zstream.avail_in;
}
ret = inflate(&png_ptr->zstream, Z_PARTIAL_FLUSH);
if (ret == Z_STREAM_END)
{
if (!(png_ptr->zstream.avail_out) || png_ptr->zstream.avail_in ||
png_ptr->idat_size)
png_warning(png_ptr, "Extra compressed data.");
png_ptr->mode |= PNG_AFTER_IDAT;
png_ptr->flags |= PNG_FLAG_ZLIB_FINISHED;
break;
}
if (ret != Z_OK)
png_error(png_ptr, png_ptr->zstream.msg ? png_ptr->zstream.msg :
"Decompression Error");
if (!(png_ptr->zstream.avail_out))
{
png_warning(png_ptr, "Extra compressed data.");
png_ptr->mode |= PNG_AFTER_IDAT;
png_ptr->flags |= PNG_FLAG_ZLIB_FINISHED;
break;
}
}
png_ptr->zstream.avail_out = 0;
}
if (png_ptr->idat_size || png_ptr->zstream.avail_in)
png_warning(png_ptr, "Extra compression data.");
inflateReset(&png_ptr->zstream);
png_ptr->mode |= PNG_AFTER_IDAT;
}
|
png_read_finish_row(png_structp png_ptr)
{
#ifdef PNG_READ_INTERLACING_SUPPORTED
/* Arrays to facilitate easy interlacing - use pass (0 - 6) as index */
/* Start of interlace block */
PNG_CONST int png_pass_start[7] = {0, 4, 0, 2, 0, 1, 0};
/* Offset to next interlace block */
PNG_CONST int png_pass_inc[7] = {8, 8, 4, 4, 2, 2, 1};
/* Start of interlace block in the y direction */
PNG_CONST int png_pass_ystart[7] = {0, 0, 4, 0, 2, 0, 1};
/* Offset to next interlace block in the y direction */
PNG_CONST int png_pass_yinc[7] = {8, 8, 8, 4, 4, 2, 2};
#endif /* PNG_READ_INTERLACING_SUPPORTED */
png_debug(1, "in png_read_finish_row");
png_ptr->row_number++;
if (png_ptr->row_number < png_ptr->num_rows)
return;
#ifdef PNG_READ_INTERLACING_SUPPORTED
if (png_ptr->interlaced)
{
png_ptr->row_number = 0;
png_memset_check(png_ptr, png_ptr->prev_row, 0,
png_ptr->rowbytes + 1);
do
{
png_ptr->pass++;
if (png_ptr->pass >= 7)
break;
png_ptr->iwidth = (png_ptr->width +
png_pass_inc[png_ptr->pass] - 1 -
png_pass_start[png_ptr->pass]) /
png_pass_inc[png_ptr->pass];
if (!(png_ptr->transformations & PNG_INTERLACE))
{
png_ptr->num_rows = (png_ptr->height +
png_pass_yinc[png_ptr->pass] - 1 -
png_pass_ystart[png_ptr->pass]) /
png_pass_yinc[png_ptr->pass];
if (!(png_ptr->num_rows))
continue;
}
else /* if (png_ptr->transformations & PNG_INTERLACE) */
break;
} while (png_ptr->iwidth == 0);
if (png_ptr->pass < 7)
return;
}
#endif /* PNG_READ_INTERLACING_SUPPORTED */
if (!(png_ptr->flags & PNG_FLAG_ZLIB_FINISHED))
{
#ifdef PNG_USE_LOCAL_ARRAYS
PNG_CONST PNG_IDAT;
#endif
char extra;
int ret;
png_ptr->zstream.next_out = (Byte *)&extra;
png_ptr->zstream.avail_out = (uInt)1;
for (;;)
{
if (!(png_ptr->zstream.avail_in))
{
while (!png_ptr->idat_size)
{
png_byte chunk_length[4];
png_crc_finish(png_ptr, 0);
png_read_data(png_ptr, chunk_length, 4);
png_ptr->idat_size = png_get_uint_31(png_ptr, chunk_length);
png_reset_crc(png_ptr);
png_crc_read(png_ptr, png_ptr->chunk_name, 4);
if (png_memcmp(png_ptr->chunk_name, png_IDAT, 4))
png_error(png_ptr, "Not enough image data");
}
png_ptr->zstream.avail_in = (uInt)png_ptr->zbuf_size;
png_ptr->zstream.next_in = png_ptr->zbuf;
if (png_ptr->zbuf_size > png_ptr->idat_size)
png_ptr->zstream.avail_in = (uInt)png_ptr->idat_size;
png_crc_read(png_ptr, png_ptr->zbuf, png_ptr->zstream.avail_in);
png_ptr->idat_size -= png_ptr->zstream.avail_in;
}
ret = inflate(&png_ptr->zstream, Z_PARTIAL_FLUSH);
if (ret == Z_STREAM_END)
{
if (!(png_ptr->zstream.avail_out) || png_ptr->zstream.avail_in ||
png_ptr->idat_size)
png_warning(png_ptr, "Extra compressed data.");
png_ptr->mode |= PNG_AFTER_IDAT;
png_ptr->flags |= PNG_FLAG_ZLIB_FINISHED;
break;
}
if (ret != Z_OK)
png_error(png_ptr, png_ptr->zstream.msg ? png_ptr->zstream.msg :
"Decompression Error");
if (!(png_ptr->zstream.avail_out))
{
png_warning(png_ptr, "Extra compressed data.");
png_ptr->mode |= PNG_AFTER_IDAT;
png_ptr->flags |= PNG_FLAG_ZLIB_FINISHED;
break;
}
}
png_ptr->zstream.avail_out = 0;
}
if (png_ptr->idat_size || png_ptr->zstream.avail_in)
png_warning(png_ptr, "Extra compression data.");
inflateReset(&png_ptr->zstream);
png_ptr->mode |= PNG_AFTER_IDAT;
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/7480de4954fe0ea90dd9e49c4f583f6ea321a55b
|
7480de4954fe0ea90dd9e49c4f583f6ea321a55b
|
GTK: allow bookmark manager to MOVE as well as COPY.
BUG=41601
TEST=manual
Review URL: http://codereview.chromium.org/1610035
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@44955 0039d316-1c4b-4281-b951-d872f2087c98
|
void CustomDrag::OnDragBegin(GtkWidget* widget, GdkDragContext* drag_context) {
if (pixbuf_)
gtk_drag_set_icon_pixbuf(drag_context, pixbuf_, 0, 0);
}
|
void CustomDrag::OnDragBegin(GtkWidget* widget, GdkDragContext* drag_context) {
if (pixbuf_)
gtk_drag_set_icon_pixbuf(drag_context, pixbuf_, 0, 0);
}
|
C
|
Chrome
| 0 |
CVE-2011-2918
|
https://www.cvedetails.com/cve/CVE-2011-2918/
|
CWE-399
|
https://github.com/torvalds/linux/commit/a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
static void perf_remove_from_context(struct perf_event *event)
{
struct perf_event_context *ctx = event->ctx;
struct task_struct *task = ctx->task;
lockdep_assert_held(&ctx->mutex);
if (!task) {
/*
* Per cpu events are removed via an smp call and
* the removal is always successful.
*/
cpu_function_call(event->cpu, __perf_remove_from_context, event);
return;
}
retry:
if (!task_function_call(task, __perf_remove_from_context, event))
return;
raw_spin_lock_irq(&ctx->lock);
/*
* If we failed to find a running task, but find the context active now
* that we've acquired the ctx->lock, retry.
*/
if (ctx->is_active) {
raw_spin_unlock_irq(&ctx->lock);
goto retry;
}
/*
* Since the task isn't running, its safe to remove the event, us
* holding the ctx->lock ensures the task won't get scheduled in.
*/
list_del_event(event, ctx);
raw_spin_unlock_irq(&ctx->lock);
}
|
static void perf_remove_from_context(struct perf_event *event)
{
struct perf_event_context *ctx = event->ctx;
struct task_struct *task = ctx->task;
lockdep_assert_held(&ctx->mutex);
if (!task) {
/*
* Per cpu events are removed via an smp call and
* the removal is always successful.
*/
cpu_function_call(event->cpu, __perf_remove_from_context, event);
return;
}
retry:
if (!task_function_call(task, __perf_remove_from_context, event))
return;
raw_spin_lock_irq(&ctx->lock);
/*
* If we failed to find a running task, but find the context active now
* that we've acquired the ctx->lock, retry.
*/
if (ctx->is_active) {
raw_spin_unlock_irq(&ctx->lock);
goto retry;
}
/*
* Since the task isn't running, its safe to remove the event, us
* holding the ctx->lock ensures the task won't get scheduled in.
*/
list_del_event(event, ctx);
raw_spin_unlock_irq(&ctx->lock);
}
|
C
|
linux
| 0 |
CVE-2013-0881
|
https://www.cvedetails.com/cve/CVE-2013-0881/
|
CWE-20
|
https://github.com/chromium/chromium/commit/634c5943f46abe8c6280079f6d394dfee08c3c8f
|
634c5943f46abe8c6280079f6d394dfee08c3c8f
|
Disable some more query compositingState asserts.
This gets the tests passing again on Mac. See the bug for the stacktrace.
A future patch will need to actually fix the incorrect reading of
compositingState.
BUG=343179
Review URL: https://codereview.chromium.org/162153002
git-svn-id: svn://svn.chromium.org/blink/trunk@167069 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void RenderLayerCompositor::updateRootLayerAttachment()
{
ensureRootLayer();
}
|
void RenderLayerCompositor::updateRootLayerAttachment()
{
ensureRootLayer();
}
|
C
|
Chrome
| 0 |
CVE-2016-1631
|
https://www.cvedetails.com/cve/CVE-2016-1631/
|
CWE-264
|
https://github.com/chromium/chromium/commit/dd77c2a41c72589d929db0592565125ca629fb2c
|
dd77c2a41c72589d929db0592565125ca629fb2c
|
Fix PPB_Flash_MessageLoop.
This CL suspends script callbacks and resource loads while running nested message loop using PPB_Flash_MessageLoop.
BUG=569496
Review URL: https://codereview.chromium.org/1559113002
Cr-Commit-Position: refs/heads/master@{#374529}
|
virtual ~State() {}
|
virtual ~State() {}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/a44b00c88bc5ea35b5b150217c5fd6e4ce168e58
|
a44b00c88bc5ea35b5b150217c5fd6e4ce168e58
|
Apply behaviour change fix from upstream for previous XPath change.
BUG=58731
TEST=NONE
Review URL: http://codereview.chromium.org/4027006
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@63572 0039d316-1c4b-4281-b951-d872f2087c98
|
xmlXPathNodeSetCreateSize(int size) {
xmlNodeSetPtr ret;
ret = (xmlNodeSetPtr) xmlMalloc(sizeof(xmlNodeSet));
if (ret == NULL) {
xmlXPathErrMemory(NULL, "creating nodeset\n");
return(NULL);
}
memset(ret, 0 , (size_t) sizeof(xmlNodeSet));
if (size < XML_NODESET_DEFAULT)
size = XML_NODESET_DEFAULT;
ret->nodeTab = (xmlNodePtr *) xmlMalloc(size * sizeof(xmlNodePtr));
if (ret->nodeTab == NULL) {
xmlXPathErrMemory(NULL, "creating nodeset\n");
xmlFree(ret);
return(NULL);
}
memset(ret->nodeTab, 0 , size * (size_t) sizeof(xmlNodePtr));
ret->nodeMax = size;
return(ret);
}
|
xmlXPathNodeSetCreateSize(int size) {
xmlNodeSetPtr ret;
ret = (xmlNodeSetPtr) xmlMalloc(sizeof(xmlNodeSet));
if (ret == NULL) {
xmlXPathErrMemory(NULL, "creating nodeset\n");
return(NULL);
}
memset(ret, 0 , (size_t) sizeof(xmlNodeSet));
if (size < XML_NODESET_DEFAULT)
size = XML_NODESET_DEFAULT;
ret->nodeTab = (xmlNodePtr *) xmlMalloc(size * sizeof(xmlNodePtr));
if (ret->nodeTab == NULL) {
xmlXPathErrMemory(NULL, "creating nodeset\n");
xmlFree(ret);
return(NULL);
}
memset(ret->nodeTab, 0 , size * (size_t) sizeof(xmlNodePtr));
ret->nodeMax = size;
return(ret);
}
|
C
|
Chrome
| 0 |
CVE-2013-2874
|
https://www.cvedetails.com/cve/CVE-2013-2874/
|
CWE-264
|
https://github.com/chromium/chromium/commit/c0da7c1c6e9ffe5006e146b6426f987238d4bf2e
|
c0da7c1c6e9ffe5006e146b6426f987238d4bf2e
|
DevTools: handle devtools renderer unresponsiveness during beforeunload event interception
This patch fixes the crash which happens under the following conditions:
1. DevTools window is in undocked state
2. DevTools renderer is unresponsive
3. User attempts to close inspected page
BUG=322380
Review URL: https://codereview.chromium.org/84883002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@237611 0039d316-1c4b-4281-b951-d872f2087c98
|
DevToolsWindow* DevToolsWindow::OpenDevToolsWindow(
content::RenderViewHost* inspected_rvh) {
return ToggleDevToolsWindow(
inspected_rvh, true, DevToolsToggleAction::Show());
}
|
DevToolsWindow* DevToolsWindow::OpenDevToolsWindow(
content::RenderViewHost* inspected_rvh) {
return ToggleDevToolsWindow(
inspected_rvh, true, DevToolsToggleAction::Show());
}
|
C
|
Chrome
| 0 |
CVE-2011-3953
|
https://www.cvedetails.com/cve/CVE-2011-3953/
| null |
https://github.com/chromium/chromium/commit/1a828911013ff501b87aacc5b555e470b31f2909
|
1a828911013ff501b87aacc5b555e470b31f2909
|
Use XFixes to update the clipboard sequence number.
BUG=73478
TEST=manual testing
Review URL: http://codereview.chromium.org/8501002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@109528 0039d316-1c4b-4281-b951-d872f2087c98
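For context on the approach named in the message: the XFixes extension can deliver a notification whenever selection ownership changes, which gives a natural point to bump a clipboard sequence number instead of polling. The following is a hedged, self-contained sketch of that pattern, independent of Chromium's clipboard code; the variable names are illustrative.

#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

static unsigned long clipboard_sequence;   /* bumped on ownership changes */

/* Ask XFixes to report CLIPBOARD owner changes on 'win', then bump the
 * sequence number whenever such an event arrives (simplified event loop). */
static void watch_clipboard(Display *dpy, Window win)
{
    int event_base, error_base;
    Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);

    if (!XFixesQueryExtension(dpy, &event_base, &error_base))
        return;                            /* extension unavailable */
    XFixesSelectSelectionInput(dpy, win, clipboard,
                               XFixesSetSelectionOwnerNotifyMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == event_base + XFixesSelectionNotify)
            clipboard_sequence++;
    }
}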
|
Clipboard::FormatType Clipboard::GetPlainTextFormatType() {
return GdkAtomToString(GDK_TARGET_STRING);
}
|
Clipboard::FormatType Clipboard::GetPlainTextFormatType() {
return GdkAtomToString(GDK_TARGET_STRING);
}
|
C
|
Chrome
| 0 |
CVE-2014-0203
|
https://www.cvedetails.com/cve/CVE-2014-0203/
|
CWE-20
|
https://github.com/torvalds/linux/commit/86acdca1b63e6890540fa19495cfc708beff3d8b
|
86acdca1b63e6890540fa19495cfc708beff3d8b
|
fix autofs/afs/etc. magic mountpoint breakage
We end up trying to kfree() nd.last.name on open("/mnt/tmp", O_CREAT)
if /mnt/tmp is an autofs direct mount. The reason is that nd.last_type
is bogus here; we want LAST_BIND for everything of that kind and we
get LAST_NORM left over from finding parent directory.
So make sure that it *is* set properly; set to LAST_BIND before
doing ->follow_link() - for normal symlinks it will be changed
by __vfs_follow_link() and everything else needs it set that way.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
static void *proc_self_follow_link(struct dentry *dentry, struct nameidata *nd)
{
struct pid_namespace *ns = dentry->d_sb->s_fs_info;
pid_t tgid = task_tgid_nr_ns(current, ns);
char tmp[PROC_NUMBUF];
if (!tgid)
return ERR_PTR(-ENOENT);
sprintf(tmp, "%d", task_tgid_nr_ns(current, ns));
return ERR_PTR(vfs_follow_link(nd,tmp));
}
|
static void *proc_self_follow_link(struct dentry *dentry, struct nameidata *nd)
{
struct pid_namespace *ns = dentry->d_sb->s_fs_info;
pid_t tgid = task_tgid_nr_ns(current, ns);
char tmp[PROC_NUMBUF];
if (!tgid)
return ERR_PTR(-ENOENT);
sprintf(tmp, "%d", task_tgid_nr_ns(current, ns));
return ERR_PTR(vfs_follow_link(nd,tmp));
}
|
C
|
linux
| 0 |
CVE-2014-3597
|
https://www.cvedetails.com/cve/CVE-2014-3597/
|
CWE-119
|
https://github.com/php/php-src/commit/2fefae47716d501aec41c1102f3fd4531f070b05
|
2fefae47716d501aec41c1102f3fd4531f070b05
|
Fixed Sec Bug #67717 segfault in dns_get_record CVE-2014-3597
Incomplete fix for CVE-2014-4049
Check possible buffer overflow
- pass real buffer end to dn_expand calls
- check buffer len before each read
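The patched function below leans on a CHECKCP() guard before every fixed-size read and passes the true buffer end to dn_expand(). CHECKCP itself is defined outside the excerpt; a plausible minimal definition, shown here only as an assumption about its shape, is:

/* Hypothetical reconstruction of the bounds-check macro used in the
 * patched php_parserr(): bail out if fewer than n bytes remain between
 * the read cursor cp and the end of the DNS answer buffer. */
#define CHECKCP(n) do {           \
    if ((n) > (size_t)(end - cp)) \
        return NULL;              \
} while (0)

The important property is that the comparison is made against the remaining length, so a large n can never push the cursor past end before the check fires.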
|
static u_char *php_parserr(u_char *cp, querybuf *answer, int type_to_fetch, int store, int raw, zval **subarray)
static u_char *php_parserr(u_char *cp, u_char *end, querybuf *answer, int type_to_fetch, int store, int raw, zval **subarray)
{
u_short type, class, dlen;
u_long ttl;
long n, i;
u_short s;
u_char *tp, *p;
char name[MAXHOSTNAMELEN];
int have_v6_break = 0, in_v6_break = 0;
*subarray = NULL;
n = dn_expand(answer->qb2, end, cp, name, sizeof(name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
CHECKCP(10);
GETSHORT(type, cp);
GETSHORT(class, cp);
GETLONG(ttl, cp);
GETSHORT(dlen, cp);
CHECKCP(dlen);
if (type_to_fetch != T_ANY && type != type_to_fetch) {
cp += dlen;
return cp;
}
if (!store) {
cp += dlen;
return cp;
}
ALLOC_INIT_ZVAL(*subarray);
array_init(*subarray);
add_assoc_string(*subarray, "host", name, 1);
add_assoc_string(*subarray, "class", "IN", 1);
add_assoc_long(*subarray, "ttl", ttl);
if (raw) {
add_assoc_long(*subarray, "type", type);
add_assoc_stringl(*subarray, "data", (char*) cp, (uint) dlen, 1);
cp += dlen;
return cp;
}
switch (type) {
case DNS_T_A:
CHECKCP(4);
add_assoc_string(*subarray, "type", "A", 1);
snprintf(name, sizeof(name), "%d.%d.%d.%d", cp[0], cp[1], cp[2], cp[3]);
add_assoc_string(*subarray, "ip", name, 1);
cp += dlen;
break;
case DNS_T_MX:
CHECKCP(2);
add_assoc_string(*subarray, "type", "MX", 1);
GETSHORT(n, cp);
add_assoc_long(*subarray, "pri", n);
/* no break; */
case DNS_T_CNAME:
if (type == DNS_T_CNAME) {
add_assoc_string(*subarray, "type", "CNAME", 1);
}
/* no break; */
case DNS_T_NS:
if (type == DNS_T_NS) {
add_assoc_string(*subarray, "type", "NS", 1);
}
/* no break; */
case DNS_T_PTR:
if (type == DNS_T_PTR) {
add_assoc_string(*subarray, "type", "PTR", 1);
}
n = dn_expand(answer->qb2, end, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "target", name, 1);
break;
case DNS_T_HINFO:
/* See RFC 1010 for values */
add_assoc_string(*subarray, "type", "HINFO", 1);
CHECKCP(1);
n = *cp & 0xFF;
cp++;
CHECKCP(n);
add_assoc_stringl(*subarray, "cpu", (char*)cp, n, 1);
cp += n;
CHECKCP(1);
n = *cp & 0xFF;
cp++;
CHECKCP(n);
add_assoc_stringl(*subarray, "os", (char*)cp, n, 1);
cp += n;
break;
case DNS_T_TXT:
{
int l1 = 0, l2 = 0;
zval *entries = NULL;
add_assoc_string(*subarray, "type", "TXT", 1);
tp = emalloc(dlen + 1);
MAKE_STD_ZVAL(entries);
array_init(entries);
while (l1 < dlen) {
n = cp[l1];
if ((l1 + n) >= dlen) {
n = dlen - (l1 + 1);
}
if (n) {
memcpy(tp + l2 , cp + l1 + 1, n);
add_next_index_stringl(entries, cp + l1 + 1, n, 1);
}
l1 = l1 + n + 1;
l2 = l2 + n;
}
tp[l2] = '\0';
cp += dlen;
add_assoc_stringl(*subarray, "txt", tp, l2, 0);
add_assoc_zval(*subarray, "entries", entries);
}
break;
case DNS_T_SOA:
add_assoc_string(*subarray, "type", "SOA", 1);
n = dn_expand(answer->qb2, end, cp, name, (sizeof name) -2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "mname", name, 1);
n = dn_expand(answer->qb2, end, cp, name, (sizeof name) -2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "rname", name, 1);
CHECKCP(5*4);
GETLONG(n, cp);
add_assoc_long(*subarray, "serial", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "refresh", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "retry", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "expire", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "minimum-ttl", n);
break;
case DNS_T_AAAA:
tp = (u_char*)name;
CHECKCP(8*2);
for(i=0; i < 8; i++) {
GETSHORT(s, cp);
if (s != 0) {
if (tp > (u_char *)name) {
in_v6_break = 0;
tp[0] = ':';
tp++;
}
tp += sprintf((char*)tp,"%x",s);
} else {
if (!have_v6_break) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
} else if (!in_v6_break) {
tp[0] = ':';
tp++;
tp[0] = '0';
tp++;
}
}
}
if (have_v6_break && in_v6_break) {
tp[0] = ':';
tp++;
}
tp[0] = '\0';
add_assoc_string(*subarray, "type", "AAAA", 1);
add_assoc_string(*subarray, "ipv6", name, 1);
break;
case DNS_T_A6:
p = cp;
add_assoc_string(*subarray, "type", "A6", 1);
CHECKCP(1);
n = ((int)cp[0]) & 0xFF;
cp++;
add_assoc_long(*subarray, "masklen", n);
tp = (u_char*)name;
if (n > 15) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
}
if (n % 16 > 8) {
/* Partial short */
if (cp[0] != 0) {
if (tp > (u_char *)name) {
in_v6_break = 0;
tp[0] = ':';
tp++;
}
sprintf((char*)tp, "%x", cp[0] & 0xFF);
} else {
if (!have_v6_break) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
} else if (!in_v6_break) {
tp[0] = ':';
tp++;
tp[0] = '0';
tp++;
}
}
cp++;
}
for (i = (n + 8) / 16; i < 8; i++) {
CHECKCP(2);
GETSHORT(s, cp);
if (s != 0) {
if (tp > (u_char *)name) {
in_v6_break = 0;
tp[0] = ':';
tp++;
}
tp += sprintf((char*)tp,"%x",s);
} else {
if (!have_v6_break) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
} else if (!in_v6_break) {
tp[0] = ':';
tp++;
tp[0] = '0';
tp++;
}
}
}
if (have_v6_break && in_v6_break) {
tp[0] = ':';
tp++;
}
tp[0] = '\0';
add_assoc_string(*subarray, "ipv6", name, 1);
if (cp < p + dlen) {
n = dn_expand(answer->qb2, end, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "chain", name, 1);
}
break;
case DNS_T_SRV:
CHECKCP(3*2);
add_assoc_string(*subarray, "type", "SRV", 1);
GETSHORT(n, cp);
add_assoc_long(*subarray, "pri", n);
GETSHORT(n, cp);
add_assoc_long(*subarray, "weight", n);
GETSHORT(n, cp);
add_assoc_long(*subarray, "port", n);
n = dn_expand(answer->qb2, end, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "target", name, 1);
break;
case DNS_T_NAPTR:
CHECKCP(2*2);
add_assoc_string(*subarray, "type", "NAPTR", 1);
GETSHORT(n, cp);
add_assoc_long(*subarray, "order", n);
GETSHORT(n, cp);
add_assoc_long(*subarray, "pref", n);
CHECKCP(1);
n = (cp[0] & 0xFF);
cp++;
CHECKCP(n);
add_assoc_stringl(*subarray, "flags", (char*)cp, n, 1);
cp += n;
CHECKCP(1);
n = (cp[0] & 0xFF);
cp++;
CHECKCP(n);
add_assoc_stringl(*subarray, "services", (char*)cp, n, 1);
cp += n;
CHECKCP(1);
n = (cp[0] & 0xFF);
cp++;
CHECKCP(n);
add_assoc_stringl(*subarray, "regex", (char*)cp, n, 1);
cp += n;
n = dn_expand(answer->qb2, end, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "replacement", name, 1);
break;
default:
zval_ptr_dtor(subarray);
*subarray = NULL;
cp += dlen;
break;
}
return cp;
}
|
static u_char *php_parserr(u_char *cp, querybuf *answer, int type_to_fetch, int store, int raw, zval **subarray)
{
u_short type, class, dlen;
u_long ttl;
long n, i;
u_short s;
u_char *tp, *p;
char name[MAXHOSTNAMELEN];
int have_v6_break = 0, in_v6_break = 0;
*subarray = NULL;
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, sizeof(name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
GETSHORT(type, cp);
GETSHORT(class, cp);
GETLONG(ttl, cp);
GETSHORT(dlen, cp);
if (type_to_fetch != T_ANY && type != type_to_fetch) {
cp += dlen;
return cp;
}
if (!store) {
cp += dlen;
return cp;
}
ALLOC_INIT_ZVAL(*subarray);
array_init(*subarray);
add_assoc_string(*subarray, "host", name, 1);
add_assoc_string(*subarray, "class", "IN", 1);
add_assoc_long(*subarray, "ttl", ttl);
if (raw) {
add_assoc_long(*subarray, "type", type);
add_assoc_stringl(*subarray, "data", (char*) cp, (uint) dlen, 1);
cp += dlen;
return cp;
}
switch (type) {
case DNS_T_A:
add_assoc_string(*subarray, "type", "A", 1);
snprintf(name, sizeof(name), "%d.%d.%d.%d", cp[0], cp[1], cp[2], cp[3]);
add_assoc_string(*subarray, "ip", name, 1);
cp += dlen;
break;
case DNS_T_MX:
add_assoc_string(*subarray, "type", "MX", 1);
GETSHORT(n, cp);
add_assoc_long(*subarray, "pri", n);
/* no break; */
case DNS_T_CNAME:
if (type == DNS_T_CNAME) {
add_assoc_string(*subarray, "type", "CNAME", 1);
}
/* no break; */
case DNS_T_NS:
if (type == DNS_T_NS) {
add_assoc_string(*subarray, "type", "NS", 1);
}
/* no break; */
case DNS_T_PTR:
if (type == DNS_T_PTR) {
add_assoc_string(*subarray, "type", "PTR", 1);
}
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "target", name, 1);
break;
case DNS_T_HINFO:
/* See RFC 1010 for values */
add_assoc_string(*subarray, "type", "HINFO", 1);
n = *cp & 0xFF;
cp++;
add_assoc_stringl(*subarray, "cpu", (char*)cp, n, 1);
cp += n;
n = *cp & 0xFF;
cp++;
add_assoc_stringl(*subarray, "os", (char*)cp, n, 1);
cp += n;
break;
case DNS_T_TXT:
{
int ll = 0;
zval *entries = NULL;
add_assoc_string(*subarray, "type", "TXT", 1);
tp = emalloc(dlen + 1);
MAKE_STD_ZVAL(entries);
array_init(entries);
while (ll < dlen) {
n = cp[ll];
if ((ll + n) >= dlen) {
n = dlen - (ll + 1);
}
memcpy(tp + ll , cp + ll + 1, n);
add_next_index_stringl(entries, cp + ll + 1, n, 1);
ll = ll + n + 1;
}
tp[dlen] = '\0';
cp += dlen;
add_assoc_stringl(*subarray, "txt", tp, (dlen>0)?dlen - 1:0, 0);
add_assoc_zval(*subarray, "entries", entries);
}
break;
case DNS_T_SOA:
add_assoc_string(*subarray, "type", "SOA", 1);
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, (sizeof name) -2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "mname", name, 1);
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, (sizeof name) -2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "rname", name, 1);
GETLONG(n, cp);
add_assoc_long(*subarray, "serial", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "refresh", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "retry", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "expire", n);
GETLONG(n, cp);
add_assoc_long(*subarray, "minimum-ttl", n);
break;
case DNS_T_AAAA:
tp = (u_char*)name;
for(i=0; i < 8; i++) {
GETSHORT(s, cp);
if (s != 0) {
if (tp > (u_char *)name) {
in_v6_break = 0;
tp[0] = ':';
tp++;
}
tp += sprintf((char*)tp,"%x",s);
} else {
if (!have_v6_break) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
} else if (!in_v6_break) {
tp[0] = ':';
tp++;
tp[0] = '0';
tp++;
}
}
}
if (have_v6_break && in_v6_break) {
tp[0] = ':';
tp++;
}
tp[0] = '\0';
add_assoc_string(*subarray, "type", "AAAA", 1);
add_assoc_string(*subarray, "ipv6", name, 1);
break;
case DNS_T_A6:
p = cp;
add_assoc_string(*subarray, "type", "A6", 1);
n = ((int)cp[0]) & 0xFF;
cp++;
add_assoc_long(*subarray, "masklen", n);
tp = (u_char*)name;
if (n > 15) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
}
if (n % 16 > 8) {
/* Partial short */
if (cp[0] != 0) {
if (tp > (u_char *)name) {
in_v6_break = 0;
tp[0] = ':';
tp++;
}
sprintf((char*)tp, "%x", cp[0] & 0xFF);
} else {
if (!have_v6_break) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
} else if (!in_v6_break) {
tp[0] = ':';
tp++;
tp[0] = '0';
tp++;
}
}
cp++;
}
for (i = (n + 8) / 16; i < 8; i++) {
GETSHORT(s, cp);
if (s != 0) {
if (tp > (u_char *)name) {
in_v6_break = 0;
tp[0] = ':';
tp++;
}
tp += sprintf((char*)tp,"%x",s);
} else {
if (!have_v6_break) {
have_v6_break = 1;
in_v6_break = 1;
tp[0] = ':';
tp++;
} else if (!in_v6_break) {
tp[0] = ':';
tp++;
tp[0] = '0';
tp++;
}
}
}
if (have_v6_break && in_v6_break) {
tp[0] = ':';
tp++;
}
tp[0] = '\0';
add_assoc_string(*subarray, "ipv6", name, 1);
if (cp < p + dlen) {
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "chain", name, 1);
}
break;
case DNS_T_SRV:
add_assoc_string(*subarray, "type", "SRV", 1);
GETSHORT(n, cp);
add_assoc_long(*subarray, "pri", n);
GETSHORT(n, cp);
add_assoc_long(*subarray, "weight", n);
GETSHORT(n, cp);
add_assoc_long(*subarray, "port", n);
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "target", name, 1);
break;
case DNS_T_NAPTR:
add_assoc_string(*subarray, "type", "NAPTR", 1);
GETSHORT(n, cp);
add_assoc_long(*subarray, "order", n);
GETSHORT(n, cp);
add_assoc_long(*subarray, "pref", n);
n = (cp[0] & 0xFF);
add_assoc_stringl(*subarray, "flags", (char*)++cp, n, 1);
cp += n;
n = (cp[0] & 0xFF);
add_assoc_stringl(*subarray, "services", (char*)++cp, n, 1);
cp += n;
n = (cp[0] & 0xFF);
add_assoc_stringl(*subarray, "regex", (char*)++cp, n, 1);
cp += n;
n = dn_expand(answer->qb2, answer->qb2+65536, cp, name, (sizeof name) - 2);
if (n < 0) {
return NULL;
}
cp += n;
add_assoc_string(*subarray, "replacement", name, 1);
break;
default:
zval_ptr_dtor(subarray);
*subarray = NULL;
cp += dlen;
break;
}
return cp;
}
|
C
|
php-src
| 1 |
CVE-2013-4282
|
https://www.cvedetails.com/cve/CVE-2013-4282/
|
CWE-119
|
https://cgit.freedesktop.org/spice/spice/commit/?id=8af619009660b24e0b41ad26b30289eea288fcc2
|
8af619009660b24e0b41ad26b30289eea288fcc2
| null |
SPICE_GNUC_VISIBLE int spice_server_migrate_client_state(SpiceServer *s)
{
spice_assert(reds == s);
if (!reds_main_channel_connected()) {
return SPICE_MIGRATE_CLIENT_NONE;
} else if (reds->mig_wait_connect) {
return SPICE_MIGRATE_CLIENT_WAITING;
} else {
return SPICE_MIGRATE_CLIENT_READY;
}
return 0;
}
|
SPICE_GNUC_VISIBLE int spice_server_migrate_client_state(SpiceServer *s)
{
spice_assert(reds == s);
if (!reds_main_channel_connected()) {
return SPICE_MIGRATE_CLIENT_NONE;
} else if (reds->mig_wait_connect) {
return SPICE_MIGRATE_CLIENT_WAITING;
} else {
return SPICE_MIGRATE_CLIENT_READY;
}
return 0;
}
|
C
|
spice
| 0 |
CVE-2019-5827
|
https://www.cvedetails.com/cve/CVE-2019-5827/
|
CWE-190
|
https://github.com/chromium/chromium/commit/517ac71c9ee27f856f9becde8abea7d1604af9d4
|
517ac71c9ee27f856f9becde8abea7d1604af9d4
|
sqlite: backport bugfixes for dbfuzz2
Bug: 952406
Change-Id: Icbec429742048d6674828726c96d8e265c41b595
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1568152
Reviewed-by: Chris Mumford <cmumford@google.com>
Commit-Queue: Darwin Huang <huangdarwin@chromium.org>
Cr-Commit-Position: refs/heads/master@{#651030}
|
static int btreeInitPage(MemPage *pPage){
int pc; /* Address of a freeblock within pPage->aData[] */
u8 hdr; /* Offset to beginning of page header */
u8 *data; /* Equal to pPage->aData */
BtShared *pBt; /* The main btree structure */
int usableSize; /* Amount of usable space on each page */
u16 cellOffset; /* Offset from start of page to first cell pointer */
int nFree; /* Number of unused bytes on the page */
int top; /* First byte of the cell content area */
int iCellFirst; /* First allowable cell or freeblock offset */
int iCellLast; /* Last possible cell or freeblock offset */
assert( pPage->pBt!=0 );
assert( pPage->pBt->db!=0 );
assert( sqlite3_mutex_held(pPage->pBt->mutex) );
assert( pPage->pgno==sqlite3PagerPagenumber(pPage->pDbPage) );
assert( pPage == sqlite3PagerGetExtra(pPage->pDbPage) );
assert( pPage->aData == sqlite3PagerGetData(pPage->pDbPage) );
assert( pPage->isInit==0 );
pBt = pPage->pBt;
hdr = pPage->hdrOffset;
data = pPage->aData;
/* EVIDENCE-OF: R-28594-02890 The one-byte flag at offset 0 indicating
** the b-tree page type. */
if( decodeFlags(pPage, data[hdr]) ){
return SQLITE_CORRUPT_PAGE(pPage);
}
assert( pBt->pageSize>=512 && pBt->pageSize<=65536 );
pPage->maskPage = (u16)(pBt->pageSize - 1);
pPage->nOverflow = 0;
usableSize = pBt->usableSize;
pPage->cellOffset = cellOffset = hdr + 8 + pPage->childPtrSize;
pPage->aDataEnd = &data[usableSize];
pPage->aCellIdx = &data[cellOffset];
pPage->aDataOfst = &data[pPage->childPtrSize];
/* EVIDENCE-OF: R-58015-48175 The two-byte integer at offset 5 designates
** the start of the cell content area. A zero value for this integer is
** interpreted as 65536. */
top = get2byteNotZero(&data[hdr+5]);
/* EVIDENCE-OF: R-37002-32774 The two-byte integer at offset 3 gives the
** number of cells on the page. */
pPage->nCell = get2byte(&data[hdr+3]);
if( pPage->nCell>MX_CELL(pBt) ){
/* To many cells for a single page. The page must be corrupt */
return SQLITE_CORRUPT_PAGE(pPage);
}
testcase( pPage->nCell==MX_CELL(pBt) );
/* EVIDENCE-OF: R-24089-57979 If a page contains no cells (which is only
** possible for a root page of a table that contains no rows) then the
** offset to the cell content area will equal the page size minus the
** bytes of reserved space. */
assert( pPage->nCell>0 || top==usableSize || CORRUPT_DB );
/* A malformed database page might cause us to read past the end
** of page when parsing a cell.
**
** The following block of code checks early to see if a cell extends
** past the end of a page boundary and causes SQLITE_CORRUPT to be
** returned if it does.
*/
iCellFirst = cellOffset + 2*pPage->nCell;
iCellLast = usableSize - 4;
if( pBt->db->flags & SQLITE_CellSizeCk ){
int i; /* Index into the cell pointer array */
int sz; /* Size of a cell */
if( !pPage->leaf ) iCellLast--;
for(i=0; i<pPage->nCell; i++){
pc = get2byteAligned(&data[cellOffset+i*2]);
testcase( pc==iCellFirst );
testcase( pc==iCellLast );
if( pc<iCellFirst || pc>iCellLast ){
return SQLITE_CORRUPT_PAGE(pPage);
}
sz = pPage->xCellSize(pPage, &data[pc]);
testcase( pc+sz==usableSize );
if( pc+sz>usableSize ){
return SQLITE_CORRUPT_PAGE(pPage);
}
}
if( !pPage->leaf ) iCellLast++;
}
/* Compute the total free space on the page
** EVIDENCE-OF: R-23588-34450 The two-byte integer at offset 1 gives the
** start of the first freeblock on the page, or is zero if there are no
** freeblocks. */
pc = get2byte(&data[hdr+1]);
nFree = data[hdr+7] + top; /* Init nFree to non-freeblock free space */
if( pc>0 ){
u32 next, size;
if( pc<iCellFirst ){
/* EVIDENCE-OF: R-55530-52930 In a well-formed b-tree page, there will
** always be at least one cell before the first freeblock.
*/
return SQLITE_CORRUPT_PAGE(pPage);
}
while( 1 ){
if( pc>iCellLast ){
/* Freeblock off the end of the page */
return SQLITE_CORRUPT_PAGE(pPage);
}
next = get2byte(&data[pc]);
size = get2byte(&data[pc+2]);
nFree = nFree + size;
if( next<=pc+size+3 ) break;
pc = next;
}
if( next>0 ){
/* Freeblock not in ascending order */
return SQLITE_CORRUPT_PAGE(pPage);
}
if( pc+size>(unsigned int)usableSize ){
/* Last freeblock extends past page end */
return SQLITE_CORRUPT_PAGE(pPage);
}
}
/* At this point, nFree contains the sum of the offset to the start
** of the cell-content area plus the number of free bytes within
** the cell-content area. If this is greater than the usable-size
** of the page, then the page must be corrupted. This check also
** serves to verify that the offset to the start of the cell-content
** area, according to the page header, lies within the page.
*/
if( nFree>usableSize ){
return SQLITE_CORRUPT_PAGE(pPage);
}
pPage->nFree = (u16)(nFree - iCellFirst);
pPage->isInit = 1;
return SQLITE_OK;
}
|
static int btreeInitPage(MemPage *pPage){
int pc; /* Address of a freeblock within pPage->aData[] */
u8 hdr; /* Offset to beginning of page header */
u8 *data; /* Equal to pPage->aData */
BtShared *pBt; /* The main btree structure */
int usableSize; /* Amount of usable space on each page */
u16 cellOffset; /* Offset from start of page to first cell pointer */
int nFree; /* Number of unused bytes on the page */
int top; /* First byte of the cell content area */
int iCellFirst; /* First allowable cell or freeblock offset */
int iCellLast; /* Last possible cell or freeblock offset */
assert( pPage->pBt!=0 );
assert( pPage->pBt->db!=0 );
assert( sqlite3_mutex_held(pPage->pBt->mutex) );
assert( pPage->pgno==sqlite3PagerPagenumber(pPage->pDbPage) );
assert( pPage == sqlite3PagerGetExtra(pPage->pDbPage) );
assert( pPage->aData == sqlite3PagerGetData(pPage->pDbPage) );
assert( pPage->isInit==0 );
pBt = pPage->pBt;
hdr = pPage->hdrOffset;
data = pPage->aData;
/* EVIDENCE-OF: R-28594-02890 The one-byte flag at offset 0 indicating
** the b-tree page type. */
if( decodeFlags(pPage, data[hdr]) ){
return SQLITE_CORRUPT_PAGE(pPage);
}
assert( pBt->pageSize>=512 && pBt->pageSize<=65536 );
pPage->maskPage = (u16)(pBt->pageSize - 1);
pPage->nOverflow = 0;
usableSize = pBt->usableSize;
pPage->cellOffset = cellOffset = hdr + 8 + pPage->childPtrSize;
pPage->aDataEnd = &data[usableSize];
pPage->aCellIdx = &data[cellOffset];
pPage->aDataOfst = &data[pPage->childPtrSize];
/* EVIDENCE-OF: R-58015-48175 The two-byte integer at offset 5 designates
** the start of the cell content area. A zero value for this integer is
** interpreted as 65536. */
top = get2byteNotZero(&data[hdr+5]);
/* EVIDENCE-OF: R-37002-32774 The two-byte integer at offset 3 gives the
** number of cells on the page. */
pPage->nCell = get2byte(&data[hdr+3]);
if( pPage->nCell>MX_CELL(pBt) ){
/* To many cells for a single page. The page must be corrupt */
return SQLITE_CORRUPT_PAGE(pPage);
}
testcase( pPage->nCell==MX_CELL(pBt) );
/* EVIDENCE-OF: R-24089-57979 If a page contains no cells (which is only
** possible for a root page of a table that contains no rows) then the
** offset to the cell content area will equal the page size minus the
** bytes of reserved space. */
assert( pPage->nCell>0 || top==usableSize || CORRUPT_DB );
/* A malformed database page might cause us to read past the end
** of page when parsing a cell.
**
** The following block of code checks early to see if a cell extends
** past the end of a page boundary and causes SQLITE_CORRUPT to be
** returned if it does.
*/
iCellFirst = cellOffset + 2*pPage->nCell;
iCellLast = usableSize - 4;
if( pBt->db->flags & SQLITE_CellSizeCk ){
int i; /* Index into the cell pointer array */
int sz; /* Size of a cell */
if( !pPage->leaf ) iCellLast--;
for(i=0; i<pPage->nCell; i++){
pc = get2byteAligned(&data[cellOffset+i*2]);
testcase( pc==iCellFirst );
testcase( pc==iCellLast );
if( pc<iCellFirst || pc>iCellLast ){
return SQLITE_CORRUPT_PAGE(pPage);
}
sz = pPage->xCellSize(pPage, &data[pc]);
testcase( pc+sz==usableSize );
if( pc+sz>usableSize ){
return SQLITE_CORRUPT_PAGE(pPage);
}
}
if( !pPage->leaf ) iCellLast++;
}
/* Compute the total free space on the page
** EVIDENCE-OF: R-23588-34450 The two-byte integer at offset 1 gives the
** start of the first freeblock on the page, or is zero if there are no
** freeblocks. */
pc = get2byte(&data[hdr+1]);
nFree = data[hdr+7] + top; /* Init nFree to non-freeblock free space */
if( pc>0 ){
u32 next, size;
if( pc<iCellFirst ){
/* EVIDENCE-OF: R-55530-52930 In a well-formed b-tree page, there will
** always be at least one cell before the first freeblock.
*/
return SQLITE_CORRUPT_PAGE(pPage);
}
while( 1 ){
if( pc>iCellLast ){
/* Freeblock off the end of the page */
return SQLITE_CORRUPT_PAGE(pPage);
}
next = get2byte(&data[pc]);
size = get2byte(&data[pc+2]);
nFree = nFree + size;
if( next<=pc+size+3 ) break;
pc = next;
}
if( next>0 ){
/* Freeblock not in ascending order */
return SQLITE_CORRUPT_PAGE(pPage);
}
if( pc+size>(unsigned int)usableSize ){
/* Last freeblock extends past page end */
return SQLITE_CORRUPT_PAGE(pPage);
}
}
/* At this point, nFree contains the sum of the offset to the start
** of the cell-content area plus the number of free bytes within
** the cell-content area. If this is greater than the usable-size
** of the page, then the page must be corrupted. This check also
** serves to verify that the offset to the start of the cell-content
** area, according to the page header, lies within the page.
*/
if( nFree>usableSize ){
return SQLITE_CORRUPT_PAGE(pPage);
}
pPage->nFree = (u16)(nFree - iCellFirst);
pPage->isInit = 1;
return SQLITE_OK;
}
|
C
|
Chrome
| 0 |
CVE-2016-9540
|
https://www.cvedetails.com/cve/CVE-2016-9540/
|
CWE-787
|
https://github.com/vadz/libtiff/commit/5ad9d8016fbb60109302d558f7edb2cb2a3bb8e3
|
5ad9d8016fbb60109302d558f7edb2cb2a3bb8e3
|
* tools/tiffcp.c: fix out-of-bounds write on tiled images with odd
tile width vs image width. Reported as MSVR 35103
by Axel Souchet and Vishal Chauhan from the MSRC Vulnerabilities &
Mitigations team.
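The fix concerns copying tile rows into the destination buffer when the tile width does not divide the image width, so the rightmost column of tiles is only partially used. A hedged sketch of the kind of clamping involved (the names and layout here are invented for illustration, not tiffcp's actual variables):

#include <stdint.h>
#include <string.h>

/* Copy one decoded tile into an image-strip buffer row by row, clamping
 * the bytes copied per row so a partial tile in the rightmost column
 * cannot write past the end of the destination scanline.  (Clamping of
 * the bottom tile row against the image height is omitted here.) */
static void copy_tile_clamped(uint8_t *img, size_t img_rowbytes,
                              size_t img_width_bytes,
                              const uint8_t *tile, size_t tile_rowbytes,
                              uint32_t tile_rows, size_t col_offset_bytes)
{
    size_t n = tile_rowbytes;
    uint32_t row;

    if (col_offset_bytes >= img_width_bytes)
        return;                                   /* nothing visible */
    if (col_offset_bytes + n > img_width_bytes)
        n = img_width_bytes - col_offset_bytes;   /* partial right-edge tile */

    for (row = 0; row < tile_rows; row++)
        memcpy(img + (size_t)row * img_rowbytes + col_offset_bytes,
               tile + (size_t)row * tile_rowbytes, n);
}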
|
cpTag(TIFF* in, TIFF* out, uint16 tag, uint16 count, TIFFDataType type)
{
switch (type) {
case TIFF_SHORT:
if (count == 1) {
uint16 shortv;
CopyField(tag, shortv);
} else if (count == 2) {
uint16 shortv1, shortv2;
CopyField2(tag, shortv1, shortv2);
} else if (count == 4) {
uint16 *tr, *tg, *tb, *ta;
CopyField4(tag, tr, tg, tb, ta);
} else if (count == (uint16) -1) {
uint16 shortv1;
uint16* shortav;
CopyField2(tag, shortv1, shortav);
}
break;
case TIFF_LONG:
{ uint32 longv;
CopyField(tag, longv);
}
break;
case TIFF_RATIONAL:
if (count == 1) {
float floatv;
CopyField(tag, floatv);
} else if (count == (uint16) -1) {
float* floatav;
CopyField(tag, floatav);
}
break;
case TIFF_ASCII:
{ char* stringv;
CopyField(tag, stringv);
}
break;
case TIFF_DOUBLE:
if (count == 1) {
double doublev;
CopyField(tag, doublev);
} else if (count == (uint16) -1) {
double* doubleav;
CopyField(tag, doubleav);
}
break;
default:
TIFFError(TIFFFileName(in),
"Data type %d is not supported, tag %d skipped.",
tag, type);
}
}
|
cpTag(TIFF* in, TIFF* out, uint16 tag, uint16 count, TIFFDataType type)
{
switch (type) {
case TIFF_SHORT:
if (count == 1) {
uint16 shortv;
CopyField(tag, shortv);
} else if (count == 2) {
uint16 shortv1, shortv2;
CopyField2(tag, shortv1, shortv2);
} else if (count == 4) {
uint16 *tr, *tg, *tb, *ta;
CopyField4(tag, tr, tg, tb, ta);
} else if (count == (uint16) -1) {
uint16 shortv1;
uint16* shortav;
CopyField2(tag, shortv1, shortav);
}
break;
case TIFF_LONG:
{ uint32 longv;
CopyField(tag, longv);
}
break;
case TIFF_RATIONAL:
if (count == 1) {
float floatv;
CopyField(tag, floatv);
} else if (count == (uint16) -1) {
float* floatav;
CopyField(tag, floatav);
}
break;
case TIFF_ASCII:
{ char* stringv;
CopyField(tag, stringv);
}
break;
case TIFF_DOUBLE:
if (count == 1) {
double doublev;
CopyField(tag, doublev);
} else if (count == (uint16) -1) {
double* doubleav;
CopyField(tag, doubleav);
}
break;
default:
TIFFError(TIFFFileName(in),
"Data type %d is not supported, tag %d skipped.",
tag, type);
}
}
|
C
|
libtiff
| 0 |
CVE-2018-20182
|
https://www.cvedetails.com/cve/CVE-2018-20182/
|
CWE-119
|
https://github.com/rdesktop/rdesktop/commit/4dca546d04321a610c1835010b5dad85163b65e1
|
4dca546d04321a610c1835010b5dad85163b65e1
|
Malicious RDP server security fixes
This commit includes fixes for a set of 21 vulnerabilities in
rdesktop when a malicious RDP server is used.
All vulnerabilities were identified and reported by Eyal Itkin.
* Add rdp_protocol_error function that is used in several fixes
* Refactor of process_bitmap_updates
* Fix possible integer overflow in s_check_rem() on 32bit arch
* Fix memory corruption in process_bitmap_data - CVE-2018-8794
* Fix remote code execution in process_bitmap_data - CVE-2018-8795
* Fix remote code execution in process_plane - CVE-2018-8797
* Fix Denial of Service in mcs_recv_connect_response - CVE-2018-20175
* Fix Denial of Service in mcs_parse_domain_params - CVE-2018-20175
* Fix Denial of Service in sec_parse_crypt_info - CVE-2018-20176
* Fix Denial of Service in sec_recv - CVE-2018-20176
* Fix minor information leak in rdpdr_process - CVE-2018-8791
* Fix Denial of Service in cssp_read_tsrequest - CVE-2018-8792
* Fix remote code execution in cssp_read_tsrequest - CVE-2018-8793
* Fix Denial of Service in process_bitmap_data - CVE-2018-8796
* Fix minor information leak in rdpsnd_process_ping - CVE-2018-8798
* Fix Denial of Service in process_secondary_order - CVE-2018-8799
* Fix remote code execution in in ui_clip_handle_data - CVE-2018-8800
* Fix major information leak in ui_clip_handle_data - CVE-2018-20174
* Fix memory corruption in rdp_in_unistr - CVE-2018-20177
* Fix Denial of Service in process_demand_active - CVE-2018-20178
* Fix remote code execution in lspci_process - CVE-2018-20179
* Fix remote code execution in rdpsnddbg_process - CVE-2018-20180
* Fix remote code execution in seamless_process - CVE-2018-20181
* Fix remote code execution in seamless_process_line - CVE-2018-20182
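One item in the list above is the 32-bit integer overflow in s_check_rem(). The essence of that class of fix is to compare against the remaining length rather than adding an attacker-influenced count to a pointer; the sketch below is a generic illustration, not rdesktop's exact macro or stream type.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct stream { uint8_t *p, *end; };

/* Overflow-safe "are n more bytes available?" check: never compute
 * s->p + n, which can wrap on 32-bit pointers when n is large. */
static bool stream_check_rem(const struct stream *s, size_t n)
{
    return s->p <= s->end && n <= (size_t)(s->end - s->p);
}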
|
rdp_send_fonts(uint16 seq)
{
STREAM s;
logger(Protocol, Debug, "%s()", __func__);
s = rdp_init_data(8);
out_uint16(s, 0); /* number of fonts */
out_uint16_le(s, 0); /* pad? */
out_uint16_le(s, seq); /* unknown */
out_uint16_le(s, 0x32); /* entry size */
s_mark_end(s);
rdp_send_data(s, RDP_DATA_PDU_FONT2);
}
|
rdp_send_fonts(uint16 seq)
{
STREAM s;
logger(Protocol, Debug, "%s()", __func__);
s = rdp_init_data(8);
out_uint16(s, 0); /* number of fonts */
out_uint16_le(s, 0); /* pad? */
out_uint16_le(s, seq); /* unknown */
out_uint16_le(s, 0x32); /* entry size */
s_mark_end(s);
rdp_send_data(s, RDP_DATA_PDU_FONT2);
}
|
C
|
rdesktop
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/62b8b6e168a12263aab6b88dbef0b900cc37309f
|
62b8b6e168a12263aab6b88dbef0b900cc37309f
|
Add partial magnifier to ash palette.
The partial magnifier will magnify a small portion of the screen, similar to a spyglass.
TEST=./out/Release/ash_unittests --gtest_filter=PartialMagnificationControllerTest.*
TBR=oshima@chromium.org
BUG=616112
Review-Url: https://codereview.chromium.org/2239553002
Cr-Commit-Position: refs/heads/master@{#414124}
|
ash::SystemTrayDelegate* ShellDelegateImpl::CreateSystemTrayDelegate() {
return new DefaultSystemTrayDelegate;
}
|
ash::SystemTrayDelegate* ShellDelegateImpl::CreateSystemTrayDelegate() {
return new DefaultSystemTrayDelegate;
}
|
C
|
Chrome
| 0 |
CVE-2016-10208
|
https://www.cvedetails.com/cve/CVE-2016-10208/
|
CWE-125
|
https://github.com/torvalds/linux/commit/3a4b77cd47bb837b8557595ec7425f281f2ca1fe
|
3a4b77cd47bb837b8557595ec7425f281f2ca1fe
|
ext4: validate s_first_meta_bg at mount time
Ralf Spenneberg reported that he hit a kernel crash when mounting a
modified ext4 image. And it turns out that kernel crashed when
calculating fs overhead (ext4_calculate_overhead()), this is because
the image has very large s_first_meta_bg (debug code shows it's
842150400), and ext4 overruns the memory in count_overhead() when
setting bitmap buffer, which is PAGE_SIZE.
ext4_calculate_overhead():
buf = get_zeroed_page(GFP_NOFS); <=== PAGE_SIZE buffer
blks = count_overhead(sb, i, buf);
count_overhead():
for (j = ext4_bg_num_gdb(sb, grp); j > 0; j--) { <=== j = 842150400
ext4_set_bit(EXT4_B2C(sbi, s++), buf); <=== buffer overrun
count++;
}
This can be reproduced easily for me by this script:
#!/bin/bash
rm -f fs.img
mkdir -p /mnt/ext4
fallocate -l 16M fs.img
mke2fs -t ext4 -O bigalloc,meta_bg,^resize_inode -F fs.img
debugfs -w -R "ssv first_meta_bg 842150400" fs.img
mount -o loop fs.img /mnt/ext4
Fix it by validating s_first_meta_bg first at mount time, and
refusing to mount if its value exceeds the largest possible meta_bg
number.
Reported-by: Ralf Spenneberg <ralf@os-t.de>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
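As an editorial aside, here is a minimal self-contained sketch of the kind of mount-time guard the message describes: an untrusted superblock field is bounds-checked against the number of group-descriptor blocks before it can ever drive a bitmap-setting loop. The names and the standalone main() are hypothetical and are not the actual ext4 patch.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the on-disk superblock field that the attacker controls. */
struct fake_super {
    uint32_t s_first_meta_bg;
};

/* db_count stands in for the number of group-descriptor blocks computed
 * from already-validated superblock fields. Refuse to "mount" when the
 * untrusted field exceeds it, so it can never index past a page-sized
 * bitmap later on. */
static int validate_first_meta_bg(const struct fake_super *es, uint32_t db_count)
{
    if (es->s_first_meta_bg > db_count) {
        fprintf(stderr, "first_meta_bg %u is beyond %u descriptor blocks\n",
                es->s_first_meta_bg, db_count);
        return -1;              /* refuse to mount */
    }
    return 0;
}

int main(void)
{
    struct fake_super bad = { .s_first_meta_bg = 842150400u };
    return validate_first_meta_bg(&bad, 1) ? 1 : 0;
}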
|
static void __exit ext4_exit_fs(void)
{
ext4_destroy_lazyinit_thread();
unregister_as_ext2();
unregister_as_ext3();
unregister_filesystem(&ext4_fs_type);
destroy_inodecache();
ext4_exit_mballoc();
ext4_exit_sysfs();
ext4_exit_system_zone();
ext4_exit_pageio();
ext4_exit_es();
}
|
static void __exit ext4_exit_fs(void)
{
ext4_destroy_lazyinit_thread();
unregister_as_ext2();
unregister_as_ext3();
unregister_filesystem(&ext4_fs_type);
destroy_inodecache();
ext4_exit_mballoc();
ext4_exit_sysfs();
ext4_exit_system_zone();
ext4_exit_pageio();
ext4_exit_es();
}
|
C
|
linux
| 0 |
CVE-2017-7177
|
https://www.cvedetails.com/cve/CVE-2017-7177/
|
CWE-358
|
https://github.com/inliniac/suricata/commit/4a04f814b15762eb446a5ead4d69d021512df6f8
|
4a04f814b15762eb446a5ead4d69d021512df6f8
|
defrag - take protocol into account during re-assembly
The IP protocol was not being used to match fragments with
their packets allowing a carefully constructed packet
with a different protocol to be matched, allowing re-assembly
to complete, creating a packet that would not be re-assembled
by the destination host.
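A minimal sketch of the idea behind this fix, using hypothetical names rather than Suricata's actual DefragTracker code: the key that groups fragments must include the IP protocol, so a fragment carrying a different protocol can no longer complete someone else's reassembly.

#include <stdint.h>
#include <stdio.h>

struct frag_key {
    uint32_t src;
    uint32_t dst;
    uint16_t ip_id;
    uint8_t  proto;   /* the field the fix adds to the match */
};

static int frag_key_equal(const struct frag_key *a, const struct frag_key *b)
{
    return a->src == b->src && a->dst == b->dst &&
           a->ip_id == b->ip_id && a->proto == b->proto;
}

int main(void)
{
    struct frag_key icmp_frag = { 0x0a000001u, 0x0a000002u, 1, 1  };  /* ICMP */
    struct frag_key udp_frag  = { 0x0a000001u, 0x0a000002u, 1, 17 };  /* UDP */
    printf("same tracker? %s\n",
           frag_key_equal(&icmp_frag, &udp_frag) ? "yes" : "no");
    return 0;
}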
|
DefragDoSturgesNovakTest(int policy, u_char *expected, size_t expected_len)
{
int i;
int ret = 0;
DefragInit();
/*
* Build the packets.
*/
int id = 1;
Packet *packets[17];
memset(packets, 0x00, sizeof(packets));
/*
* Original fragments.
*/
/* A*24 at 0. */
packets[0] = BuildTestPacket(IPPROTO_ICMP, id, 0, 1, 'A', 24);
/* B*15 at 32. */
packets[1] = BuildTestPacket(IPPROTO_ICMP, id, 32 >> 3, 1, 'B', 16);
/* C*24 at 48. */
packets[2] = BuildTestPacket(IPPROTO_ICMP, id, 48 >> 3, 1, 'C', 24);
/* D*8 at 80. */
packets[3] = BuildTestPacket(IPPROTO_ICMP, id, 80 >> 3, 1, 'D', 8);
/* E*16 at 104. */
packets[4] = BuildTestPacket(IPPROTO_ICMP, id, 104 >> 3, 1, 'E', 16);
/* F*24 at 120. */
packets[5] = BuildTestPacket(IPPROTO_ICMP, id, 120 >> 3, 1, 'F', 24);
/* G*16 at 144. */
packets[6] = BuildTestPacket(IPPROTO_ICMP, id, 144 >> 3, 1, 'G', 16);
/* H*16 at 160. */
packets[7] = BuildTestPacket(IPPROTO_ICMP, id, 160 >> 3, 1, 'H', 16);
/* I*8 at 176. */
packets[8] = BuildTestPacket(IPPROTO_ICMP, id, 176 >> 3, 1, 'I', 8);
/*
* Overlapping subsequent fragments.
*/
/* J*32 at 8. */
packets[9] = BuildTestPacket(IPPROTO_ICMP, id, 8 >> 3, 1, 'J', 32);
/* K*24 at 48. */
packets[10] = BuildTestPacket(IPPROTO_ICMP, id, 48 >> 3, 1, 'K', 24);
/* L*24 at 72. */
packets[11] = BuildTestPacket(IPPROTO_ICMP, id, 72 >> 3, 1, 'L', 24);
/* M*24 at 96. */
packets[12] = BuildTestPacket(IPPROTO_ICMP, id, 96 >> 3, 1, 'M', 24);
/* N*8 at 128. */
packets[13] = BuildTestPacket(IPPROTO_ICMP, id, 128 >> 3, 1, 'N', 8);
/* O*8 at 152. */
packets[14] = BuildTestPacket(IPPROTO_ICMP, id, 152 >> 3, 1, 'O', 8);
/* P*8 at 160. */
packets[15] = BuildTestPacket(IPPROTO_ICMP, id, 160 >> 3, 1, 'P', 8);
/* Q*16 at 176. */
packets[16] = BuildTestPacket(IPPROTO_ICMP, id, 176 >> 3, 0, 'Q', 16);
default_policy = policy;
/* Send all but the last. */
for (i = 0; i < 9; i++) {
Packet *tp = Defrag(NULL, NULL, packets[i], NULL);
if (tp != NULL) {
SCFree(tp);
goto end;
}
if (ENGINE_ISSET_EVENT(packets[i], IPV4_FRAG_OVERLAP)) {
goto end;
}
}
int overlap = 0;
for (; i < 16; i++) {
Packet *tp = Defrag(NULL, NULL, packets[i], NULL);
if (tp != NULL) {
SCFree(tp);
goto end;
}
if (ENGINE_ISSET_EVENT(packets[i], IPV4_FRAG_OVERLAP)) {
overlap++;
}
}
if (!overlap) {
goto end;
}
/* And now the last one. */
Packet *reassembled = Defrag(NULL, NULL, packets[16], NULL);
if (reassembled == NULL) {
goto end;
}
if (IPV4_GET_HLEN(reassembled) != 20) {
goto end;
}
if (IPV4_GET_IPLEN(reassembled) != 20 + 192) {
goto end;
}
if (memcmp(GET_PKT_DATA(reassembled) + 20, expected, expected_len) != 0) {
goto end;
}
SCFree(reassembled);
/* Make sure all frags were returned back to the pool. */
if (defrag_context->frag_pool->outstanding != 0) {
goto end;
}
ret = 1;
end:
for (i = 0; i < 17; i++) {
SCFree(packets[i]);
}
DefragDestroy();
return ret;
}
|
DefragDoSturgesNovakTest(int policy, u_char *expected, size_t expected_len)
{
int i;
int ret = 0;
DefragInit();
/*
* Build the packets.
*/
int id = 1;
Packet *packets[17];
memset(packets, 0x00, sizeof(packets));
/*
* Original fragments.
*/
/* A*24 at 0. */
packets[0] = BuildTestPacket(id, 0, 1, 'A', 24);
/* B*15 at 32. */
packets[1] = BuildTestPacket(id, 32 >> 3, 1, 'B', 16);
/* C*24 at 48. */
packets[2] = BuildTestPacket(id, 48 >> 3, 1, 'C', 24);
/* D*8 at 80. */
packets[3] = BuildTestPacket(id, 80 >> 3, 1, 'D', 8);
/* E*16 at 104. */
packets[4] = BuildTestPacket(id, 104 >> 3, 1, 'E', 16);
/* F*24 at 120. */
packets[5] = BuildTestPacket(id, 120 >> 3, 1, 'F', 24);
/* G*16 at 144. */
packets[6] = BuildTestPacket(id, 144 >> 3, 1, 'G', 16);
/* H*16 at 160. */
packets[7] = BuildTestPacket(id, 160 >> 3, 1, 'H', 16);
/* I*8 at 176. */
packets[8] = BuildTestPacket(id, 176 >> 3, 1, 'I', 8);
/*
* Overlapping subsequent fragments.
*/
/* J*32 at 8. */
packets[9] = BuildTestPacket(id, 8 >> 3, 1, 'J', 32);
/* K*24 at 48. */
packets[10] = BuildTestPacket(id, 48 >> 3, 1, 'K', 24);
/* L*24 at 72. */
packets[11] = BuildTestPacket(id, 72 >> 3, 1, 'L', 24);
/* M*24 at 96. */
packets[12] = BuildTestPacket(id, 96 >> 3, 1, 'M', 24);
/* N*8 at 128. */
packets[13] = BuildTestPacket(id, 128 >> 3, 1, 'N', 8);
/* O*8 at 152. */
packets[14] = BuildTestPacket(id, 152 >> 3, 1, 'O', 8);
/* P*8 at 160. */
packets[15] = BuildTestPacket(id, 160 >> 3, 1, 'P', 8);
/* Q*16 at 176. */
packets[16] = BuildTestPacket(id, 176 >> 3, 0, 'Q', 16);
default_policy = policy;
/* Send all but the last. */
for (i = 0; i < 9; i++) {
Packet *tp = Defrag(NULL, NULL, packets[i], NULL);
if (tp != NULL) {
SCFree(tp);
goto end;
}
if (ENGINE_ISSET_EVENT(packets[i], IPV4_FRAG_OVERLAP)) {
goto end;
}
}
int overlap = 0;
for (; i < 16; i++) {
Packet *tp = Defrag(NULL, NULL, packets[i], NULL);
if (tp != NULL) {
SCFree(tp);
goto end;
}
if (ENGINE_ISSET_EVENT(packets[i], IPV4_FRAG_OVERLAP)) {
overlap++;
}
}
if (!overlap) {
goto end;
}
/* And now the last one. */
Packet *reassembled = Defrag(NULL, NULL, packets[16], NULL);
if (reassembled == NULL) {
goto end;
}
if (IPV4_GET_HLEN(reassembled) != 20) {
goto end;
}
if (IPV4_GET_IPLEN(reassembled) != 20 + 192) {
goto end;
}
if (memcmp(GET_PKT_DATA(reassembled) + 20, expected, expected_len) != 0) {
goto end;
}
SCFree(reassembled);
/* Make sure all frags were returned back to the pool. */
if (defrag_context->frag_pool->outstanding != 0) {
goto end;
}
ret = 1;
end:
for (i = 0; i < 17; i++) {
SCFree(packets[i]);
}
DefragDestroy();
return ret;
}
|
C
|
suricata
| 1 |
null | null | null |
https://github.com/chromium/chromium/commit/a03d4448faf2c40f4ef444a88cb9aace5b98e8c4
|
a03d4448faf2c40f4ef444a88cb9aace5b98e8c4
|
Introduce background.scripts feature for extension manifests.
This optimizes for the common use case where background pages
just include a reference to one or more script files and no
additional HTML.
BUG=107791
Review URL: http://codereview.chromium.org/9150008
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@117110 0039d316-1c4b-4281-b951-d872f2087c98
|
void TestingAutomationProvider::GetInstantInfo(Browser* browser,
DictionaryValue* args,
IPC::Message* reply_message) {
DictionaryValue* info = new DictionaryValue;
if (browser->instant()) {
InstantController* instant = browser->instant();
info->SetBoolean("enabled", true);
info->SetBoolean("active", (instant->GetPreviewContents() != NULL));
info->SetBoolean("current", instant->IsCurrent());
if (instant->GetPreviewContents() &&
instant->GetPreviewContents()->web_contents()) {
WebContents* contents = instant->GetPreviewContents()->web_contents();
info->SetBoolean("loading", contents->IsLoading());
info->SetString("location", contents->GetURL().spec());
info->SetString("title", contents->GetTitle());
}
} else {
info->SetBoolean("enabled", false);
}
scoped_ptr<DictionaryValue> return_value(new DictionaryValue);
return_value->Set("instant", info);
AutomationJSONReply(this, reply_message).SendSuccess(return_value.get());
}
|
void TestingAutomationProvider::GetInstantInfo(Browser* browser,
DictionaryValue* args,
IPC::Message* reply_message) {
DictionaryValue* info = new DictionaryValue;
if (browser->instant()) {
InstantController* instant = browser->instant();
info->SetBoolean("enabled", true);
info->SetBoolean("active", (instant->GetPreviewContents() != NULL));
info->SetBoolean("current", instant->IsCurrent());
if (instant->GetPreviewContents() &&
instant->GetPreviewContents()->web_contents()) {
WebContents* contents = instant->GetPreviewContents()->web_contents();
info->SetBoolean("loading", contents->IsLoading());
info->SetString("location", contents->GetURL().spec());
info->SetString("title", contents->GetTitle());
}
} else {
info->SetBoolean("enabled", false);
}
scoped_ptr<DictionaryValue> return_value(new DictionaryValue);
return_value->Set("instant", info);
AutomationJSONReply(this, reply_message).SendSuccess(return_value.get());
}
|
C
|
Chrome
| 0 |
CVE-2017-5093
|
https://www.cvedetails.com/cve/CVE-2017-5093/
|
CWE-20
|
https://github.com/chromium/chromium/commit/0720b02e4f303ea6b114d4ae9453e3a7ff55f8dc
|
0720b02e4f303ea6b114d4ae9453e3a7ff55f8dc
|
If JavaScript shows a dialog, cause the page to lose fullscreen.
BUG=670135, 550017, 726761, 728276
Review-Url: https://codereview.chromium.org/2906133004
Cr-Commit-Position: refs/heads/master@{#478884}
|
void NotifyCacheOnIO(
scoped_refptr<net::URLRequestContextGetter> request_context,
const GURL& url,
const std::string& http_method) {
net::HttpCache* cache = request_context->GetURLRequestContext()->
http_transaction_factory()->GetCache();
if (cache)
cache->OnExternalCacheHit(url, http_method);
}
|
void NotifyCacheOnIO(
scoped_refptr<net::URLRequestContextGetter> request_context,
const GURL& url,
const std::string& http_method) {
net::HttpCache* cache = request_context->GetURLRequestContext()->
http_transaction_factory()->GetCache();
if (cache)
cache->OnExternalCacheHit(url, http_method);
}
|
C
|
Chrome
| 0 |
CVE-2015-5706
|
https://www.cvedetails.com/cve/CVE-2015-5706/
| null |
https://github.com/torvalds/linux/commit/f15133df088ecadd141ea1907f2c96df67c729f0
|
f15133df088ecadd141ea1907f2c96df67c729f0
|
path_openat(): fix double fput()
path_openat() jumps to the wrong place after do_tmpfile() - it has
already done path_cleanup() (as part of path_lookupat() called by
do_tmpfile()), so doing that again can lead to double fput().
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
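A self-contained illustration of the unwind-label mistake described above, with a hypothetical structure rather than the real fs/namei.c code: once a helper has already released a resource, the error path must jump past the caller's own release label.

#include <stdio.h>

struct res { int released; };

static void put_res(struct res *r)
{
    if (r->released)
        fprintf(stderr, "BUG: double release\n");
    r->released = 1;
}

/* Helper that, like the lookup path described above, already cleans up
 * after itself when it fails. */
static int helper(struct res *r, int fail)
{
    if (fail) {
        put_res(r);
        return -1;
    }
    return 0;
}

static int caller(int fail_in_helper, int fail_later)
{
    struct res r = { 0 };
    int err;

    err = helper(&r, fail_in_helper);
    if (err)
        goto out;        /* helper already released r: must skip out_put */

    if (fail_later) {
        err = -1;
        goto out_put;    /* this path still owns r */
    }
    printf("caller: success\n");
out_put:
    put_res(&r);
out:
    return err;
}

int main(void)
{
    caller(1, 0);   /* failure inside the helper: released exactly once */
    caller(0, 1);   /* later failure: released exactly once at out_put */
    return 0;
}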
|
int vfs_unlink(struct inode *dir, struct dentry *dentry, struct inode **delegated_inode)
{
struct inode *target = dentry->d_inode;
int error = may_delete(dir, dentry, 0);
if (error)
return error;
if (!dir->i_op->unlink)
return -EPERM;
mutex_lock(&target->i_mutex);
if (is_local_mountpoint(dentry))
error = -EBUSY;
else {
error = security_inode_unlink(dir, dentry);
if (!error) {
error = try_break_deleg(target, delegated_inode);
if (error)
goto out;
error = dir->i_op->unlink(dir, dentry);
if (!error) {
dont_mount(dentry);
detach_mounts(dentry);
}
}
}
out:
mutex_unlock(&target->i_mutex);
/* We don't d_delete() NFS sillyrenamed files--they still exist. */
if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) {
fsnotify_link_count(target);
d_delete(dentry);
}
return error;
}
|
int vfs_unlink(struct inode *dir, struct dentry *dentry, struct inode **delegated_inode)
{
struct inode *target = dentry->d_inode;
int error = may_delete(dir, dentry, 0);
if (error)
return error;
if (!dir->i_op->unlink)
return -EPERM;
mutex_lock(&target->i_mutex);
if (is_local_mountpoint(dentry))
error = -EBUSY;
else {
error = security_inode_unlink(dir, dentry);
if (!error) {
error = try_break_deleg(target, delegated_inode);
if (error)
goto out;
error = dir->i_op->unlink(dir, dentry);
if (!error) {
dont_mount(dentry);
detach_mounts(dentry);
}
}
}
out:
mutex_unlock(&target->i_mutex);
/* We don't d_delete() NFS sillyrenamed files--they still exist. */
if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) {
fsnotify_link_count(target);
d_delete(dentry);
}
return error;
}
|
C
|
linux
| 0 |
CVE-2018-16080
|
https://www.cvedetails.com/cve/CVE-2018-16080/
|
CWE-20
|
https://github.com/chromium/chromium/commit/c552cd7b8a0862f6b3c8c6a07f98bda3721101eb
|
c552cd7b8a0862f6b3c8c6a07f98bda3721101eb
|
Mac: turn popups into new tabs while in fullscreen.
It's platform convention to show popups as new tabs while in
non-HTML5 fullscreen. (Popups cause tabs to lose HTML5 fullscreen.)
This was implemented for Cocoa in a BrowserWindow override, but
it makes sense to just stick it into Browser and remove a ton
of override code put in just to support this.
BUG=858929, 868416
TEST=as in bugs
Change-Id: I43471f242813ec1159d9c690bab73dab3e610b7d
Reviewed-on: https://chromium-review.googlesource.com/1153455
Reviewed-by: Sidney San Martín <sdy@chromium.org>
Commit-Queue: Avi Drissman <avi@chromium.org>
Cr-Commit-Position: refs/heads/master@{#578755}
|
void BrowserView::WillCloseAllTabs(TabStripModel* tab_strip_model) {
web_contents_close_handler_->WillCloseAllTabs();
}
|
void BrowserView::WillCloseAllTabs(TabStripModel* tab_strip_model) {
web_contents_close_handler_->WillCloseAllTabs();
}
|
C
|
Chrome
| 0 |
CVE-2017-8933
|
https://www.cvedetails.com/cve/CVE-2017-8933/
|
CWE-20
|
https://git.lxde.org/gitweb/?p=lxde/menu-cache.git;a=commit;h=56f66684592abf257c4004e6e1fff041c64a12ce
|
56f66684592abf257c4004e6e1fff041c64a12ce
| null |
const char* menu_cache_item_get_file_dirname( MenuCacheItem* item )
{
return item->file_dir ? item->file_dir->dir + 1 : NULL;
}
|
const char* menu_cache_item_get_file_dirname( MenuCacheItem* item )
{
return item->file_dir ? item->file_dir->dir + 1 : NULL;
}
|
C
|
lxde
| 0 |
CVE-2013-7281
|
https://www.cvedetails.com/cve/CVE-2013-7281/
|
CWE-200
|
https://github.com/torvalds/linux/commit/bceaa90240b6019ed73b49965eac7d167610be69
|
bceaa90240b6019ed73b49965eac7d167610be69
|
inet: prevent leakage of uninitialized memory to user in recv syscalls
Only update *addr_len when we actually fill in sockaddr, otherwise we
can return uninitialized memory from the stack to the caller in the
recvfrom, recvmmsg and recvmsg syscalls. Drop the (addr_len == NULL)
checks because we only get called with a valid addr_len pointer either
from sock_common_recvmsg or inet_recvmsg.
If a blocking read waits on a socket which is concurrently shut down we
now return zero and set msg_msgnamelen to 0.
Reported-by: mpb <mpb.mail@gmail.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
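A hedged sketch of the pattern, with hypothetical names: the caller-visible length is written only on the path that actually fills the address buffer, so an early return can never pair uninitialized stack bytes with a non-zero reported length.

#include <stdio.h>
#include <string.h>

struct fake_sockaddr { unsigned char data[16]; };

static int fake_recv(struct fake_sockaddr *addr, int *addr_len, int have_peer)
{
    if (have_peer) {
        memset(addr->data, 0xAB, sizeof(addr->data)); /* stands in for filling the peer address */
        *addr_len = (int)sizeof(*addr);               /* report a length only here */
        return (int)sizeof(*addr);
    }
    /* No peer: *addr_len is left untouched instead of being set to the
     * size of a buffer that was never written. */
    return 0;
}

int main(void)
{
    struct fake_sockaddr a;
    int len = 0;
    fake_recv(&a, &len, 0);
    printf("reported length with no peer: %d\n", len);
    return 0;
}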
|
static int raw6_seq_show(struct seq_file *seq, void *v)
{
if (v == SEQ_START_TOKEN) {
seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
} else {
struct sock *sp = v;
__u16 srcp = inet_sk(sp)->inet_num;
ip6_dgram_sock_seq_show(seq, v, srcp, 0,
raw_seq_private(seq)->bucket);
}
return 0;
}
|
static int raw6_seq_show(struct seq_file *seq, void *v)
{
if (v == SEQ_START_TOKEN) {
seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
} else {
struct sock *sp = v;
__u16 srcp = inet_sk(sp)->inet_num;
ip6_dgram_sock_seq_show(seq, v, srcp, 0,
raw_seq_private(seq)->bucket);
}
return 0;
}
|
C
|
linux
| 0 |
CVE-2019-5827
|
https://www.cvedetails.com/cve/CVE-2019-5827/
|
CWE-190
|
https://github.com/chromium/chromium/commit/517ac71c9ee27f856f9becde8abea7d1604af9d4
|
517ac71c9ee27f856f9becde8abea7d1604af9d4
|
sqlite: backport bugfixes for dbfuzz2
Bug: 952406
Change-Id: Icbec429742048d6674828726c96d8e265c41b595
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1568152
Reviewed-by: Chris Mumford <cmumford@google.com>
Commit-Queue: Darwin Huang <huangdarwin@chromium.org>
Cr-Commit-Position: refs/heads/master@{#651030}
|
static int afpCheckReservedLock(sqlite3_file *id, int *pResOut){
int rc = SQLITE_OK;
int reserved = 0;
unixFile *pFile = (unixFile*)id;
afpLockingContext *context;
SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; );
assert( pFile );
context = (afpLockingContext *) pFile->lockingContext;
if( context->reserved ){
*pResOut = 1;
return SQLITE_OK;
}
sqlite3_mutex_enter(pFile->pInode->pLockMutex);
/* Check if a thread in this process holds such a lock */
if( pFile->pInode->eFileLock>SHARED_LOCK ){
reserved = 1;
}
/* Otherwise see if some other process holds it.
*/
if( !reserved ){
/* lock the RESERVED byte */
int lrc = afpSetLock(context->dbPath, pFile, RESERVED_BYTE, 1,1);
if( SQLITE_OK==lrc ){
/* if we succeeded in taking the reserved lock, unlock it to restore
** the original state */
lrc = afpSetLock(context->dbPath, pFile, RESERVED_BYTE, 1, 0);
} else {
/* if we failed to get the lock then someone else must have it */
reserved = 1;
}
if( IS_LOCK_ERROR(lrc) ){
rc=lrc;
}
}
sqlite3_mutex_leave(pFile->pInode->pLockMutex);
OSTRACE(("TEST WR-LOCK %d %d %d (afp)\n", pFile->h, rc, reserved));
*pResOut = reserved;
return rc;
}
|
static int afpCheckReservedLock(sqlite3_file *id, int *pResOut){
int rc = SQLITE_OK;
int reserved = 0;
unixFile *pFile = (unixFile*)id;
afpLockingContext *context;
SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; );
assert( pFile );
context = (afpLockingContext *) pFile->lockingContext;
if( context->reserved ){
*pResOut = 1;
return SQLITE_OK;
}
sqlite3_mutex_enter(pFile->pInode->pLockMutex);
/* Check if a thread in this process holds such a lock */
if( pFile->pInode->eFileLock>SHARED_LOCK ){
reserved = 1;
}
/* Otherwise see if some other process holds it.
*/
if( !reserved ){
/* lock the RESERVED byte */
int lrc = afpSetLock(context->dbPath, pFile, RESERVED_BYTE, 1,1);
if( SQLITE_OK==lrc ){
/* if we succeeded in taking the reserved lock, unlock it to restore
** the original state */
lrc = afpSetLock(context->dbPath, pFile, RESERVED_BYTE, 1, 0);
} else {
/* if we failed to get the lock then someone else must have it */
reserved = 1;
}
if( IS_LOCK_ERROR(lrc) ){
rc=lrc;
}
}
sqlite3_mutex_leave(pFile->pInode->pLockMutex);
OSTRACE(("TEST WR-LOCK %d %d %d (afp)\n", pFile->h, rc, reserved));
*pResOut = reserved;
return rc;
}
|
C
|
Chrome
| 0 |
CVE-2016-0846
|
https://www.cvedetails.com/cve/CVE-2016-0846/
|
CWE-264
|
https://android.googlesource.com/platform/frameworks/native/+/f3199c228aced7858b75a8070b8358c155ae0149
|
f3199c228aced7858b75a8070b8358c155ae0149
|
Sanity check IMemory access versus underlying mmap
Bug 26877992
Change-Id: Ibbf4b1061e4675e4e96bc944a865b53eaf6984fe
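Illustrative only, with hypothetical names rather than the Binder IMemory code: an (offset, size) pair received from another process is checked against the size of the underlying mapping, with the arithmetic arranged so it cannot wrap, before any pointer into the mapping is handed out.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool range_fits_mapping(size_t offset, size_t size, size_t map_size)
{
    if (offset > map_size)
        return false;
    if (size > map_size - offset)   /* avoids offset + size overflow */
        return false;
    return true;
}

int main(void)
{
    printf("%d\n", range_fits_mapping(4096, 16, 8192));         /* accepted */
    printf("%d\n", range_fits_mapping(8192, (size_t)-1, 8192)); /* rejected */
    return 0;
}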
|
sp<IMemoryHeap> HeapCache::get_heap(const sp<IBinder>& binder)
{
sp<IMemoryHeap> realHeap;
Mutex::Autolock _l(mHeapCacheLock);
ssize_t i = mHeapCache.indexOfKey(binder);
if (i>=0) realHeap = mHeapCache.valueAt(i).heap;
else realHeap = interface_cast<IMemoryHeap>(binder);
return realHeap;
}
|
sp<IMemoryHeap> HeapCache::get_heap(const sp<IBinder>& binder)
{
sp<IMemoryHeap> realHeap;
Mutex::Autolock _l(mHeapCacheLock);
ssize_t i = mHeapCache.indexOfKey(binder);
if (i>=0) realHeap = mHeapCache.valueAt(i).heap;
else realHeap = interface_cast<IMemoryHeap>(binder);
return realHeap;
}
|
C
|
Android
| 0 |
CVE-2018-6096
|
https://www.cvedetails.com/cve/CVE-2018-6096/
| null |
https://github.com/chromium/chromium/commit/36f801fdbec07d116a6f4f07bb363f10897d6a51
|
36f801fdbec07d116a6f4f07bb363f10897d6a51
|
If a page calls |window.focus()|, kick it out of fullscreen.
BUG=776418, 800056
Change-Id: I1880fe600e4814c073f247c43b1c1ac80c8fc017
Reviewed-on: https://chromium-review.googlesource.com/852378
Reviewed-by: Nasko Oskov <nasko@chromium.org>
Reviewed-by: Philip Jägenstedt <foolip@chromium.org>
Commit-Queue: Avi Drissman <avi@chromium.org>
Cr-Commit-Position: refs/heads/master@{#533790}
|
void ChromeClientImpl::StartDragging(LocalFrame* frame,
const WebDragData& drag_data,
WebDragOperationsMask mask,
const WebImage& drag_image,
const WebPoint& drag_image_offset) {
WebLocalFrameImpl* web_frame = WebLocalFrameImpl::FromFrame(frame);
WebReferrerPolicy policy = web_frame->GetDocument().GetReferrerPolicy();
web_frame->LocalRoot()->FrameWidget()->StartDragging(
policy, drag_data, mask, drag_image, drag_image_offset);
}
|
void ChromeClientImpl::StartDragging(LocalFrame* frame,
const WebDragData& drag_data,
WebDragOperationsMask mask,
const WebImage& drag_image,
const WebPoint& drag_image_offset) {
WebLocalFrameImpl* web_frame = WebLocalFrameImpl::FromFrame(frame);
WebReferrerPolicy policy = web_frame->GetDocument().GetReferrerPolicy();
web_frame->LocalRoot()->FrameWidget()->StartDragging(
policy, drag_data, mask, drag_image, drag_image_offset);
}
|
C
|
Chrome
| 0 |
CVE-2013-6663
|
https://www.cvedetails.com/cve/CVE-2013-6663/
|
CWE-399
|
https://github.com/chromium/chromium/commit/fb5dce12f0462056fc9f66967b0f7b2b7bcd88f5
|
fb5dce12f0462056fc9f66967b0f7b2b7bcd88f5
|
One polymer_config.js to rule them all.
R=michaelpg@chromium.org,fukino@chromium.org,mfoltz@chromium.org
BUG=425626
Review URL: https://codereview.chromium.org/1224783005
Cr-Commit-Position: refs/heads/master@{#337882}
|
HostPairingScreenActor* OobeUI::GetHostPairingScreenActor() {
return host_pairing_screen_actor_;
}
|
HostPairingScreenActor* OobeUI::GetHostPairingScreenActor() {
return host_pairing_screen_actor_;
}
|
C
|
Chrome
| 0 |
CVE-2012-2896
|
https://www.cvedetails.com/cve/CVE-2012-2896/
|
CWE-189
|
https://github.com/chromium/chromium/commit/3aad1a37affb1ab70d1897f2b03eb8c077264984
|
3aad1a37affb1ab70d1897f2b03eb8c077264984
|
Fix SafeAdd and SafeMultiply
BUG=145648,145544
Review URL: https://chromiumcodereview.appspot.com/10916165
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@155478 0039d316-1c4b-4281-b951-d872f2087c98
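The message itself does not spell out the change, but overflow-checked add and multiply helpers of this kind generally test the operation before performing it. A generic illustration, not Chromium's actual SafeAdd/SafeMultiply implementation:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Overflow-checked helpers for unsigned 32-bit values. */
static bool safe_add_u32(uint32_t a, uint32_t b, uint32_t *out)
{
    if (b > UINT32_MAX - a)
        return false;          /* a + b would wrap */
    *out = a + b;
    return true;
}

static bool safe_mul_u32(uint32_t a, uint32_t b, uint32_t *out)
{
    if (a != 0 && b > UINT32_MAX / a)
        return false;          /* a * b would wrap */
    *out = a * b;
    return true;
}

int main(void)
{
    uint32_t r;
    printf("add ok: %d\n", safe_add_u32(UINT32_MAX, 1, &r));
    printf("mul ok: %d\n", safe_mul_u32(1u << 16, 1u << 16, &r));
    return 0;
}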
|
void GLES2DecoderImpl::DoTexParameterfv(
GLenum target, GLenum pname, const GLfloat* params) {
TextureManager::TextureInfo* info = GetTextureInfoForTarget(target);
if (!info) {
SetGLError(GL_INVALID_VALUE, "glTexParameterfv", "unknown texture");
return;
}
if (!texture_manager()->SetParameter(
info, pname, static_cast<GLint>(params[0]))) {
SetGLErrorInvalidEnum("glTexParameterfv", pname, "pname");
return;
}
glTexParameterfv(target, pname, params);
}
|
void GLES2DecoderImpl::DoTexParameterfv(
GLenum target, GLenum pname, const GLfloat* params) {
TextureManager::TextureInfo* info = GetTextureInfoForTarget(target);
if (!info) {
SetGLError(GL_INVALID_VALUE, "glTexParameterfv", "unknown texture");
return;
}
if (!texture_manager()->SetParameter(
info, pname, static_cast<GLint>(params[0]))) {
SetGLErrorInvalidEnum("glTexParameterfv", pname, "pname");
return;
}
glTexParameterfv(target, pname, params);
}
|
C
|
Chrome
| 0 |
CVE-2011-1476
|
https://www.cvedetails.com/cve/CVE-2011-1476/
|
CWE-189
|
https://github.com/torvalds/linux/commit/b769f49463711205d57286e64cf535ed4daf59e9
|
b769f49463711205d57286e64cf535ed4daf59e9
|
sound/oss: remove offset from load_patch callbacks
Was: [PATCH] sound/oss/midi_synth: prevent underflow, use of
uninitialized value, and signedness issue
The offset passed to midi_synth_load_patch() can be essentially
arbitrary. If it's greater than the header length, this will result in
a copy_from_user(dst, src, negative_val). While this will just return
-EFAULT on x86, on other architectures this may cause memory corruption.
Additionally, the length field of the sysex_info structure may not be
initialized prior to its use. Finally, a signed comparison may result
in an unintentionally large loop.
On suggestion by Takashi Iwai, version two removes the offset argument
from the load_patch callbacks entirely, which also resolves similar
issues in opl3. Compile tested only.
v3 adjusts comments and hopefully gets copy offsets right.
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
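An illustrative guard only, since the real fix simply removed the offset parameter: a caller-controlled offset has to be range-checked before it is subtracted from a length, otherwise the subtraction wraps into a huge copy size. Names are hypothetical.

#include <stddef.h>
#include <stdio.h>

#define HDR_SIZE 16

static int load_patch_sketch(size_t count, size_t offset)
{
    if (offset > count || offset > HDR_SIZE) {
        fprintf(stderr, "rejecting offset %zu (count %zu)\n", offset, count);
        return -1;
    }
    size_t remaining = count - offset;   /* safe: cannot underflow now */
    printf("would copy %zu bytes\n", remaining);
    return 0;
}

int main(void)
{
    load_patch_sketch(8, 100);   /* rejected instead of copying (size_t)-92 bytes */
    load_patch_sketch(32, 8);    /* copies 24 bytes */
    return 0;
}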
|
static int seq_sync(void)
{
if (qlen && !seq_playing && !signal_pending(current))
seq_startplay();
if (qlen > 0)
interruptible_sleep_on_timeout(&seq_sleeper, HZ);
return qlen;
}
|
static int seq_sync(void)
{
if (qlen && !seq_playing && !signal_pending(current))
seq_startplay();
if (qlen > 0)
interruptible_sleep_on_timeout(&seq_sleeper, HZ);
return qlen;
}
|
C
|
linux
| 0 |
CVE-2016-5214
|
https://www.cvedetails.com/cve/CVE-2016-5214/
|
CWE-19
|
https://github.com/chromium/chromium/commit/fff73016a86f9a5990d080dc76058f8528a423f9
|
fff73016a86f9a5990d080dc76058f8528a423f9
|
Revert "Enable camera blob stream when needed"
This reverts commit 10f4b93635e12f9fa0cba1641a10938ca38ed448.
Reason for revert:
Findit (https://goo.gl/kROfz5) identified CL at revision 601492 as the
culprit for failures in the build cycles as shown on:
https://findit-for-me.appspot.com/waterfall/culprit?key=ag9zfmZpbmRpdC1mb3ItbWVyRAsSDVdmU3VzcGVjdGVkQ0wiMWNocm9taXVtLzEwZjRiOTM2MzVlMTJmOWZhMGNiYTE2NDFhMTA5MzhjYTM4ZWQ0NDgM
Sample Failed Build: https://ci.chromium.org/buildbot/chromium.memory/Linux%20ChromiumOS%20MSan%20Tests/9190
Sample Failed Step: capture_unittests
Original change's description:
> Enable camera blob stream when needed
>
> Since blob stream needs higher resolution, it causes higher cpu loading
> to require higher resolution and resize to smaller resolution.
> In hangout app, we don't need blob stream. Enabling blob stream when
> needed can save a lot of cpu usage.
>
> BUG=b:114676133
> TEST=manually test in apprtc and CCA. make sure picture taking still
> works in CCA.
>
> Change-Id: I9144461bc76627903d0b3b359ce9cf962ff3628c
> Reviewed-on: https://chromium-review.googlesource.com/c/1261242
> Commit-Queue: Heng-ruey Hsu <henryhsu@chromium.org>
> Reviewed-by: Ricky Liang <jcliang@chromium.org>
> Reviewed-by: Xiaohan Wang <xhwang@chromium.org>
> Reviewed-by: Robert Sesek <rsesek@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#601492}
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
BUG=b:114676133
Change-Id: If173ffe9259f7eca849b184806bd56e2a9fbaac4
Reviewed-on: https://chromium-review.googlesource.com/c/1292256
Cr-Commit-Position: refs/heads/master@{#601538}
|
FromMojom(media::mojom::VideoCaptureBufferType input,
media::VideoCaptureBufferType* output) {
switch (input) {
case media::mojom::VideoCaptureBufferType::kSharedMemory:
*output = media::VideoCaptureBufferType::kSharedMemory;
return true;
case media::mojom::VideoCaptureBufferType::
kSharedMemoryViaRawFileDescriptor:
*output =
media::VideoCaptureBufferType::kSharedMemoryViaRawFileDescriptor;
return true;
case media::mojom::VideoCaptureBufferType::kMailboxHolder:
*output = media::VideoCaptureBufferType::kMailboxHolder;
return true;
}
NOTREACHED();
return false;
}
|
FromMojom(media::mojom::VideoCaptureBufferType input,
media::VideoCaptureBufferType* output) {
switch (input) {
case media::mojom::VideoCaptureBufferType::kSharedMemory:
*output = media::VideoCaptureBufferType::kSharedMemory;
return true;
case media::mojom::VideoCaptureBufferType::
kSharedMemoryViaRawFileDescriptor:
*output =
media::VideoCaptureBufferType::kSharedMemoryViaRawFileDescriptor;
return true;
case media::mojom::VideoCaptureBufferType::kMailboxHolder:
*output = media::VideoCaptureBufferType::kMailboxHolder;
return true;
}
NOTREACHED();
return false;
}
|
C
|
Chrome
| 0 |
CVE-2008-7316
|
https://www.cvedetails.com/cve/CVE-2008-7316/
|
CWE-20
|
https://github.com/torvalds/linux/commit/124d3b7041f9a0ca7c43a6293e1cae4576c32fd5
|
124d3b7041f9a0ca7c43a6293e1cae4576c32fd5
|
fix writev regression: pan hanging unkillable and un-straceable
Frederik Himpe reported an unkillable and un-straceable pan process.
Zero length iovecs can go into an infinite loop in writev, because the
iovec iterator does not always advance over them.
The sequence required to trigger this is not trivial. I think it
requires that a zero-length iovec be followed by a non-zero-length iovec
which causes a pagefault in the atomic usercopy. This causes the writev
code to drop back into single-segment copy mode, which then tries to
copy the 0 bytes of the zero-length iovec; a zero length copy looks like
a failure though, so it loops.
Put a test into iov_iter_advance to catch zero-length iovecs. We could
just put the test in the fallback path, but I feel it is more robust to
skip over zero-length iovecs throughout the code (iovec iterator may be
used in filesystems too, so it should be robust).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
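A simplified sketch of the advance logic, not the kernel's iov_iter code: because the loop condition uses ">=", advancing the iterator also steps over zero-length segments, so the fallback single-segment copy is never handed a 0-byte segment it can spin on.

#include <stddef.h>
#include <stdio.h>

struct seg { const char *base; size_t len; };

struct seg_iter {
    const struct seg *segs;
    size_t nr;     /* number of segments */
    size_t idx;    /* current segment */
    size_t off;    /* offset inside the current segment */
};

static void seg_iter_advance(struct seg_iter *it, size_t bytes)
{
    it->off += bytes;
    /* Consume fully-used segments, and also step over any zero-length
     * segments so the iterator never parks on one. */
    while (it->idx < it->nr && it->off >= it->segs[it->idx].len) {
        it->off -= it->segs[it->idx].len;
        it->idx++;
    }
}

int main(void)
{
    const struct seg v[3] = { { "", 0 }, { "abcd", 4 }, { "efgh", 4 } };
    struct seg_iter it = { v, 3, 0, 0 };

    seg_iter_advance(&it, 0);   /* even a zero-byte advance skips the empty segment */
    printf("now at segment %zu, offset %zu\n", it.idx, it.off);
    return 0;
}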
|
int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
pgoff_t offset, gfp_t gfp_mask)
{
int ret = add_to_page_cache(page, mapping, offset, gfp_mask);
if (ret == 0)
lru_cache_add(page);
return ret;
}
|
int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
pgoff_t offset, gfp_t gfp_mask)
{
int ret = add_to_page_cache(page, mapping, offset, gfp_mask);
if (ret == 0)
lru_cache_add(page);
return ret;
}
|
C
|
linux
| 0 |
CVE-2016-5217
|
https://www.cvedetails.com/cve/CVE-2016-5217/
|
CWE-284
|
https://github.com/chromium/chromium/commit/0d68cbd77addd38909101f76847deea56de00524
|
0d68cbd77addd38909101f76847deea56de00524
|
Fix PIP window being blank after minimize/show
DesktopWindowTreeHostX11::SetVisible only made the call into
OnNativeWidgetVisibilityChanged when transitioning from shown
to minimized and not vice versa. This is because this change
https://chromium-review.googlesource.com/c/chromium/src/+/1437263
considered IsVisible to be true when minimized, which made
IsVisible always true in this case. This caused layers to be hidden
but never shown again.
This is a reland of:
https://chromium-review.googlesource.com/c/chromium/src/+/1580103
Bug: 949199
Change-Id: I2151cd09e537d8ce8781897f43a3b8e9cec75996
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1584617
Reviewed-by: Scott Violet <sky@chromium.org>
Commit-Queue: enne <enne@chromium.org>
Cr-Commit-Position: refs/heads/master@{#654280}
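A hypothetical sketch of the pattern behind this fix: remember the last visibility that was reported and notify whenever the new value differs, instead of deriving the change from a predicate that treats minimized windows as visible and therefore misses the minimized-to-shown transition. This is not the actual DesktopWindowTreeHostX11 code.

#include <stdbool.h>
#include <stdio.h>

struct window {
    bool reported_visible;   /* last state actually reported to observers */
};

static void on_visibility_changed(bool visible)
{
    printf("notify: window is now %s\n", visible ? "shown" : "hidden");
}

static void set_visible(struct window *w, bool visible)
{
    if (visible != w->reported_visible) {
        w->reported_visible = visible;
        on_visibility_changed(visible);   /* fires for both directions */
    }
}

int main(void)
{
    struct window w = { .reported_visible = true };
    set_visible(&w, false);   /* minimize: notify hidden */
    set_visible(&w, true);    /* show again: notify shown (the previously missed case) */
    return 0;
}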
|
ui::EventSource* DesktopWindowTreeHostX11::GetEventSource() {
return this;
}
|
ui::EventSource* DesktopWindowTreeHostX11::GetEventSource() {
return this;
}
|
C
|
Chrome
| 0 |