id (int64, 11–59.9k) | original (string, 33–150k chars) | modified (string, 37–150k chars)
---|---|---|
1,514 |
def plot_partial_dependence(estimator, X, features, feature_names=None,
target=None, response_method='auto', n_cols=3,
grid_resolution=100, percentiles=(0.05, 0.95),
method='auto', n_jobs=None, verbose=0, fig=None,
line_kw=None, contour_kw=None, ax=None):
"""Partial dependence plots.
The ``len(features)`` plots are arranged in a grid with ``n_cols``
columns. Two-way partial dependence plots are plotted as contour plots. The
deciles of the feature values will be shown with tick marks on the x-axes
for one-way plots, and on both axes for two-way plots.
.. note::
:func:`plot_partial_dependence` does not support using the same axes
with multiple calls. To plot the partial dependence for multiple
estimators, please pass the axes created by the first call to the
second call::
>>> from sklearn.inspection import plot_partial_dependence
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.linear_model import LinearRegression
>>> X, y = make_friedman1()
>>> est = LinearRegression().fit(X, y)
>>> disp1 = plot_partial_dependence(est, X) #doctest: +SKIP
>>> disp2 = plot_partial_dependence(est, X,
... ax=disp1.axes_) #doctest: +SKIP
Read more in the :ref:`User Guide <partial_dependence>`.
Parameters
----------
estimator : BaseEstimator
A fitted estimator object implementing :term:`predict`,
:term:`predict_proba`, or :term:`decision_function`.
Multioutput-multiclass classifiers are not supported.
X : {array-like or dataframe} of shape (n_samples, n_features)
The data to use to build the grid of values on which the dependence
will be evaluated. This is usually the training data.
features : list of {int, str, pair of int, pair of str}
The target features for which to create the PDPs.
If features[i] is an int or a string, a one-way PDP is created; if
features[i] is a tuple, a two-way PDP is created. Each tuple must be
of size 2.
If any entry is a string, then it must be in ``feature_names``.
feature_names : array-like of shape (n_features,), dtype=str, default=None
Name of each feature; feature_names[i] holds the name of the feature
with index i.
By default, the name of a feature corresponds to its numerical
index for a NumPy array and to its column name for a pandas dataframe.
target : int, optional (default=None)
- In a multiclass setting, specifies the class for which the PDPs
should be computed. Note that for binary classification, the
positive class (index 1) is always used.
- In a multioutput setting, specifies the task for which the PDPs
should be computed.
Ignored in binary classification or classical regression settings.
response_method : 'auto', 'predict_proba' or 'decision_function', \
optional (default='auto')
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. For regressors
this parameter is ignored and the response is always the output of
:term:`predict`. By default, :term:`predict_proba` is tried first
and we revert to :term:`decision_function` if it doesn't exist. If
``method`` is 'recursion', the response is always the output of
:term:`decision_function`.
n_cols : int, optional (default=3)
The maximum number of columns in the grid plot. Only active when `ax`
is a single axis or `None`.
grid_resolution : int, optional (default=100)
The number of equally spaced points on the axes of the plots, for each
target feature.
percentiles : tuple of float, optional (default=(0.05, 0.95))
The lower and upper percentile used to create the extreme values
for the PDP axes. Must be in [0, 1].
method : str, optional (default='auto')
The method to use to calculate the partial dependence predictions:
- 'recursion' is only supported for gradient boosting estimators (namely
:class:`GradientBoostingClassifier<sklearn.ensemble.GradientBoostingClassifier>`,
:class:`GradientBoostingRegressor<sklearn.ensemble.GradientBoostingRegressor>`,
:class:`HistGradientBoostingClassifier<sklearn.ensemble.HistGradientBoostingClassifier>`,
:class:`HistGradientBoostingRegressor<sklearn.ensemble.HistGradientBoostingRegressor>`)
but is more efficient in terms of speed.
With this method, ``X`` is optional and is only used to build the
grid, and the partial dependences are computed using the training
data. This method does not account for the ``init`` predictor of
the boosting process, which may lead to incorrect values (see the
warning below). With this method, the target response of a
classifier is always the decision function, not the predicted
probabilities.
- 'brute' is supported for any estimator, but is more
computationally intensive.
- 'auto':
- 'recursion' is used for estimators that support it.
- 'brute' is used for all other estimators.
n_jobs : int, optional (default=None)
The number of CPUs to use to compute the partial dependences.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
verbose : int, optional (default=0)
Verbose output during PD computations.
fig : Matplotlib figure object, optional (default=None)
A figure object onto which the plots will be drawn, after the figure
has been cleared. By default, a new one is created.
.. deprecated:: 0.22
``fig`` will be removed in 0.24.
line_kw : dict, optional
Dict with keywords passed to the ``matplotlib.pyplot.plot`` call.
For one-way partial dependence plots.
contour_kw : dict, optional
Dict with keywords passed to the ``matplotlib.pyplot.contourf`` call.
For two-way partial dependence plots.
ax : Matplotlib axes or array-like of Matplotlib axes, default=None
- If a single axis is passed in, it is treated as a bounding axes
and a grid of partial dependence plots will be drawn within
these bounds. The `n_cols` parameter controls the number of
columns in the grid.
- If an array-like of axes are passed in, the partial dependence
plots will be drawn directly into these axes.
- If `None`, a figure and a bounding axes is created and treated
as the single axes case.
.. versionadded:: 0.22
Returns
-------
display: :class:`~sklearn.inspection.PartialDependenceDisplay`
Examples
--------
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_friedman1()
>>> clf = GradientBoostingRegressor(n_estimators=10).fit(X, y)
>>> plot_partial_dependence(clf, X, [0, (0, 1)]) #doctest: +SKIP
See also
--------
sklearn.inspection.partial_dependence: Return raw partial
dependence values
Warnings
--------
The 'recursion' method only works for gradient boosting estimators, and
unlike the 'brute' method, it does not account for the ``init``
predictor of the boosting process. In practice this will produce the
same values as 'brute' up to a constant offset in the target response,
provided that ``init`` is a constant estimator (which is the default).
However, as soon as ``init`` is not a constant estimator, the partial
dependence values are incorrect for 'recursion'. This is not relevant for
:class:`HistGradientBoostingClassifier
<sklearn.ensemble.HistGradientBoostingClassifier>` and
:class:`HistGradientBoostingRegressor
<sklearn.ensemble.HistGradientBoostingRegressor>`, which do not have an
``init`` parameter.
"""
check_matplotlib_support('plot_partial_dependence') # noqa
import matplotlib.pyplot as plt # noqa
from matplotlib import transforms # noqa
from matplotlib.ticker import MaxNLocator # noqa
from matplotlib.ticker import ScalarFormatter # noqa
# set target_idx for multi-class estimators
if hasattr(estimator, 'classes_') and np.size(estimator.classes_) > 2:
if target is None:
raise ValueError('target must be specified for multi-class')
target_idx = np.searchsorted(estimator.classes_, target)
if (not (0 <= target_idx < len(estimator.classes_)) or
estimator.classes_[target_idx] != target):
raise ValueError('target not in est.classes_, got {}'.format(
target))
else:
# regression and binary classification
target_idx = 0
# Use check_array only on lists and other non-array-likes / sparse. Do not
# convert DataFrame into a NumPy array.
if not(hasattr(X, '__array__') or sparse.issparse(X)):
X = check_array(X, force_all_finite='allow-nan', dtype=np.object)
n_features = X.shape[1]
# convert feature_names to list
if feature_names is None:
if hasattr(X, "loc"):
# get the column names for a pandas dataframe
feature_names = X.columns.tolist()
else:
# define a list of numbered indices for a numpy array
feature_names = [str(i) for i in range(n_features)]
elif isinstance(feature_names, np.ndarray):
feature_names = feature_names.tolist()
if len(set(feature_names)) != len(feature_names):
raise ValueError('feature_names should not contain duplicates.')
def convert_feature(fx):
if isinstance(fx, str):
try:
fx = feature_names.index(fx)
except ValueError:
raise ValueError('Feature %s not in feature_names' % fx)
return int(fx)
# convert features into a seq of int tuples
tmp_features = []
for fxs in features:
if isinstance(fxs, (numbers.Integral, str)):
fxs = (fxs,)
try:
fxs = tuple(convert_feature(fx) for fx in fxs)
except TypeError:
raise ValueError('Each entry in features must be either an int, '
'a string, or an iterable of size at most 2.')
if not 1 <= np.size(fxs) <= 2:
raise ValueError('Each entry in features must be either an int, '
'a string, or an iterable of size at most 2.')
tmp_features.append(fxs)
features = tmp_features
if isinstance(ax, list):
if len(ax) != len(features):
raise ValueError("Expected len(ax) == len(features), "
"got len(ax) = {}".format(len(ax)))
for i in chain.from_iterable(features):
if i >= len(feature_names):
raise ValueError('All entries of features must be less than '
'len(feature_names) = {0}, got {1}.'
.format(len(feature_names), i))
# compute averaged predictions
pd_results = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(partial_dependence)(estimator, X, fxs,
response_method=response_method,
method=method,
grid_resolution=grid_resolution,
percentiles=percentiles)
for fxs in features)
# For multioutput regression, we can only check the validity of target
# now that we have the predictions.
# Also note: as multiclass-multioutput classifiers are not supported,
# multiclass and multioutput scenarios are mutually exclusive, so there is
# no risk of overwriting target_idx here.
avg_preds, _ = pd_results[0] # checking the first result is enough
if is_regressor(estimator) and avg_preds.shape[0] > 1:
if target is None:
raise ValueError(
'target must be specified for multi-output regressors')
if not 0 <= target <= avg_preds.shape[0]:
raise ValueError(
'target must be in [0, n_tasks], got {}.'.format(target))
target_idx = target
# get global min and max average predictions of PD grouped by plot type
pdp_lim = {}
for avg_preds, values in pd_results:
min_pd = avg_preds[target_idx].min()
max_pd = avg_preds[target_idx].max()
n_fx = len(values)
old_min_pd, old_max_pd = pdp_lim.get(n_fx, (min_pd, max_pd))
min_pd = min(min_pd, old_min_pd)
max_pd = max(max_pd, old_max_pd)
pdp_lim[n_fx] = (min_pd, max_pd)
deciles = {}
for fx in chain.from_iterable(features):
if fx not in deciles:
X_col = _safe_indexing(X, fx, axis=1)
deciles[fx] = mquantiles(X_col, prob=np.arange(0.1, 1.0, 0.1))
if fig is not None:
warnings.warn("The fig parameter is deprecated in version "
"0.22 and will be removed in version 0.24",
FutureWarning)
fig.clear()
ax = fig.gca()
display = PartialDependenceDisplay(pd_results, features, feature_names,
target_idx, pdp_lim, deciles)
return display.plot(ax=ax, n_cols=n_cols, line_kw=line_kw,
contour_kw=contour_kw)
|
def plot_partial_dependence(estimator, X, features, feature_names=None,
target=None, response_method='auto', n_cols=3,
grid_resolution=100, percentiles=(0.05, 0.95),
method='auto', n_jobs=None, verbose=0, fig=None,
line_kw=None, contour_kw=None, ax=None):
"""Partial dependence plots.
The ``len(features)`` plots are arranged in a grid with ``n_cols``
columns. Two-way partial dependence plots are plotted as contour plots. The
deciles of the feature values will be shown with tick marks on the x-axes
for one-way plots, and on both axes for two-way plots.
.. note::
:func:`plot_partial_dependence` does not support using the same axes
with multiple calls. To plot the partial dependence for multiple
estimators, please pass the axes created by the first call to the
second call::
>>> from sklearn.inspection import plot_partial_dependence
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.linear_model import LinearRegression
>>> X, y = make_friedman1()
>>> est = LinearRegression().fit(X, y)
>>> disp1 = plot_partial_dependence(est, X) # doctest: +SKIP
>>> disp2 = plot_partial_dependence(est, X,
... ax=disp1.axes_) #doctest: +SKIP
Read more in the :ref:`User Guide <partial_dependence>`.
Parameters
----------
estimator : BaseEstimator
A fitted estimator object implementing :term:`predict`,
:term:`predict_proba`, or :term:`decision_function`.
Multioutput-multiclass classifiers are not supported.
X : {array-like or dataframe} of shape (n_samples, n_features)
The data to use to build the grid of values on which the dependence
will be evaluated. This is usually the training data.
features : list of {int, str, pair of int, pair of str}
The target features for which to create the PDPs.
If features[i] is an int or a string, a one-way PDP is created; if
features[i] is a tuple, a two-way PDP is created. Each tuple must be
of size 2.
If any entry is a string, then it must be in ``feature_names``.
feature_names : array-like of shape (n_features,), dtype=str, default=None
Name of each feature; feature_names[i] holds the name of the feature
with index i.
By default, the name of a feature corresponds to its numerical
index for a NumPy array and to its column name for a pandas dataframe.
target : int, optional (default=None)
- In a multiclass setting, specifies the class for which the PDPs
should be computed. Note that for binary classification, the
positive class (index 1) is always used.
- In a multioutput setting, specifies the task for which the PDPs
should be computed.
Ignored in binary classification or classical regression settings.
response_method : 'auto', 'predict_proba' or 'decision_function', \
optional (default='auto')
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. For regressors
this parameter is ignored and the response is always the output of
:term:`predict`. By default, :term:`predict_proba` is tried first
and we revert to :term:`decision_function` if it doesn't exist. If
``method`` is 'recursion', the response is always the output of
:term:`decision_function`.
n_cols : int, optional (default=3)
The maximum number of columns in the grid plot. Only active when `ax`
is a single axis or `None`.
grid_resolution : int, optional (default=100)
The number of equally spaced points on the axes of the plots, for each
target feature.
percentiles : tuple of float, optional (default=(0.05, 0.95))
The lower and upper percentile used to create the extreme values
for the PDP axes. Must be in [0, 1].
method : str, optional (default='auto')
The method to use to calculate the partial dependence predictions:
- 'recursion' is only supported for gradient boosting estimators (namely
:class:`GradientBoostingClassifier<sklearn.ensemble.GradientBoostingClassifier>`,
:class:`GradientBoostingRegressor<sklearn.ensemble.GradientBoostingRegressor>`,
:class:`HistGradientBoostingClassifier<sklearn.ensemble.HistGradientBoostingClassifier>`,
:class:`HistGradientBoostingRegressor<sklearn.ensemble.HistGradientBoostingRegressor>`)
but is more efficient in terms of speed.
With this method, ``X`` is optional and is only used to build the
grid, and the partial dependences are computed using the training
data. This method does not account for the ``init`` predictor of
the boosting process, which may lead to incorrect values (see the
warning below). With this method, the target response of a
classifier is always the decision function, not the predicted
probabilities.
- 'brute' is supported for any estimator, but is more
computationally intensive.
- 'auto':
- 'recursion' is used for estimators that support it.
- 'brute' is used for all other estimators.
n_jobs : int, optional (default=None)
The number of CPUs to use to compute the partial dependences.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
verbose : int, optional (default=0)
Verbose output during PD computations.
fig : Matplotlib figure object, optional (default=None)
A figure object onto which the plots will be drawn, after the figure
has been cleared. By default, a new one is created.
.. deprecated:: 0.22
``fig`` will be removed in 0.24.
line_kw : dict, optional
Dict with keywords passed to the ``matplotlib.pyplot.plot`` call.
For one-way partial dependence plots.
contour_kw : dict, optional
Dict with keywords passed to the ``matplotlib.pyplot.contourf`` call.
For two-way partial dependence plots.
ax : Matplotlib axes or array-like of Matplotlib axes, default=None
- If a single axis is passed in, it is treated as a bounding axes
and a grid of partial dependence plots will be drawn within
these bounds. The `n_cols` parameter controls the number of
columns in the grid.
- If an array-like of axes are passed in, the partial dependence
plots will be drawn directly into these axes.
- If `None`, a figure and a bounding axes is created and treated
as the single axes case.
.. versionadded:: 0.22
Returns
-------
display: :class:`~sklearn.inspection.PartialDependenceDisplay`
Examples
--------
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_friedman1()
>>> clf = GradientBoostingRegressor(n_estimators=10).fit(X, y)
>>> plot_partial_dependence(clf, X, [0, (0, 1)]) #doctest: +SKIP
See also
--------
sklearn.inspection.partial_dependence: Return raw partial
dependence values
Warnings
--------
The 'recursion' method only works for gradient boosting estimators, and
unlike the 'brute' method, it does not account for the ``init``
predictor of the boosting process. In practice this will produce the
same values as 'brute' up to a constant offset in the target response,
provided that ``init`` is a constant estimator (which is the default).
However, as soon as ``init`` is not a constant estimator, the partial
dependence values are incorrect for 'recursion'. This is not relevant for
:class:`HistGradientBoostingClassifier
<sklearn.ensemble.HistGradientBoostingClassifier>` and
:class:`HistGradientBoostingRegressor
<sklearn.ensemble.HistGradientBoostingRegressor>`, which do not have an
``init`` parameter.
"""
check_matplotlib_support('plot_partial_dependence') # noqa
import matplotlib.pyplot as plt # noqa
from matplotlib import transforms # noqa
from matplotlib.ticker import MaxNLocator # noqa
from matplotlib.ticker import ScalarFormatter # noqa
# set target_idx for multi-class estimators
if hasattr(estimator, 'classes_') and np.size(estimator.classes_) > 2:
if target is None:
raise ValueError('target must be specified for multi-class')
target_idx = np.searchsorted(estimator.classes_, target)
if (not (0 <= target_idx < len(estimator.classes_)) or
estimator.classes_[target_idx] != target):
raise ValueError('target not in est.classes_, got {}'.format(
target))
else:
# regression and binary classification
target_idx = 0
# Use check_array only on lists and other non-array-likes / sparse. Do not
# convert DataFrame into a NumPy array.
if not(hasattr(X, '__array__') or sparse.issparse(X)):
X = check_array(X, force_all_finite='allow-nan', dtype=np.object)
n_features = X.shape[1]
# convert feature_names to list
if feature_names is None:
if hasattr(X, "loc"):
# get the column names for a pandas dataframe
feature_names = X.columns.tolist()
else:
# define a list of numbered indices for a numpy array
feature_names = [str(i) for i in range(n_features)]
elif isinstance(feature_names, np.ndarray):
feature_names = feature_names.tolist()
if len(set(feature_names)) != len(feature_names):
raise ValueError('feature_names should not contain duplicates.')
def convert_feature(fx):
if isinstance(fx, str):
try:
fx = feature_names.index(fx)
except ValueError:
raise ValueError('Feature %s not in feature_names' % fx)
return int(fx)
# convert features into a seq of int tuples
tmp_features = []
for fxs in features:
if isinstance(fxs, (numbers.Integral, str)):
fxs = (fxs,)
try:
fxs = tuple(convert_feature(fx) for fx in fxs)
except TypeError:
raise ValueError('Each entry in features must be either an int, '
'a string, or an iterable of size at most 2.')
if not 1 <= np.size(fxs) <= 2:
raise ValueError('Each entry in features must be either an int, '
'a string, or an iterable of size at most 2.')
tmp_features.append(fxs)
features = tmp_features
if isinstance(ax, list):
if len(ax) != len(features):
raise ValueError("Expected len(ax) == len(features), "
"got len(ax) = {}".format(len(ax)))
for i in chain.from_iterable(features):
if i >= len(feature_names):
raise ValueError('All entries of features must be less than '
'len(feature_names) = {0}, got {1}.'
.format(len(feature_names), i))
# compute averaged predictions
pd_results = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(partial_dependence)(estimator, X, fxs,
response_method=response_method,
method=method,
grid_resolution=grid_resolution,
percentiles=percentiles)
for fxs in features)
# For multioutput regression, we can only check the validity of target
# now that we have the predictions.
# Also note: as multiclass-multioutput classifiers are not supported,
# multiclass and multioutput scenarios are mutually exclusive, so there is
# no risk of overwriting target_idx here.
avg_preds, _ = pd_results[0] # checking the first result is enough
if is_regressor(estimator) and avg_preds.shape[0] > 1:
if target is None:
raise ValueError(
'target must be specified for multi-output regressors')
if not 0 <= target <= avg_preds.shape[0]:
raise ValueError(
'target must be in [0, n_tasks], got {}.'.format(target))
target_idx = target
# get global min and max average predictions of PD grouped by plot type
pdp_lim = {}
for avg_preds, values in pd_results:
min_pd = avg_preds[target_idx].min()
max_pd = avg_preds[target_idx].max()
n_fx = len(values)
old_min_pd, old_max_pd = pdp_lim.get(n_fx, (min_pd, max_pd))
min_pd = min(min_pd, old_min_pd)
max_pd = max(max_pd, old_max_pd)
pdp_lim[n_fx] = (min_pd, max_pd)
deciles = {}
for fx in chain.from_iterable(features):
if fx not in deciles:
X_col = _safe_indexing(X, fx, axis=1)
deciles[fx] = mquantiles(X_col, prob=np.arange(0.1, 1.0, 0.1))
if fig is not None:
warnings.warn("The fig parameter is deprecated in version "
"0.22 and will be removed in version 0.24",
FutureWarning)
fig.clear()
ax = fig.gca()
display = PartialDependenceDisplay(pd_results, features, feature_names,
target_idx, pdp_lim, deciles)
return display.plot(ax=ax, n_cols=n_cols, line_kw=line_kw,
contour_kw=contour_kw)
|
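A note on the class-lookup idiom shared by both versions above: estimator.classes_ is sorted, so np.searchsorted locates the candidate index, and the follow-up equality check rejects a target value that is not actually among the fitted classes. A minimal standalone sketch with hypothetical values:

import numpy as np

classes_ = np.array([0, 1, 2])   # assumed fitted classes (scikit-learn stores them sorted)
target = 2
target_idx = np.searchsorted(classes_, target)
# both conditions must hold, mirroring the validation in plot_partial_dependence
assert 0 <= target_idx < len(classes_) and classes_[target_idx] == target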
9,425 |
def set_facts_for_distribution_id_and_alias(details, facts, distribution_id, aliases):
facts[distribution_id].update(details)
# also provide a fixed key for accessing the returned results/details
facts['result'] = details
facts['result']['DistributionId'] = distribution_id
for alias in aliases:
facts[alias].update(details)
return facts
|
def set_facts_for_distribution_id_and_alias(details, facts, distribution_id, aliases):
facts[distribution_id].update(details)
# also provide a fixed key for accessing the returned results/details
facts['result'] = details
facts['result']['distribution_id'] = distribution_id
for alias in aliases:
facts[alias].update(details)
return facts
|
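The only change in this pair is the fixed result key, renamed from DistributionId to the snake_case distribution_id. A minimal sketch of the resulting facts structure, using hypothetical inputs:

facts = {'E123': {}, 'cdn.example.com': {}}
details = {'Status': 'Deployed'}
facts = set_facts_for_distribution_id_and_alias(details, facts, 'E123', ['cdn.example.com'])
# with the modified version, facts['result'] is {'Status': 'Deployed', 'distribution_id': 'E123'}
# (the original version would have used the key 'DistributionId' instead)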
40,534 |
def load_arguments(self, _):
from azure.cli.core.commands.parameters import tags_type
with self.argument_context('connectedk8s connect') as c:
c.argument('tags', tags_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx), validator=get_default_location_from_resource_group)
c.argument('cluster_name', options_list=['--name', '-n'], help='The name of the connected cluster.')
c.argument('kube_config', options_list=['--kube-config'], help='Path to the kube config file.')
c.argument('kube_context', options_list=['--kube-context'], help='Kubeconfig context from current machine.')
c.argument('https_proxy', options_list=['--proxy-https'], help='Https proxy URL to be used.')
c.argument('http_proxy', options_list=['--proxy-http'], help='Http proxy URL to be used.')
c.argument('no_proxy', options_list=['--proxy-skip-range'], help='List of URLs/CIDRs for which proxy should not be used.')
c.argument('proxy_cert', options_list=['--proxy-cert'], type=file_type, completer=FilesCompleter(), help='Path to the certificate file for proxy')
with self.argument_context('connectedk8s update') as c:
c.argument('cluster_name', options_list=['--name', '-n'], id_part='name', help='The name of the connected cluster.')
c.argument('kube_config', options_list=['--kube-config'], help='Path to the kube config file.')
c.argument('kube_context', options_list=['--kube-context'], help='Kubeconfig context from current machine.')
c.argument('https_proxy', options_list=['--proxy-https'], help='Https proxy URL to be used.')
c.argument('http_proxy', options_list=['--proxy-http'], help='Http proxy URL to be used.')
c.argument('no_proxy', options_list=['--proxy-skip-range'], help='List of URLs/CIDRs for which proxy should not be used.')
c.argument('proxy_cert', options_list=['--proxy-cert'], type=file_type, completer=FilesCompleter(), help='Path to the certificate file for proxy')
c.argument('disable_proxy', options_list=['--disable-proxy'], action='store_true', help='Disables applying proxy to agents')
with self.argument_context('connectedk8s list') as c:
pass
with self.argument_context('connectedk8s show') as c:
c.argument('cluster_name', options_list=['--name', '-n'], id_part='name', help='The name of the connected cluster.')
with self.argument_context('connectedk8s delete') as c:
c.argument('cluster_name', options_list=['--name', '-n'], id_part='name', help='The name of the connected cluster.')
c.argument('kube_config', options_list=['--kube-config'], help='Path to the kube config file.')
c.argument('kube_context', options_list=['--kube-context'], help='Kubeconfig context from current machine.')
|
def load_arguments(self, _):
from azure.cli.core.commands.parameters import tags_type
with self.argument_context('connectedk8s connect') as c:
c.argument('tags', tags_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx), validator=get_default_location_from_resource_group)
c.argument('cluster_name', options_list=['--name', '-n'], help='The name of the connected cluster.')
c.argument('kube_config', options_list=['--kube-config'], help='Path to the kube config file.')
c.argument('kube_context', options_list=['--kube-context'], help='Kubeconfig context from current machine.')
c.argument('https_proxy', options_list=['--proxy-https'], help='Https proxy URL to be used.')
c.argument('http_proxy', options_list=['--proxy-http'], help='Http proxy URL to be used.')
c.argument('no_proxy', options_list=['--proxy-skip-range'], help='List of URLs/CIDRs for which proxy should not be used.')
c.argument('proxy_cert', options_list=['--proxy-cert'], type=file_type, completer=FilesCompleter(), help='Path to the certificate file for proxy')
with self.argument_context('connectedk8s update') as c:
c.argument('cluster_name', options_list=['--name', '-n'], id_part='name', help='The name of the connected cluster.')
c.argument('kube_config', options_list=['--kube-config'], help='Path to the kube config file.')
c.argument('kube_context', options_list=['--kube-context'], help='Kubeconfig context from current machine.')
c.argument('https_proxy', options_list=['--proxy-https'], help='Https proxy URL to be used.')
c.argument('http_proxy', options_list=['--proxy-http'], help='Http proxy URL to be used.')
c.argument('no_proxy', options_list=['--proxy-skip-range'], help='List of URLs/CIDRs for which proxy should not be used.')
c.argument('proxy_cert', options_list=['--proxy-cert'], type=file_type, completer=FilesCompleter(), help='Path to the certificate file for proxy')
c.argument('disable_proxy', options_list=['--disable-proxy'], action='store_true', help='Disables proxy settings for agents')
with self.argument_context('connectedk8s list') as c:
pass
with self.argument_context('connectedk8s show') as c:
c.argument('cluster_name', options_list=['--name', '-n'], id_part='name', help='The name of the connected cluster.')
with self.argument_context('connectedk8s delete') as c:
c.argument('cluster_name', options_list=['--name', '-n'], id_part='name', help='The name of the connected cluster.')
c.argument('kube_config', options_list=['--kube-config'], help='Path to the kube config file.')
c.argument('kube_context', options_list=['--kube-context'], help='Kubeconfig context from current machine.')
|
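The behavioural difference between the two versions above is only the help text of the new --disable-proxy flag. As a rough plain-argparse analogy (not the azure.cli knack API used here), an action='store_true' flag defaults to False and flips to True when supplied:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--disable-proxy', action='store_true', help='Disables proxy settings for agents')
assert parser.parse_args([]).disable_proxy is False
assert parser.parse_args(['--disable-proxy']).disable_proxy is True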
20,561 |
def main(argv=None):
# Ensure that the "-list-labels" argument is always parsed last. That way, if `-f` is passed, then `-list-labels`
# will see the new location and look there. (https://github.com/spinalcordtoolbox/spinalcordtoolbox/issues/3634)
if "-list-labels" in argv:
argv.append(argv.pop(argv.index("-list-labels")))
parser = get_parser()
arguments = parser.parse_args(argv)
verbose = arguments.v
set_loglevel(verbose=verbose)
param_default = Param()
fname_data = get_absolute_path(arguments.i)
path_label = arguments.f
method = arguments.method
fname_output = arguments.o
append_csv = arguments.append
combine_labels = arguments.combine
labels_user = arguments.l
slices = parse_num_list(arguments.z)
levels = parse_num_list(arguments.vert)
fname_vertebral_labeling = arguments.vertfile
perslice = arguments.perslice
perlevel = arguments.perlevel
# check if path_label is a file (e.g., single binary mask) instead of a folder (e.g., SCT atlas structure which
# contains info_label.txt file)
if os.path.isfile(path_label):
# Label is a single file
indiv_labels_ids = [0]
indiv_labels_files = [path_label]
combined_labels_ids = []
label_struc = {0: LabelStruc(id=0,
name=extract_fname(path_label)[1],
filename=path_label)}
# set path_label to empty string, because indiv_labels_files will replace it from now on
path_label = ''
elif os.path.isdir(path_label):
# Labels is an SCT atlas folder structure
# Parse labels according to the file info_label.txt
# Note: the "combined_labels_*" is a list of single labels that are defined in the section defined by the keyword
# "# Keyword=CombinedLabels" in info_label.txt.
# TODO: redirect to appropriate Sphinx documentation
# TODO: output Class instead of multiple variables.
# Example 1:
# label_struc[2].id = (2)
# label_struc[2].name = "left fasciculus cuneatus"
# label_struc[2].filename = "PAM50_atlas_02.nii.gz"
# Example 2:
# label_struc[51].id = (1, 2, 3, ..., 29)
# label_struc[51].name = "White Matter"
# label_struc[51].filename = "" # no name because it is combined
indiv_labels_ids, indiv_labels_names, indiv_labels_files, \
combined_labels_ids, combined_labels_names, combined_labels_id_groups, map_clusters \
= read_label_file(path_label, param_default.file_info_label)
label_struc = {}
# fill IDs for indiv labels
for i_label in range(len(indiv_labels_ids)):
label_struc[indiv_labels_ids[i_label]] = LabelStruc(id=indiv_labels_ids[i_label],
name=indiv_labels_names[i_label],
filename=indiv_labels_files[i_label],
map_cluster=[indiv_labels_ids[i_label] in map_cluster for
map_cluster in map_clusters].index(True))
# fill IDs for combined labels
# TODO: problem for defining map_cluster: if labels overlap two regions, e.g. WM and GM (e.g. id=50),
# map_cluster will take value 0, which is wrong.
for i_label in range(len(combined_labels_ids)):
label_struc[combined_labels_ids[i_label]] = LabelStruc(id=combined_labels_id_groups[i_label],
name=combined_labels_names[i_label],
map_cluster=[indiv_labels_ids[i_label] in map_cluster for
map_cluster in map_clusters].index(True))
else:
raise RuntimeError(path_label + ' does not exist')
# check syntax of labels asked by user
labels_id_user = check_labels(indiv_labels_ids + combined_labels_ids, parse_num_list(labels_user))
nb_labels = len(indiv_labels_files)
# Load data and systematically reorient to RPI because we need the 3rd dimension to be z
printv('\nLoad metric image...', verbose)
input_im = Image(fname_data).change_orientation("RPI")
data = Metric(data=input_im.data, label='')
# Load labels
labels_tmp = np.empty([nb_labels], dtype=object)
for i_label in range(nb_labels):
im_label = Image(os.path.join(path_label, indiv_labels_files[i_label])).change_orientation("RPI")
labels_tmp[i_label] = np.expand_dims(im_label.data, 3) # TODO: generalize to 2D input label
labels = np.concatenate(labels_tmp[:], 3) # labels: (x,y,z,label)
# Load vertebral levels
if not levels:
fname_vertebral_labeling = None
# Get dimensions of data and labels
nx, ny, nz = data.data.shape
nx_atlas, ny_atlas, nz_atlas, nt_atlas = labels.shape
# Check dimensions consistency between atlas and data
if (nx, ny, nz) != (nx_atlas, ny_atlas, nz_atlas):
printv('\nERROR: Metric data and labels DO NOT HAVE SAME DIMENSIONS.', 1, type='error')
# Combine individual labels for estimation
if combine_labels:
# Add entry with internal ID value (99) which corresponds to combined labels
label_struc[99] = LabelStruc(id=labels_id_user, name=','.join([str(i) for i in labels_id_user]),
map_cluster=None)
labels_id_user = [99]
for id_label in labels_id_user:
printv('Estimation for label: ' + label_struc[id_label].name, verbose)
agg_metric = extract_metric(data, labels=labels, slices=slices, levels=levels, perslice=perslice,
perlevel=perlevel, vert_level=fname_vertebral_labeling, method=method,
label_struc=label_struc, id_label=id_label, indiv_labels_ids=indiv_labels_ids)
save_as_csv(agg_metric, fname_output, fname_in=fname_data, append=append_csv)
append_csv = True # when looping across labels, need to append results in the same file
display_open(fname_output)
|
def main(argv=None):
# Ensure that the "-list-labels" argument is always parsed last. That way, if `-f` is passed, then `-list-labels`
# will see the new location and look there. (https://github.com/spinalcordtoolbox/spinalcordtoolbox/issues/3634)
if argv is not None and "-list-labels" in argv:
argv.append(argv.pop(argv.index("-list-labels")))
parser = get_parser()
arguments = parser.parse_args(argv)
verbose = arguments.v
set_loglevel(verbose=verbose)
param_default = Param()
fname_data = get_absolute_path(arguments.i)
path_label = arguments.f
method = arguments.method
fname_output = arguments.o
append_csv = arguments.append
combine_labels = arguments.combine
labels_user = arguments.l
slices = parse_num_list(arguments.z)
levels = parse_num_list(arguments.vert)
fname_vertebral_labeling = arguments.vertfile
perslice = arguments.perslice
perlevel = arguments.perlevel
# check if path_label is a file (e.g., single binary mask) instead of a folder (e.g., SCT atlas structure which
# contains info_label.txt file)
if os.path.isfile(path_label):
# Label is a single file
indiv_labels_ids = [0]
indiv_labels_files = [path_label]
combined_labels_ids = []
label_struc = {0: LabelStruc(id=0,
name=extract_fname(path_label)[1],
filename=path_label)}
# set path_label to empty string, because indiv_labels_files will replace it from now on
path_label = ''
elif os.path.isdir(path_label):
# Labels is an SCT atlas folder structure
# Parse labels according to the file info_label.txt
# Note: the "combined_labels_*" is a list of single labels that are defined in the section defined by the keyword
# "# Keyword=CombinedLabels" in info_label.txt.
# TODO: redirect to appropriate Sphinx documentation
# TODO: output Class instead of multiple variables.
# Example 1:
# label_struc[2].id = (2)
# label_struc[2].name = "left fasciculus cuneatus"
# label_struc[2].filename = "PAM50_atlas_02.nii.gz"
# Example 2:
# label_struc[51].id = (1, 2, 3, ..., 29)
# label_struc[51].name = "White Matter"
# label_struc[51].filename = "" # no name because it is combined
indiv_labels_ids, indiv_labels_names, indiv_labels_files, \
combined_labels_ids, combined_labels_names, combined_labels_id_groups, map_clusters \
= read_label_file(path_label, param_default.file_info_label)
label_struc = {}
# fill IDs for indiv labels
for i_label in range(len(indiv_labels_ids)):
label_struc[indiv_labels_ids[i_label]] = LabelStruc(id=indiv_labels_ids[i_label],
name=indiv_labels_names[i_label],
filename=indiv_labels_files[i_label],
map_cluster=[indiv_labels_ids[i_label] in map_cluster for
map_cluster in map_clusters].index(True))
# fill IDs for combined labels
# TODO: problem for defining map_cluster: if labels overlap two regions, e.g. WM and GM (e.g. id=50),
# map_cluster will take value 0, which is wrong.
for i_label in range(len(combined_labels_ids)):
label_struc[combined_labels_ids[i_label]] = LabelStruc(id=combined_labels_id_groups[i_label],
name=combined_labels_names[i_label],
map_cluster=[indiv_labels_ids[i_label] in map_cluster for
map_cluster in map_clusters].index(True))
else:
raise RuntimeError(path_label + ' does not exist')
# check syntax of labels asked by user
labels_id_user = check_labels(indiv_labels_ids + combined_labels_ids, parse_num_list(labels_user))
nb_labels = len(indiv_labels_files)
# Load data and systematically reorient to RPI because we need the 3rd dimension to be z
printv('\nLoad metric image...', verbose)
input_im = Image(fname_data).change_orientation("RPI")
data = Metric(data=input_im.data, label='')
# Load labels
labels_tmp = np.empty([nb_labels], dtype=object)
for i_label in range(nb_labels):
im_label = Image(os.path.join(path_label, indiv_labels_files[i_label])).change_orientation("RPI")
labels_tmp[i_label] = np.expand_dims(im_label.data, 3) # TODO: generalize to 2D input label
labels = np.concatenate(labels_tmp[:], 3) # labels: (x,y,z,label)
# Load vertebral levels
if not levels:
fname_vertebral_labeling = None
# Get dimensions of data and labels
nx, ny, nz = data.data.shape
nx_atlas, ny_atlas, nz_atlas, nt_atlas = labels.shape
# Check dimensions consistency between atlas and data
if (nx, ny, nz) != (nx_atlas, ny_atlas, nz_atlas):
printv('\nERROR: Metric data and labels DO NOT HAVE SAME DIMENSIONS.', 1, type='error')
# Combine individual labels for estimation
if combine_labels:
# Add entry with internal ID value (99) which corresponds to combined labels
label_struc[99] = LabelStruc(id=labels_id_user, name=','.join([str(i) for i in labels_id_user]),
map_cluster=None)
labels_id_user = [99]
for id_label in labels_id_user:
printv('Estimation for label: ' + label_struc[id_label].name, verbose)
agg_metric = extract_metric(data, labels=labels, slices=slices, levels=levels, perslice=perslice,
perlevel=perlevel, vert_level=fname_vertebral_labeling, method=method,
label_struc=label_struc, id_label=id_label, indiv_labels_ids=indiv_labels_ids)
save_as_csv(agg_metric, fname_output, fname_in=fname_data, append=append_csv)
append_csv = True # when looping across labels, need to append results in the same file
display_open(fname_output)
|
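The only change in this pair is the argv is not None guard. Without it, calling main() with the default argv=None (letting argparse fall back to sys.argv) raises TypeError on the "in argv" membership test before the parser is even built. A minimal standalone sketch of the guarded reordering, independent of the SCT codebase:

def reorder_list_labels(argv=None):
    # move "-list-labels" to the end so it is parsed after a possible "-f"
    if argv is not None and "-list-labels" in argv:
        argv.append(argv.pop(argv.index("-list-labels")))
    return argv

assert reorder_list_labels(None) is None   # the unguarded version raises TypeError here
assert reorder_list_labels(["-list-labels", "-f", "atlas"]) == ["-f", "atlas", "-list-labels"]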
17,455 |
def unify_chunks(*objects: "T_DSorDA") -> Tuple["T_DSorDA", ...]:
"""
Given any number of Dataset and/or DataArray objects, returns
new objects with unified chunk size along all chunked dimensions.
Returns
-------
unified (DataArray or Dataset) – Tuple of objects with the same type as
*objects with consistent chunk sizes for all dask-array variables
See Also
--------
dask.array.core.unify_chunks
"""
from .dataarray import DataArray
# Convert chunked dataarrays to datasets
datasets = []
are_chunked = []
for i, obj in enumerate(objects):
ds = obj._to_temp_dataset() if isinstance(obj, DataArray) else obj.copy()
datasets.append(ds)
try:
are_chunked.append(True if obj.chunks else False)
except ValueError: # "inconsistent chunks"
are_chunked.append(True)
# Return input objects if no object is chunked
if not any(are_chunked):
return objects
# Unify chunks using dask.array.core.unify_chunks
import dask.array
dask_unify_args = []
for ds, is_chunked in zip(datasets, are_chunked):
if not is_chunked:
continue
dims_pos_map = {dim: index for index, dim in enumerate(ds.dims)}
for variable in ds.variables.values():
if isinstance(variable.data, dask.array.Array):
dims_tuple = [dims_pos_map[dim] for dim in variable.dims]
dask_unify_args.append(variable.data)
dask_unify_args.append(dims_tuple)
_, rechunked_arrays = dask.array.core.unify_chunks(*dask_unify_args)
# Substitute rechunked variables
unified = []
rechunked_arrays = list(rechunked_arrays)
for obj, ds, is_chunked in zip(objects, datasets, are_chunked):
if not is_chunked:
unified.append(obj)
else:
for name, variable in ds.variables.items():
if isinstance(variable.data, dask.array.Array):
ds.variables[name]._data = rechunked_arrays.pop(0)
unified.append(
obj._from_temp_dataset(ds) if isinstance(obj, DataArray) else ds
)
return tuple(unified)
|
def unify_chunks(*objects: "T_DSorDA") -> Tuple["T_DSorDA", ...]:
"""
Given any number of Dataset and/or DataArray objects, returns
new objects with unified chunk size along all chunked dimensions.
Returns
-------
unified (DataArray or Dataset) – Tuple of objects with the same type as
*objects with consistent chunk sizes for all dask-array variables
See Also
--------
dask.array.core.unify_chunks
"""
from .dataarray import DataArray
# Convert chunked dataarrays to datasets
datasets = []
are_chunked = []
for i, obj in enumerate(objects):
ds = obj._to_temp_dataset() if isinstance(obj, DataArray) else obj.copy()
datasets.append(ds)
try:
are_chunked.append(True if obj.chunks else False)
except ValueError: # "inconsistent chunks"
are_chunked.append(True)
# Return input objects if no object is chunked
if not any(are_chunked):
return objects
# Unify chunks using dask.array.core.unify_chunks
import dask.array as da
dask_unify_args = []
for ds, is_chunked in zip(datasets, are_chunked):
if not is_chunked:
continue
dims_pos_map = {dim: index for index, dim in enumerate(ds.dims)}
for variable in ds.variables.values():
if isinstance(variable.data, dask.array.Array):
dims_tuple = [dims_pos_map[dim] for dim in variable.dims]
dask_unify_args.append(variable.data)
dask_unify_args.append(dims_tuple)
_, rechunked_arrays = dask.array.core.unify_chunks(*dask_unify_args)
# Substitute rechunked variables
unified = []
rechunked_arrays = list(rechunked_arrays)
for obj, ds, is_chunked in zip(objects, datasets, are_chunked):
if not is_chunked:
unified.append(obj)
else:
for name, variable in ds.variables.items():
if isinstance(variable.data, dask.array.Array):
ds.variables[name]._data = rechunked_arrays.pop(0)
unified.append(
obj._from_temp_dataset(ds) if isinstance(obj, DataArray) else ds
)
return tuple(unified)
|
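The modified version above introduces import dask.array as da but keeps the plain dask.array.* references, which only resolve if the dask name is bound elsewhere in the module; the aliased import by itself binds only da. A minimal sketch of the distinction, assuming dask is installed:

import dask.array as da

x = da.ones(6, chunks=3)
print(isinstance(x, da.Array))            # True: the alias is bound
# print(isinstance(x, dask.array.Array))  # NameError unless an unaliased 'import dask.array' also ran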
31,357 |
def list_persons_command(client: Client, args: Dict[str, Any]) -> CommandResults:
"""Get persons list from TOPdesk"""
persons = client.get_list_with_query(list_type="persons",
start=args.get('start', None),
page_size=args.get('page_size', None),
query=args.get('query', None))
if len(persons) == 0:
return CommandResults(readable_output='No persons found')
headers = ['id', 'name', 'telephone', 'job title', 'department', 'city',
'branch name', 'room']
readable_persons = []
for person in persons:
readable_person = {
'id': person.get('id', None),
'name': person.get('dynamicName', None),
'telephone': person.get('phoneNumber', None),
'job title': person.get('jobTitle', None),
'department': person.get('department', None),
'city': person.get('city', None),
'branch name': replace_none(person.get('branch', {}), {}).get('name', None),
'room': None
}
if person.get('location', None):
readable_person['room'] = person.get('location', None).get('room', None)
readable_persons.append(readable_person)
readable_output = tableToMarkdown(f'{INTEGRATION_NAME} persons',
readable_persons,
headers=headers,
removeNull=True)
return CommandResults(
readable_output=readable_output,
outputs_prefix=f'{INTEGRATION_NAME}.person',
outputs_key_field='id',
outputs=persons
)
|
def list_persons_command(client: Client, args: Dict[str, Any]) -> CommandResults:
"""Get persons list from TOPdesk"""
persons = client.get_list_with_query(list_type="persons",
start=args.get('start', None),
page_size=args.get('page_size', None),
query=args.get('query', None))
if len(persons) == 0:
return CommandResults(readable_output='No persons found')
headers = ['id', 'name', 'telephone', 'job title', 'department', 'city',
'branch name', 'room']
readable_persons = []
for person in persons:
readable_person = {
'id': person.get('id', None),
'name': person.get('dynamicName', None),
'telephone': person.get('phoneNumber', None),
'job title': person.get('jobTitle', None),
'department': person.get('department', None),
'city': person.get('city', None),
'branch name': replace_none(person.get('branch', {}), {}).get('name', None),
'room': None
}
if person.get('location', None):
readable_person['room'] = person.get('location', None).get('room', None)
readable_persons.append(readable_person)
readable_output = tableToMarkdown(f'{INTEGRATION_NAME} persons',
readable_persons,
headers=headers,
removeNull=True)
return CommandResults(
readable_output=readable_output,
outputs_prefix=f'{INTEGRATION_NAME}.Person',
outputs_key_field='id',
outputs=persons
)
|
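Apart from the capitalised outputs_prefix, both versions rely on replace_none to guard the nested branch lookup; the guard matters because person.get('branch', {}) still returns None when the API sends an explicit null. A minimal sketch with an assumed implementation of that helper:

def replace_none(value, default):
    # assumed behaviour of the helper referenced above
    return default if value is None else value

person = {'branch': None}                # explicit null from the API
assert person.get('branch', {}) is None  # the .get default does not apply here
assert replace_none(person.get('branch', {}), {}).get('name') is None   # safe chained lookup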
35,049 |
def test_command(args):
user_box_dir = os.path.join(THIS_DIR, args.platform)
base_box_dir = os.path.join(THIS_DIR, args.platform, "base-box")
test_config_file = os.path.join(base_box_dir, "test-config.json")
with open(test_config_file) as f:
test_config = json.load(f)
# select microTVM test platform
microtvm_test_platform = test_config[args.microtvm_platform]
for key, expected_type in REQUIRED_TEST_CONFIG_KEYS.items():
assert key in microtvm_test_platform and isinstance(
microtvm_test_platform[key], expected_type
), f"Expected key {key} of type {expected_type} in {test_config_file}: {test_config!r}"
microtvm_test_platform["vid_hex"] = microtvm_test_platform["vid_hex"].lower()
microtvm_test_platform["pid_hex"] = microtvm_test_platform["pid_hex"].lower()
microtvm_test_platform["microtvm_platform"] = args.microtvm_platform
providers = args.provider
release_test_dir = os.path.join(THIS_DIR, "release-test")
if args.skip_build or args.skip_destroy:
assert (
len(providers) == 1
), "--skip-build and/or --skip_destroy was given, but >1 provider specified"
test_failed = False
for provider_name in providers:
try:
if not args.skip_build:
do_build_release_test_vm(
release_test_dir, user_box_dir, base_box_dir, provider_name
)
do_run_release_test(
release_test_dir, provider_name, microtvm_test_platform, args.test_device_serial
)
except subprocess.CalledProcessError:
test_failed = True
sys.exit(
f"\n\nERROR: Provider '{provider_name}' failed the release test. "
"You can re-run it to reproduce the issue without building everything "
"again by passing the --skip-build and specifying only the provider that failed. "
"The VM is still running in case you want to connect it via SSH to "
"investigate further the issue, thus it's necessary to destroy it manually "
"to release the resources back to the host, like a USB device attached to the VM."
)
finally:
# if we reached here, do_run_release_test() succeeded, hence we can
# destroy the VM and release the resources back to the host if the user hasn't
# requested not to destroy it.
if not (args.skip_destroy or test_failed):
subprocess.check_call(["vagrant", "destroy", "-f"], cwd=release_test_dir)
shutil.rmtree(release_test_dir)
print(f'\n\nThe release tests passed on all specified providers: {", ".join(providers)}.')
|
def test_command(args):
user_box_dir = os.path.join(THIS_DIR, args.platform)
base_box_dir = os.path.join(THIS_DIR, args.platform, "base-box")
test_config_file = os.path.join(base_box_dir, "test-config.json")
with open(test_config_file) as f:
test_config = json.load(f)
# select microTVM test platform
microtvm_test_platform = test_config[args.microtvm_platform]
for key, expected_type in REQUIRED_TEST_CONFIG_KEYS.items():
assert key in microtvm_test_platform and isinstance(
microtvm_test_platform[key], expected_type
), f"Expected key {key} of type {expected_type} in {test_config_file}: {test_config!r}"
microtvm_test_platform["vid_hex"] = microtvm_test_platform["vid_hex"].lower()
microtvm_test_platform["pid_hex"] = microtvm_test_platform["pid_hex"].lower()
microtvm_test_platform["microtvm_platform"] = args.microtvm_platform
providers = args.provider
release_test_dir = os.path.join(THIS_DIR, "release-test")
if args.skip_build or args.skip_destroy:
assert (
len(providers) == 1
), "--skip-build and/or --skip-destroy was given, but >1 provider specified"
test_failed = False
for provider_name in providers:
try:
if not args.skip_build:
do_build_release_test_vm(
release_test_dir, user_box_dir, base_box_dir, provider_name
)
do_run_release_test(
release_test_dir, provider_name, microtvm_test_platform, args.test_device_serial
)
except subprocess.CalledProcessError:
test_failed = True
sys.exit(
f"\n\nERROR: Provider '{provider_name}' failed the release test. "
"You can re-run it to reproduce the issue without building everything "
"again by passing the --skip-build and specifying only the provider that failed. "
"The VM is still running in case you want to connect it via SSH to "
"investigate further the issue, thus it's necessary to destroy it manually "
"to release the resources back to the host, like a USB device attached to the VM."
)
finally:
# if we reached here, do_run_release_test() succeeded, hence we can
# destroy the VM and release the resources back to the host if the user hasn't
# requested not to destroy it.
if not (args.skip_destroy or test_failed):
subprocess.check_call(["vagrant", "destroy", "-f"], cwd=release_test_dir)
shutil.rmtree(release_test_dir)
print(f'\n\nThe release tests passed on all specified providers: {", ".join(providers)}.')
|
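A side note on the try/finally structure shared by both versions: sys.exit() raises SystemExit, so the finally block still runs for a failed provider; it is the test_failed flag that actually keeps the failed VM alive for inspection. A minimal standalone illustration:

import sys

def demo():
    try:
        sys.exit("simulated failure")   # raises SystemExit
    finally:
        print("finally still runs before the interpreter exits")

# demo()   # uncomment to observe the finally output just before exit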
43,333 |
def test_gcn_lstm_activations():
fx, fy, a = get_timeseries_graph_data()
gcn_lstm_model = GraphConvolutionLSTM(
seq_len=fx.shape[-2], adj=a, gc_layers=5, lstm_layer_size=[8, 16, 32, 64],
)
assert gcn_lstm_model.gc_activations == ["relu", "relu", "relu", "relu", "relu"]
assert gcn_lstm_model.lstm_activations == ["tanh", "tanh", "tanh", "tanh"]
with pytest.raises(ValueError):
# Fewer gc_activations than gc_layers
gcn_lstm_model = GraphConvolutionLSTM(
seq_len=fx.shape[-2],
adj=a,
gc_layers=2,
gc_activations=["relu"],
lstm_layer_size=[8, 16, 32, 64],
)
with pytest.raises(ValueError):
# More lstm_activations than LSTM layers
gcn_lstm_model = GraphConvolutionLSTM(
seq_len=fx.shape[-2],
adj=a,
gc_layers=1,
gc_activations=["relu"],
lstm_layer_size=[32],
lstm_activations=["tanh", "tanh"],
)
|
def test_gcn_lstm_activations():
fx, fy, a = get_timeseries_graph_data()
gcn_lstm_model = GraphConvolutionLSTM(
seq_len=fx.shape[-2], adj=a, gc_layers=5, lstm_layer_size=[8, 16, 32, 64],
)
assert gcn_lstm_model.gc_activations == ["relu", "relu", "relu", "relu", "relu"]
assert gcn_lstm_model.lstm_activations == ["tanh", "tanh", "tanh", "tanh"]
with pytest.raises(ValueError, match="Invalid number of activations.* graph convolution layer"):
# Fewer gc_activations than gc_layers
gcn_lstm_model = GraphConvolutionLSTM(
seq_len=fx.shape[-2],
adj=a,
gc_layers=2,
gc_activations=["relu"],
lstm_layer_size=[8, 16, 32, 64],
)
with pytest.raises(ValueError):
# More lstm_activations than LSTM layers
gcn_lstm_model = GraphConvolutionLSTM(
seq_len=fx.shape[-2],
adj=a,
gc_layers=1,
gc_activations=["relu"],
lstm_layer_size=[32],
lstm_activations=["tanh", "tanh"],
)
|
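The modified test above adds a match argument to pytest.raises; the pattern is applied with re.search against the string form of the raised exception, so a partial, regex-style match is enough. A minimal self-contained example:

import pytest

def test_match_is_a_regex_search():
    with pytest.raises(ValueError, match=r"Invalid number of activations"):
        raise ValueError("Invalid number of activations; expected 5 for the graph convolution layers")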
32,409 |
def generate_dbotscore(response: Dict) -> List:
"""Creates CommandResult object based on the contents of 'response' argument
and provides DBotScore objects.
Parameters
----------
response : dict
Object returned by ANYRUN API call in 'get_report' function.
Returns
-------
List
A list of CommandResults objects.
"""
data = response.get('data', {})
analysis = data.get('analysis', {})
main_object = analysis.get('content', {}).get('mainObject', {})
submission_type = main_object.get('type')
submission_type = 'hash' if submission_type in {'file', 'download'} else submission_type
threat_text = analysis.get('scores', {}).get('verdict', {}).get('threatLevelText', '').casefold()
reputation_map = {
"shared": Common.DBotScore.NONE,
"unknown": Common.DBotScore.NONE,
"whitelisted": Common.DBotScore.GOOD,
"malicious": Common.DBotScore.BAD,
"suspicious": Common.DBotScore.SUSPICIOUS
}
returned_data = []
main_entity = None
main_entity_type = None
# Add the hash or URL first
if submission_type == 'hash':
hashes = main_object.get('hashes', {})
info = main_object.get('info', {})
file_type = info.get('file')
exif = info.get('exif', {})
main_entity = hashes.get('sha256') or hashes.get('sha1') or hashes.get('md5')
main_entity_type = FeedIndicatorType.File
dbot_score = Common.DBotScore(
indicator=hashes.get('sha256') or hashes.get('sha1') or hashes.get('md5'),
indicator_type=DBotScoreType.FILE,
integration_name='ANYRUN',
score=THREAT_TEXT_TO_DBOTSCORE.get(threat_text) or Common.DBotScore.NONE
)
returned_data.append(CommandResults(
indicator=Common.File(
dbot_score=dbot_score,
md5=hashes.get('md5'),
sha1=hashes.get('sha1'),
sha256=hashes.get('sha256'),
file_type=file_type,
associated_file_names=exif.get('OriginalFileName')
)
))
else:
main_entity = main_object.get('url')
main_entity_type = FeedIndicatorType.URL
url_outputs = {
'Data': main_object.get('url')
}
dbot_score = Common.DBotScore(
indicator=main_object.get('url'),
indicator_type=DBotScoreType.URL,
integration_name='ANYRUN',
score=THREAT_TEXT_TO_DBOTSCORE.get(threat_text) or Common.DBotScore.NONE
)
if dbot_score.score >= 2:
url_outputs['Malicious'] = {
'Vendor': 'ANYRUN',
'Description': threat_text
}
returned_data.append(CommandResults(
outputs_prefix='URL',
outputs_key_field=['Data'],
outputs=url_outputs,
indicator=Common.URL(
url=main_object.get('url'),
dbot_score=dbot_score,
)
))
# Check if network information is available in the report
if 'network' in data:
network_data = data.get('network')
# Then add all the network-related indicators - 'connections'
if 'connections' in network_data:
connections = network_data.get('connections')
for current_connection in connections:
reputation = current_connection.get('Reputation')
if reputation in reputation_map.keys():
current_dbot_score = Common.DBotScore(
indicator=current_connection.get('IP'),
indicator_type=DBotScoreType.IP,
integration_name='ANYRUN',
score=reputation_map[reputation]
)
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.COMMUNICATED_WITH,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=current_connection.get('IP'),
entity_b_type=FeedIndicatorType.IP,
brand="ANYRUN"
)]
ip_indicator = Common.IP(
ip=current_connection.get('IP'),
asn=current_connection.get('ASN'),
port=current_connection.get('Port'),
geo_country=current_connection.get('Country'),
dbot_score=current_dbot_score,
relationships=relationships
)
if current_connection.get('IP') not in [
x.indicator.ip for x in returned_data if isinstance(x.indicator, Common.IP)
]:
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{current_connection.get('IP')}",
[{
"Description": f"This IP was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=ip_indicator,
relationships=relationships
))
# Then add all the network-related indicators - 'dnsRequests'
if 'dnsRequests' in network_data:
for current_dnsRequests in network_data.get('dnsRequests'):
reputation = current_dnsRequests.get('Reputation')
if reputation in reputation_map.keys():
current_dbot_score = Common.DBotScore(
indicator=current_dnsRequests.get('Domain'),
indicator_type=DBotScoreType.DOMAIN,
integration_name='ANYRUN',
score=reputation_map[reputation]
)
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.COMMUNICATED_WITH,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=current_dnsRequests.get('Domain'),
entity_b_type=FeedIndicatorType.Domain,
brand="ANYRUN"
)]
if "IP" in current_dnsRequests:
for ip in current_dnsRequests.get('IP', []):
relationships.append(
EntityRelationship(
name=EntityRelationship.Relationships.RESOLVES_TO,
entity_a=current_dnsRequests.get('Domain'),
entity_a_type=FeedIndicatorType.Domain,
entity_b=ip,
entity_b_type=FeedIndicatorType.IP
)
)
domain_ip_dbot_score = Common.DBotScore(
indicator=ip,
indicator_type=DBotScoreType.IP,
integration_name="ANYRUN",
score=Common.DBotScore.NONE
)
domain_ip_indicator = Common.IP(
ip=ip,
dbot_score=domain_ip_dbot_score
)
returned_data.append(CommandResults(
indicator=domain_ip_indicator,
readable_output=tableToMarkdown(
f"{ip}",
[{
"Description": f"This IP was resovled from {current_dnsRequests.get('Domain')}"
}]
)
))
domain_indicator = Common.Domain(
domain=current_dnsRequests.get('Domain'),
dbot_score=current_dbot_score,
relationships=relationships
)
if current_dnsRequests.get('Domain') not in [
x.indicator.domain for x in returned_data if isinstance(x.indicator, Common.Domain)
]:
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{current_dnsRequests.get('Domain')}",
[{
"Description": f"This domain was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=domain_indicator,
relationships=relationships
))
# Then add all the network-related indicators - 'httpRequests'
if 'httpRequests' in network_data:
for current_httpRequests in network_data.get('httpRequests'):
reputation = current_httpRequests['Reputation']
if reputation in reputation_map.keys():
current_dbot_score = Common.DBotScore(
indicator=current_httpRequests.get('URL'),
indicator_type=DBotScoreType.URL,
integration_name='ANYRUN',
score=reputation_map[reputation]
)
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.COMMUNICATED_WITH,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=current_httpRequests.get('URL'),
entity_b_type=FeedIndicatorType.URL,
brand="ANYRUN"
)]
url_indicator = Common.URL(
url=current_httpRequests.get('URL'),
geo_country=current_httpRequests.get('Country'),
port=current_httpRequests.get('Port'),
dbot_score=current_dbot_score,
relationships=relationships
)
if current_httpRequests.get('URL') not in [
x.indicator.url for x in returned_data if isinstance(x.indicator, Common.URL)
]:
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{current_httpRequests.get('URL')}",
[{
"Description": f"This URL was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=url_indicator,
relationships=relationships
))
if 'mitre' in data:
mitre_data = data.get('mitre')
for item in mitre_data:
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.RELATED_TO,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=item.get('name'),
entity_b_type='Attack Pattern'
)]
attack_indicator = Common.AttackPattern(
stix_id=None,
value=item.get('name'),
mitre_id=item.get('id')
)
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{item.get('name')}",
[{
"Description": f"This Attack Pattern was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=attack_indicator,
relationships=relationships
))
return returned_data
|
def generate_dbotscore(response: Dict) -> List:
"""Creates CommandResult object based on the contents of 'response' argument
and provides DBotScore objects.
Parameters
----------
response : dict
Object returned by ANYRUN API call in 'get_report' function.
Returns
-------
List
A list of CommandResults objects.
"""
data = response.get('data', {})
analysis = data.get('analysis', {})
main_object = analysis.get('content', {}).get('mainObject', {})
submission_type = main_object.get('type')
submission_type = 'hash' if submission_type in {'file', 'download'} else submission_type
threat_text = analysis.get('scores', {}).get('verdict', {}).get('threatLevelText', '').casefold()
reputation_map = {
"shared": Common.DBotScore.NONE,
"unknown": Common.DBotScore.NONE,
"whitelisted": Common.DBotScore.GOOD,
"malicious": Common.DBotScore.BAD,
"suspicious": Common.DBotScore.SUSPICIOUS
}
returned_data = []
main_entity = None
main_entity_type = None
# Add the hash or URL first
if submission_type == 'hash':
hashes = main_object.get('hashes', {})
info = main_object.get('info', {})
file_type = info.get('file')
exif = info.get('exif', {})
main_entity = hashes.get('sha256') or hashes.get('sha1') or hashes.get('md5')
main_entity_type = FeedIndicatorType.File
dbot_score = Common.DBotScore(
indicator=hashes.get('sha256') or hashes.get('sha1') or hashes.get('md5'),
indicator_type=DBotScoreType.FILE,
integration_name='ANYRUN',
score=THREAT_TEXT_TO_DBOTSCORE.get(threat_text) or Common.DBotScore.NONE
)
returned_data.append(CommandResults(
indicator=Common.File(
dbot_score=dbot_score,
md5=hashes.get('md5'),
sha1=hashes.get('sha1'),
sha256=hashes.get('sha256'),
file_type=file_type,
associated_file_names=exif.get('OriginalFileName')
)
))
else:
main_entity = main_object.get('url')
main_entity_type = FeedIndicatorType.URL
url_outputs = {
'Data': main_object.get('url')
}
dbot_score = Common.DBotScore(
indicator=main_object.get('url'),
indicator_type=DBotScoreType.URL,
integration_name='ANYRUN',
score=THREAT_TEXT_TO_DBOTSCORE.get(threat_text) or Common.DBotScore.NONE
)
if dbot_score.score >= 2:
url_outputs['Malicious'] = {
'Vendor': 'ANYRUN',
'Description': threat_text
}
returned_data.append(CommandResults(
outputs_prefix='URL',
outputs_key_field=['Data'],
outputs=url_outputs,
indicator=Common.URL(
url=main_entity,
dbot_score=dbot_score,
)
))
# Check if network information is available in the report
if 'network' in data:
network_data = data.get('network')
# Then add all the network-related indicators - 'connections'
if 'connections' in network_data:
connections = network_data.get('connections')
for current_connection in connections:
reputation = current_connection.get('Reputation')
if reputation in reputation_map.keys():
current_dbot_score = Common.DBotScore(
indicator=current_connection.get('IP'),
indicator_type=DBotScoreType.IP,
integration_name='ANYRUN',
score=reputation_map[reputation]
)
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.COMMUNICATED_WITH,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=current_connection.get('IP'),
entity_b_type=FeedIndicatorType.IP,
brand="ANYRUN"
)]
ip_indicator = Common.IP(
ip=current_connection.get('IP'),
asn=current_connection.get('ASN'),
port=current_connection.get('Port'),
geo_country=current_connection.get('Country'),
dbot_score=current_dbot_score,
relationships=relationships
)
if current_connection.get('IP') not in [
x.indicator.ip for x in returned_data if isinstance(x.indicator, Common.IP)
]:
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{current_connection.get('IP')}",
[{
"Description": f"This IP was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=ip_indicator,
relationships=relationships
))
# Then add all the network-related indicators - 'dnsRequests'
if 'dnsRequests' in network_data:
for current_dnsRequests in network_data.get('dnsRequests'):
reputation = current_dnsRequests.get('Reputation')
if reputation in reputation_map.keys():
current_dbot_score = Common.DBotScore(
indicator=current_dnsRequests.get('Domain'),
indicator_type=DBotScoreType.DOMAIN,
integration_name='ANYRUN',
score=reputation_map[reputation]
)
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.COMMUNICATED_WITH,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=current_dnsRequests.get('Domain'),
entity_b_type=FeedIndicatorType.Domain,
brand="ANYRUN"
)]
if "IP" in current_dnsRequests:
for ip in current_dnsRequests.get('IP', []):
relationships.append(
EntityRelationship(
name=EntityRelationship.Relationships.RESOLVES_TO,
entity_a=current_dnsRequests.get('Domain'),
entity_a_type=FeedIndicatorType.Domain,
entity_b=ip,
entity_b_type=FeedIndicatorType.IP
)
)
domain_ip_dbot_score = Common.DBotScore(
indicator=ip,
indicator_type=DBotScoreType.IP,
integration_name="ANYRUN",
score=Common.DBotScore.NONE
)
domain_ip_indicator = Common.IP(
ip=ip,
dbot_score=domain_ip_dbot_score
)
returned_data.append(CommandResults(
indicator=domain_ip_indicator,
readable_output=tableToMarkdown(
f"{ip}",
[{
"Description": f"This IP was resovled from {current_dnsRequests.get('Domain')}"
}]
)
))
domain_indicator = Common.Domain(
domain=current_dnsRequests.get('Domain'),
dbot_score=current_dbot_score,
relationships=relationships
)
if current_dnsRequests.get('Domain') not in [
x.indicator.domain for x in returned_data if isinstance(x.indicator, Common.Domain)
]:
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{current_dnsRequests.get('Domain')}",
[{
"Description": f"This domain was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=domain_indicator,
relationships=relationships
))
# Then add all the network-related indicators - 'httpRequests'
if 'httpRequests' in network_data:
for current_httpRequests in network_data.get('httpRequests'):
reputation = current_httpRequests['Reputation']
if reputation in reputation_map.keys():
current_dbot_score = Common.DBotScore(
indicator=current_httpRequests.get('URL'),
indicator_type=DBotScoreType.URL,
integration_name='ANYRUN',
score=reputation_map[reputation]
)
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.COMMUNICATED_WITH,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=current_httpRequests.get('URL'),
entity_b_type=FeedIndicatorType.URL,
brand="ANYRUN"
)]
url_indicator = Common.URL(
url=current_httpRequests.get('URL'),
geo_country=current_httpRequests.get('Country'),
port=current_httpRequests.get('Port'),
dbot_score=current_dbot_score,
relationships=relationships
)
if current_httpRequests.get('URL') not in [
x.indicator.url for x in returned_data if isinstance(x.indicator, Common.URL)
]:
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{current_httpRequests.get('URL')}",
[{
"Description": f"This URL was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=url_indicator,
relationships=relationships
))
if 'mitre' in data:
mitre_data = data.get('mitre')
for item in mitre_data:
relationships = [EntityRelationship(
name=EntityRelationship.Relationships.RELATED_TO,
entity_a=main_entity,
entity_a_type=main_entity_type,
entity_b=item.get('name'),
entity_b_type='Attack Pattern'
)]
attack_indicator = Common.AttackPattern(
stix_id=None,
value=item.get('name'),
mitre_id=item.get('id')
)
returned_data.append(CommandResults(
readable_output=tableToMarkdown(
f"{item.get('name')}",
[{
"Description": f"This Attack Pattern was observed after detonation of {main_entity} in ANYRUN"
}]
),
indicator=attack_indicator,
relationships=relationships
))
return returned_data
|
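Both copies of this routine handle the network indicators the same way: map the ANY.RUN reputation string onto a DBot score and only append an indicator that is not already present in returned_data. Below is a minimal, self-contained sketch of that score-and-dedup idea, with plain dicts standing in for the CommandResults/Common.IP objects; the integer scores are assumed to mirror the Common.DBotScore constants used above.
# Plain-Python sketch of the reputation mapping and IP dedup used above.
reputation_map = {"shared": 0, "unknown": 0, "whitelisted": 1, "suspicious": 2, "malicious": 3}
connections = [
    {"IP": "1.2.3.4", "Reputation": "malicious"},
    {"IP": "1.2.3.4", "Reputation": "malicious"},     # duplicate, skipped below
    {"IP": "8.8.8.8", "Reputation": "whitelisted"},
    {"IP": "5.6.7.8", "Reputation": "unclassified"},  # not in the map, ignored
]
results = []
seen_ips = []
for conn in connections:
    if conn["Reputation"] not in reputation_map:
        continue
    if conn["IP"] in seen_ips:
        continue
    seen_ips.append(conn["IP"])
    results.append({"indicator": conn["IP"], "score": reputation_map[conn["Reputation"]]})
print(results)  # two entries: 1.2.3.4 scored 3, 8.8.8.8 scored 1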
10,444 |
def diff_config(module, commands, config):
"""Diff the candidate commands against current config returning a list of
updates to be applied to remote edgeos device
:param module: ansible module for this type (edgeos)
:type module: ansible.module
:param commands: candidate commands passed through ansible
:type commands: list
:param config: [commands pulled from edgeos device]
:type config: list
:return: updates: changes to apply to remote device
:rtype: list
:return: unmanaged_config: config on device without matching candidate
commands passed to ansible
:rtype: list
:return: invalid: commands passed to ansible not starting with 'set' or
'delete' and therefore considered invalid
:rtype: list
"""
config = [to_native(check_command(module, c)) for c in config.splitlines()]
set_commands, delete_commands, invalid_commands = list(), list(), list()
updates, unmanaged_config = list(), list()
for line in commands:
line = to_native(check_command(module, line))
if line.startswith('delete '):
delete_commands.append(line)
elif line.startswith('set '):
set_commands.append(line)
else:
invalid_commands.append(line)
# Will always run the delete commands first to allow for resets
if delete_commands:
updates = delete_commands
# Removing all matching commands already in config
updates = updates + [line for line in set_commands if line not in config]
# Add back changes where a corresponding delete command exists
if delete_commands:
for line in set_commands:
search = re.sub('^set ', 'delete ', line)
for dline in delete_commands:
if search.startswith(dline):
updates.append(line)
# Unmanaged config (config without matching commands)
unmanaged_config = (list(set(config) - set(set_commands)))
matches = list()
# Remove if actually a change to config
for line in unmanaged_config:
search = line.rsplit(' ', 1)[0]
for update in updates:
if update.startswith(search):
matches.append(line)
break
unmanaged_config = [line for line in unmanaged_config if line not in matches]
return updates, unmanaged_config, invalid_commands
|
def diff_config(module, commands, config):
"""Diff the candidate commands against current config returning a list of
updates to be applied to remote edgeos device
:param module: ansible module for this type (edgeos)
:type module: ansible.module
:param commands: candidate commands passed through ansible
:type commands: list
:param config: [commands pulled from edgeos device]
:type config: list
:return: updates: changes to apply to remote device
:rtype: list
:return: unmanaged_config: config on device without matching candidate
commands passed to ansible
:rtype: list
:return: invalid: commands passed to ansible not starting with 'set' or
'delete' and therefore considered invalid
:rtype: list
"""
config = [to_native(check_command(module, c)) for c in config.splitlines()]
set_commands, delete_commands, invalid_commands = list(), list(), list()
updates, unmanaged_config = list(), list()
for line in commands:
line = to_native(check_command(module, line))
if line.startswith('delete '):
delete_commands.append(line)
elif line.startswith('set '):
set_commands.append(line)
else:
invalid_commands.append(line)
# Will always run the delete commands first to allow for resets
if delete_commands:
updates = delete_commands
# Removing all matching commands already in config
updates.extend(line for line in set_commands if line not in config)
# Add back changes where a corresponding delete command exists
if delete_commands:
for line in set_commands:
search = re.sub('^set ', 'delete ', line)
for dline in delete_commands:
if search.startswith(dline):
updates.append(line)
# Unmanaged config (config without matching commands)
unmanaged_config = (list(set(config) - set(set_commands)))
matches = list()
# Remove if actually a change to config
for line in unmanaged_config:
search = line.rsplit(' ', 1)[0]
for update in updates:
if update.startswith(search):
matches.append(line)
break
unmanaged_config = [line for line in unmanaged_config if line not in matches]
return updates, unmanaged_config, invalid_commands
|
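The core of diff_config is plain list bookkeeping, so its ordering rules (deletes first, then sets not already in the config, then sets that re-apply something a delete just removed) can be shown without the Ansible plumbing. A toy walk-through, assuming check_command/to_native are no-ops and using made-up EdgeOS-style commands:
import re

config = ["set system host-name router1", "set service ssh port 22"]
commands = ["delete service ssh", "set system host-name router2"]

set_cmds = [c for c in commands if c.startswith("set ")]
delete_cmds = [c for c in commands if c.startswith("delete ")]

updates = list(delete_cmds)                          # deletes always run first
updates += [c for c in set_cmds if c not in config]  # skip sets already present in config
for line in set_cmds:                                # re-add sets covered by a delete
    search = re.sub("^set ", "delete ", line)
    if any(search.startswith(d) for d in delete_cmds):
        updates.append(line)

print(updates)  # ['delete service ssh', 'set system host-name router2']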
55,165 |
def pattern_matching(circuit_dag, pattern_dag):
r"""Function that applies the pattern matching algorithm and returns the list of maximal matches.
Args:
circuit_dag (.CommutationDAG): A commutation DAG representing the circuit to be optimized.
pattern_dag(.CommutationDAG): A commutation DAG representing the pattern.
Returns:
list(Match): the list of maximal matches.
**Example**
First let's consider the following circuit
.. code-block:: python
def circuit():
qml.S(wires=0)
qml.PauliZ(wires=0)
qml.S(wires=1)
qml.CZ(wires=[0, 1])
qml.S(wires=1)
qml.S(wires=2)
qml.CZ(wires=[1, 2])
qml.S(wires=2)
return qml.expval(qml.PauliX(wires=0))
where we want to find all maximal matches of a pattern containing a sequence of two ``pennylane.S`` gates and
a ``pennylane.PauliZ`` gate:
.. code-block:: python
with qml.tape.QuantumTape() as pattern:
qml.S(wires=0)
qml.S(wires=0)
qml.PauliZ(wires=0)
>>> circuit_dag = qml.commutation_dag(circuit)()
>>> pattern_dag = qml.commutation_dag(pattern)()
>>> all_max_matches = qml.pattern_matching(circuit_dag, pattern_dag)
It is possible to access the matches by looping through the list. The first integers indices represent the gates
in the pattern and the second intergers the gates in the circuit (by order of appearance).
>>> for match_conf in all_max_matches:
... print(match_conf.match)
[[0, 0], [2, 1]]
[[0, 2], [1, 4]]
[[0, 4], [1, 2]]
[[0, 5], [1, 7]]
[[0, 7], [1, 5]]
**Reference:**
[1] Iten, R., Moyard, R., Metger, T., Sutter, D. and Woerner, S., 2022.
Exact and practical pattern matching for quantum circuit optimization.
`doi.org/10.1145/3498325 <https://dl.acm.org/doi/abs/10.1145/3498325>`_
"""
# Match list
match_list = []
# Loop through all possible initial matches
for node_c, node_p in itertools.product(circuit_dag.get_nodes(), pattern_dag.get_nodes()):
# Initial matches between two identical gates (No qubits comparison)
if _compare_operation_without_qubits(node_c[1], node_p[1]):
# Fix qubits from the first (target fixed and control restrained)
not_fixed_qubits_confs = _not_fixed_qubits(
circuit_dag.num_wires, node_c[1].wires, pattern_dag.num_wires - len(node_p[1].wires)
)
# Loop over all possible qubits configurations given the first match constraints
for not_fixed_qubits_conf in not_fixed_qubits_confs:
for not_fixed_qubits_conf_permuted in itertools.permutations(not_fixed_qubits_conf):
for first_match_qubits_conf in _first_match_qubits(
node_c[1], node_p[1], pattern_dag.num_wires
):
# Qubits mapping between circuit and pattern
qubits_conf = _merge_first_match_and_permutation(
first_match_qubits_conf, not_fixed_qubits_conf_permuted
)
# Update wires, target_wires, control_wires
wires, target_wires, control_wires = _update_qubits(
circuit_dag, qubits_conf
)
# Forward match part of the algorithm
forward = ForwardMatch(
circuit_dag,
pattern_dag,
node_c[0],
node_p[0],
wires,
target_wires,
control_wires,
)
forward.run_forward_match()
# Backward match part of the algorithm
backward = BackwardMatch(
circuit_dag,
pattern_dag,
qubits_conf,
forward.match,
forward.circuit_matched_with,
forward.circuit_blocked,
forward.pattern_matched_with,
node_c[0],
node_p[0],
wires,
control_wires,
target_wires,
)
backward.run_backward_match()
_add_match(match_list, backward.match_final)
match_list.sort(key=lambda x: len(x.match), reverse=True)
# Extract maximal matches and optimize the circuit for compatible maximal matches
if match_list:
maximal = MaximalMatches(match_list)
maximal.run_maximal_matches()
max_matches = maximal.max_match_list
return max_matches
return match_list
|
def pattern_matching(circuit_dag, pattern_dag):
r"""Function that applies the pattern matching algorithm and returns the list of maximal matches.
Args:
circuit_dag (.CommutationDAG): A commutation DAG representing the circuit to be optimized.
pattern_dag(.CommutationDAG): A commutation DAG representing the pattern.
Returns:
list(Match): the list of maximal matches.
**Example**
First let's consider the following circuit
.. code-block:: python
def circuit():
qml.S(wires=0)
qml.PauliZ(wires=0)
qml.S(wires=1)
qml.CZ(wires=[0, 1])
qml.S(wires=1)
qml.S(wires=2)
qml.CZ(wires=[1, 2])
qml.S(wires=2)
return qml.expval(qml.PauliX(wires=0))
where we want to find all maximal matches of a pattern containing a sequence of two ``pennylane.S`` gates and
a ``pennylane.PauliZ`` gate:
.. code-block:: python
with qml.tape.QuantumTape() as pattern:
qml.S(wires=0)
qml.S(wires=0)
qml.PauliZ(wires=0)
>>> circuit_dag = qml.commutation_dag(circuit)()
>>> pattern_dag = qml.commutation_dag(pattern)()
>>> all_max_matches = qml.pattern_matching(circuit_dag, pattern_dag)
The matches are accessible by looping through the list outputted by `qml.pattern_matching`. This output is a list of two lists containing indices. The first list indexes the gates
of the pattern and the second list provides indices for the gates in the circuit (by order of appearance).
>>> for match_conf in all_max_matches:
... print(match_conf.match)
[[0, 0], [2, 1]]
[[0, 2], [1, 4]]
[[0, 4], [1, 2]]
[[0, 5], [1, 7]]
[[0, 7], [1, 5]]
**Reference:**
[1] Iten, R., Moyard, R., Metger, T., Sutter, D. and Woerner, S., 2022.
Exact and practical pattern matching for quantum circuit optimization.
`doi.org/10.1145/3498325 <https://dl.acm.org/doi/abs/10.1145/3498325>`_
"""
# Match list
match_list = []
# Loop through all possible initial matches
for node_c, node_p in itertools.product(circuit_dag.get_nodes(), pattern_dag.get_nodes()):
# Initial matches between two identical gates (No qubits comparison)
if _compare_operation_without_qubits(node_c[1], node_p[1]):
# Fix qubits from the first (target fixed and control restrained)
not_fixed_qubits_confs = _not_fixed_qubits(
circuit_dag.num_wires, node_c[1].wires, pattern_dag.num_wires - len(node_p[1].wires)
)
# Loop over all possible qubits configurations given the first match constraints
for not_fixed_qubits_conf in not_fixed_qubits_confs:
for not_fixed_qubits_conf_permuted in itertools.permutations(not_fixed_qubits_conf):
for first_match_qubits_conf in _first_match_qubits(
node_c[1], node_p[1], pattern_dag.num_wires
):
# Qubits mapping between circuit and pattern
qubits_conf = _merge_first_match_and_permutation(
first_match_qubits_conf, not_fixed_qubits_conf_permuted
)
# Update wires, target_wires, control_wires
wires, target_wires, control_wires = _update_qubits(
circuit_dag, qubits_conf
)
# Forward match part of the algorithm
forward = ForwardMatch(
circuit_dag,
pattern_dag,
node_c[0],
node_p[0],
wires,
target_wires,
control_wires,
)
forward.run_forward_match()
# Backward match part of the algorithm
backward = BackwardMatch(
circuit_dag,
pattern_dag,
qubits_conf,
forward.match,
forward.circuit_matched_with,
forward.circuit_blocked,
forward.pattern_matched_with,
node_c[0],
node_p[0],
wires,
control_wires,
target_wires,
)
backward.run_backward_match()
_add_match(match_list, backward.match_final)
match_list.sort(key=lambda x: len(x.match), reverse=True)
# Extract maximal matches and optimize the circuit for compatible maximal matches
if match_list:
maximal = MaximalMatches(match_list)
maximal.run_maximal_matches()
max_matches = maximal.max_match_list
return max_matches
return match_list
|
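Each entry of match_conf.match in the example output is a [pattern_index, circuit_index] pair, so the circuit gates covered by every maximal match can be recovered with plain Python. The sketch below simply reuses the printed output as literal data and does not require PennyLane:
# Hypothetical post-processing of the matches printed in the docstring above.
all_max_matches = [
    [[0, 0], [2, 1]],
    [[0, 2], [1, 4]],
    [[0, 4], [1, 2]],
    [[0, 5], [1, 7]],
    [[0, 7], [1, 5]],
]
circuit_gates_per_match = [sorted(circ for _pat, circ in match) for match in all_max_matches]
print(circuit_gates_per_match)  # [[0, 1], [2, 4], [2, 4], [5, 7], [5, 7]]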
2,241 |
def test_ovr_partial_fit_exceptions():
ovr = OneVsRestClassifier(MultinomialNB())
X = np.abs(np.random.randn(14, 2))
y = [1, 1, 1, 1, 2, 3, 3, 0, 0, 2, 3, 1, 2, 3]
ovr.partial_fit(X[:7], y[:7], np.unique(y))
# If a new class that was not in the first call of partial fit is seen
# It should raise Value Error
y1 = [5] + y[7:-1]
msg = r"Mini-batch contains \[.+\] while classes must be subset of \[.+\]"
with pytest.raises(ValueError, match=msg):
ovr.partial_fit(X=X[7:], y=y1)
|
def test_ovr_partial_fit_exceptions():
ovr = OneVsRestClassifier(MultinomialNB())
X = np.abs(np.random.randn(14, 2))
y = [1, 1, 1, 1, 2, 3, 3, 0, 0, 2, 3, 1, 2, 3]
ovr.partial_fit(X[:7], y[:7], np.unique(y))
# If a new class that was not in the first call of partial fit is seen
# it should raise ValueError
y1 = [5] + y[7:-1]
msg = r"Mini-batch contains \[.+\] while classes must be subset of \[.+\]"
with pytest.raises(ValueError, match=msg):
ovr.partial_fit(X=X[7:], y=y1)
|
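For contrast with the error path exercised by the test, the supported workflow is to declare every class on the first partial_fit call; later mini-batches may then contain any subset of those classes. A short sketch with the same arbitrary toy data:
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB

X = np.abs(np.random.randn(14, 2))
y = np.array([1, 1, 1, 1, 2, 3, 3, 0, 0, 2, 3, 1, 2, 3])

ovr = OneVsRestClassifier(MultinomialNB())
ovr.partial_fit(X[:7], y[:7], classes=np.unique(y))  # all classes declared up front
ovr.partial_fit(X[7:], y[7:])                        # later batch contains only a subset
print(ovr.predict(X[:3]))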
25,995 |
def load_arguments(self, _):
# Model imports
StorageAccountTypes = self.get_models('StorageAccountTypes')
DiskStorageAccountTypes = self.get_models('DiskStorageAccountTypes', operation_group='disks')
SnapshotStorageAccountTypes = self.get_models('SnapshotStorageAccountTypes', operation_group='snapshots')
UpgradeMode, CachingTypes, OperatingSystemTypes = self.get_models('UpgradeMode', 'CachingTypes', 'OperatingSystemTypes')
HyperVGenerationTypes, HyperVGeneration = self.get_models('HyperVGenerationTypes', 'HyperVGeneration')
DedicatedHostLicenseTypes = self.get_models('DedicatedHostLicenseTypes')
OrchestrationServiceNames, OrchestrationServiceStateAction = self.get_models('OrchestrationServiceNames', 'OrchestrationServiceStateAction', operation_group='virtual_machine_scale_sets')
RebootSetting, VMGuestPatchClassificationWindows, VMGuestPatchClassificationLinux = self.get_models('VMGuestPatchRebootSetting', 'VMGuestPatchClassificationWindows', 'VMGuestPatchClassificationLinux')
GallerySharingPermissionTypes = self.get_models('GallerySharingPermissionTypes', operation_group='shared_galleries')
ReplicationMode = self.get_models('ReplicationMode', operation_group='gallery_image_versions')
# REUSABLE ARGUMENT DEFINITIONS
name_arg_type = CLIArgumentType(options_list=['--name', '-n'], metavar='NAME')
multi_ids_type = CLIArgumentType(nargs='+')
existing_vm_name = CLIArgumentType(overrides=name_arg_type,
configured_default='vm',
help="The name of the Virtual Machine. You can configure the default using `az configure --defaults vm=<name>`",
completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines'), id_part='name')
existing_disk_name = CLIArgumentType(overrides=name_arg_type, help='The name of the managed disk', completer=get_resource_name_completion_list('Microsoft.Compute/disks'), id_part='name')
existing_snapshot_name = CLIArgumentType(overrides=name_arg_type, help='The name of the snapshot', completer=get_resource_name_completion_list('Microsoft.Compute/snapshots'), id_part='name')
vmss_name_type = CLIArgumentType(name_arg_type,
configured_default='vmss',
completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'),
help="Scale set name. You can configure the default using `az configure --defaults vmss=<name>`",
id_part='name')
extension_instance_name_type = CLIArgumentType(help="Name of extension instance, which can be customized. Default: name of the extension.")
image_template_name_type = CLIArgumentType(overrides=name_arg_type, id_part='name')
disk_encryption_set_name = CLIArgumentType(overrides=name_arg_type, help='Name of disk encryption set.', id_part='name')
ephemeral_placement_type = CLIArgumentType(options_list=['--ephemeral-os-disk-placement', '--ephemeral-placement'], arg_type=get_enum_type(['ResourceDisk', 'CacheDisk']), min_api='2019-12-01')
license_type = CLIArgumentType(
help="Specifies that the Windows image or disk was licensed on-premises. To enable Azure Hybrid Benefit for "
"Windows Server, use 'Windows_Server'. To enable Multi-tenant Hosting Rights for Windows 10, "
"use 'Windows_Client'. For more information see the Azure Windows VM online docs.",
arg_type=get_enum_type(['Windows_Server', 'Windows_Client', 'RHEL_BYOS', 'SLES_BYOS', 'RHEL_BASE',
'RHEL_SAPAPPS', 'RHEL_SAPHA', 'RHEL_EUS', 'SLES_BASE', 'SLES_SAP', 'SLES_HPC', 'None',
'RHEL_ELS_6']))
# StorageAccountTypes renamed to DiskStorageAccountTypes in 2018_06_01 of azure-mgmt-compute
DiskStorageAccountTypes = DiskStorageAccountTypes or StorageAccountTypes
if DiskStorageAccountTypes:
disk_sku = CLIArgumentType(arg_type=get_enum_type(DiskStorageAccountTypes))
else:
# StorageAccountTypes introduced in api version 2016_04_30_preview of Resource.MGMT.Compute package..
# However, 2017-03-09-profile targets version 2016-03-30 of compute package.
disk_sku = CLIArgumentType(arg_type=get_enum_type(['Premium_LRS', 'Standard_LRS']))
if SnapshotStorageAccountTypes:
snapshot_sku = CLIArgumentType(arg_type=get_enum_type(SnapshotStorageAccountTypes))
else:
# SnapshotStorageAccountTypes introduced in api version 2018_04_01 of Resource.MGMT.Compute package..
# However, 2017-03-09-profile targets version 2016-03-30 of compute package.
snapshot_sku = CLIArgumentType(arg_type=get_enum_type(['Premium_LRS', 'Standard_LRS']))
# special case for `network nic scale-set list` command alias
with self.argument_context('network nic scale-set list') as c:
c.argument('virtual_machine_scale_set_name', options_list=['--vmss-name'], completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'), id_part='name')
HyperVGenerationTypes = HyperVGenerationTypes or HyperVGeneration
if HyperVGenerationTypes:
hyper_v_gen_sku = CLIArgumentType(arg_type=get_enum_type(HyperVGenerationTypes, default="V1"))
else:
hyper_v_gen_sku = CLIArgumentType(arg_type=get_enum_type(["V1", "V2"], default="V1"))
ultra_ssd_enabled_type = CLIArgumentType(
arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Enables or disables the capability to have 1 or more managed data disks with UltraSSD_LRS storage account')
scale_in_policy_type = CLIArgumentType(
nargs='+', arg_type=get_enum_type(self.get_models('VirtualMachineScaleSetScaleInRules')),
help='Specify the scale-in policy (space delimited) that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled-in.'
)
edge_zone_type = CLIArgumentType(
help='The name of edge zone.',
min_api='2020-12-01',
is_preview=True
)
t_shared_to = self.get_models('SharedToValues', operation_group='shared_galleries')
shared_to_type = CLIArgumentType(
arg_type=get_enum_type(t_shared_to),
help='The query parameter to decide what shared galleries to fetch when doing listing operations. '
'If not specified, list by subscription id.'
)
marker_type = CLIArgumentType(
help='A string value that identifies the portion of the list of containers to be '
'returned with the next listing operation. The operation returns the NextMarker value within '
'the response body if the listing operation did not return all containers remaining to be listed '
'with the current page. If specified, this generator will begin returning results from the point '
'where the previous generator stopped.')
# region MixedScopes
for scope in ['vm', 'disk', 'snapshot', 'image', 'sig']:
with self.argument_context(scope) as c:
c.argument('tags', tags_type)
for scope in ['disk', 'snapshot']:
with self.argument_context(scope) as c:
c.ignore('source_blob_uri', 'source_disk', 'source_snapshot')
c.argument('source_storage_account_id', help='used when source blob is in a different subscription')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
c.argument('duration_in_seconds', help='Time duration in seconds until the SAS access expires', type=int)
if self.supported_api_version(min_api='2018-09-30', operation_group='disks'):
c.argument('access_level', arg_type=get_enum_type(['Read', 'Write']), default='Read', help='access level')
c.argument('for_upload', arg_type=get_three_state_flag(),
help='Create the {0} for uploading blobs later on through storage commands. Run "az {0} grant-access --access-level Write" to retrieve the {0}\'s SAS token.'.format(scope))
c.argument('hyper_v_generation', arg_type=hyper_v_gen_sku, help='The hypervisor generation of the Virtual Machine. Applicable to OS disks only.')
else:
c.ignore('access_level', 'for_upload', 'hyper_v_generation')
c.argument('encryption_type', min_api='2019-07-01', arg_type=get_enum_type(self.get_models('EncryptionType')),
help='Encryption type. EncryptionAtRestWithPlatformKey: Disk is encrypted with XStore managed key at rest. It is the default encryption type. EncryptionAtRestWithCustomerKey: Disk is encrypted with Customer managed key at rest.')
c.argument('disk_encryption_set', min_api='2019-07-01', help='Name or ID of disk encryption set that is used to encrypt the disk.')
c.argument('location', help='Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`. If location is not specified and no default location specified, location will be automatically set as same as the resource group.')
operation_group = 'disks' if scope == 'disk' else 'snapshots'
c.argument('network_access_policy', min_api='2020-05-01', help='Policy for accessing the disk via network.', arg_type=get_enum_type(self.get_models('NetworkAccessPolicy', operation_group=operation_group)))
c.argument('disk_access', min_api='2020-05-01', help='Name or ID of the disk access resource for using private endpoints on disks.')
c.argument('enable_bursting', arg_type=get_three_state_flag(), help='Enable bursting beyond the provisioned performance target of the disk. Bursting is disabled by default, and it does not apply to Ultra disks.')
c.argument('public_network_access', arg_type=get_enum_type(['Disabled', 'Enabled']), min_api='2021-04-01', is_preview=True, help='Customers can set on Managed Disks or Snapshots to control the export policy on the disk.')
c.argument('accelerated_network', arg_type=get_three_state_flag(), min_api='2021-04-01', is_preview=True, help='Customers can set on Managed Disks or Snapshots to enable the accelerated networking if the OS disk image support.')
for scope in ['disk create', 'snapshot create']:
with self.argument_context(scope) as c:
c.argument('source', help='source to create the disk/snapshot from, including unmanaged blob uri, managed disk id or name, or snapshot id or name')
# endregion
# region Disks
with self.argument_context('disk') as c:
c.argument('zone', zone_type, min_api='2017-03-30', options_list=['--zone']) # TODO: --size-gb currently has claimed -z. We can do a breaking change later if we want to.
c.argument('disk_name', existing_disk_name, completer=get_resource_name_completion_list('Microsoft.Compute/disks'))
c.argument('name', arg_type=name_arg_type)
c.argument('sku', arg_type=disk_sku, help='Underlying storage SKU')
c.argument('os_type', arg_type=get_enum_type(OperatingSystemTypes), help='The Operating System type of the Disk.')
c.argument('disk_iops_read_write', type=int, min_api='2018-06-01', help='The number of IOPS allowed for this disk. Only settable for UltraSSD disks. One operation can transfer between 4k and 256k bytes')
c.argument('disk_mbps_read_write', type=int, min_api='2018-06-01', help="The bandwidth allowed for this disk. Only settable for UltraSSD disks. MBps means millions of bytes per second with ISO notation of powers of 10")
c.argument('upload_size_bytes', type=int, min_api='2019-03-01',
help='The size (in bytes) of the contents of the upload including the VHD footer. Min value: 20972032. Max value: 35183298347520')
c.argument('max_shares', type=int, help='The maximum number of VMs that can attach to the disk at the same time. Value greater than one indicates a disk that can be mounted on multiple VMs at the same time')
c.argument('disk_iops_read_only', type=int, help='The total number of IOPS that will be allowed across all VMs mounting the shared disk as ReadOnly. One operation can transfer between 4k and 256k bytes')
c.argument('disk_mbps_read_only', type=int, help='The total throughput (MBps) that will be allowed across all VMs mounting the shared disk as ReadOnly. MBps means millions of bytes per second - MB here uses the ISO notation, of powers of 10')
c.argument('image_reference', help='ID or URN (publisher:offer:sku:version) of the image from which to create a disk')
c.argument('image_reference_lun', type=int, help='If the disk is created from an image\'s data disk, this is an index that indicates which of the data disks in the image to use. For OS disks, this field is null')
c.argument('gallery_image_reference', help='ID of the Compute Gallery image version from which to create a disk')
c.argument('gallery_image_reference_lun', type=int, help='If the disk is created from an image\'s data disk, this is an index that indicates which of the data disks in the image to use. For OS disks, this field is null')
c.argument('logical_sector_size', type=int, help='Logical sector size in bytes for Ultra disks. Supported values are 512 and 4096. 4096 is the default.')
c.argument('tier', help='Performance tier of the disk (e.g, P4, S10) as described here: https://azure.microsoft.com/pricing/details/managed-disks/. Does not apply to Ultra disks.')
c.argument('edge_zone', edge_zone_type)
c.argument('security_type', choices=['TrustedLaunch'], help='The security type of the VM. Applicable for OS disks only.', min_api='2020-12-01')
c.argument('support_hibernation', arg_type=get_three_state_flag(), help='Indicate the OS on a disk supports hibernation.', min_api='2020-12-01')
# endregion
# region Snapshots
with self.argument_context('snapshot', resource_type=ResourceType.MGMT_COMPUTE, operation_group='snapshots') as c:
c.argument('snapshot_name', existing_snapshot_name, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/snapshots'))
c.argument('name', arg_type=name_arg_type)
c.argument('sku', arg_type=snapshot_sku)
c.argument('incremental', arg_type=get_three_state_flag(), min_api='2019-03-01',
help='Whether a snapshot is incremental. Incremental snapshots on the same disk occupy less space than full snapshots and can be diffed')
c.argument('edge_zone', edge_zone_type)
c.argument('copy_start', arg_type=get_three_state_flag(), min_api='2021-04-01',
help='Create snapshot by using a deep copy process, where the resource creation is considered complete only after all data has been copied from the source.')
# endregion
# region Images
with self.argument_context('image') as c:
c.argument('os_type', arg_type=get_enum_type(['Windows', 'Linux']))
c.argument('image_name', arg_type=name_arg_type, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/images'))
c.argument('tags', tags_type)
with self.argument_context('image create') as c:
# here we collapse all the different image sources under 2 common arguments: --os-disk-source and --data-disk-sources
c.argument('name', arg_type=name_arg_type, help='new image name')
c.argument('source', help='OS disk source from the same region, including a virtual machine ID or name, OS disk blob URI, managed OS disk ID or name, or OS snapshot ID or name')
c.argument('data_disk_sources', nargs='+', help='Space-separated list of data disk sources, including unmanaged blob URI, managed disk ID or name, or snapshot ID or name')
c.argument('zone_resilient', min_api='2017-12-01', arg_type=get_three_state_flag(), help='Specifies whether an image is zone resilient or not. '
'Default is false. Zone resilient images can be created only in regions that provide Zone Redundant Storage')
c.argument('storage_sku', arg_type=disk_sku, help='The SKU of the storage account with which to create the VM image. Unused if source VM is specified.')
c.argument('os_disk_caching', arg_type=get_enum_type(CachingTypes), help="Storage caching type for the image's OS disk.")
c.argument('data_disk_caching', arg_type=get_enum_type(CachingTypes),
help="Storage caching type for the image's data disk.")
c.argument('hyper_v_generation', arg_type=hyper_v_gen_sku, min_api="2019-03-01", help='The hypervisor generation of the Virtual Machine created from the image.')
c.ignore('source_virtual_machine', 'os_blob_uri', 'os_disk', 'os_snapshot', 'data_blob_uris', 'data_disks', 'data_snapshots')
c.argument('edge_zone', edge_zone_type, )
# endregion
# region Image Templates
with self.argument_context('image builder') as c:
ib_output_name_help = "Name of the image builder run output."
c.argument('location', get_location_type(self.cli_ctx))
c.argument('scripts', nargs='+', help="Space-separated list of shell or powershell scripts to customize the image with. Each script must be a publicly accessible URL."
" Infers type of script from file extension ('.sh' or'.ps1') or from source type. More more customizer options and flexibility, see: 'az image template customizer add'")
c.argument('source', options_list=["--image-source", "-i"], help="The base image to customize. Must be a valid platform image URN, platform image alias, Red Hat ISO image URI, managed image name/ID, or shared image version ID.")
c.argument('image_template_name', image_template_name_type, help="The name of the image template.")
c.argument('checksum', help="The SHA256 checksum of the Red Hat ISO image")
c.argument('managed_image_destinations', nargs='+', help='Managed image output distributor information. Space-separated list of key-value pairs. E.g "image_1=westus2 image_2=westus". Each key is the name or resource ID of the managed image to be created. Each value is the location of the image.')
c.argument('shared_image_destinations', nargs='+', help='Shared image gallery (sig) output distributor information. Space-separated list of key-value pairs. E.g "my_gallery_1/image_def_1=eastus,westus my_gallery_2/image_def_2=uksouth,canadaeast,francesouth." '
'Each key is the sig image definition ID or sig gallery name and sig image definition delimited by a "/". Each value is a comma-delimited list of replica locations.')
c.argument('output_name', help=ib_output_name_help)
c.ignore('destinations_lists', 'scripts_list', 'source_dict')
with self.argument_context('image builder create') as c:
ib_source_type = CLIArgumentType(arg_group="Image Source")
ib_customizer_type = CLIArgumentType(arg_group="Customizer")
ib_cutput_type = CLIArgumentType(arg_group="Output")
c.argument('build_timeout', type=int, help="The maximum duration to wait while building the image template, in minutes. Default is 60.")
c.argument('image_template', help='Local path or URL to an image template file. When using --image-template, all other parameters are ignored except -g and -n. Reference: https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json')
c.argument('identity', nargs='+', help='List of user assigned identities (name or ID, space delimited) of the image template.')
# VM profile
c.argument('vm_size', help='Size of the virtual machine used to build, customize and capture images. Omit or specify empty string to use the default (Standard_D1_v2)')
c.argument('os_disk_size', type=int, help='Size of the OS disk in GB. Omit or specify 0 to use Azure\'s default OS disk size')
c.argument('vnet', help='Name of VNET to deploy the build virtual machine. You should only specify it when subnet is a name')
c.argument('subnet', help='Name or ID of subnet to deploy the build virtual machine')
# Image Source Arguments
c.argument('source', arg_type=ib_source_type)
c.argument('checksum', arg_type=ib_source_type)
c.argument('', arg_type=ib_source_type)
# Image Customizer Arguments
c.argument('scripts', arg_type=ib_customizer_type)
c.argument('', arg_type=ib_customizer_type)
c.argument('', arg_type=ib_customizer_type)
# Image Output Arguments
c.argument('managed_image_destinations', arg_type=ib_cutput_type)
c.argument('shared_image_destinations', arg_type=ib_cutput_type)
c.argument('output_name', arg_type=ib_cutput_type)
with self.argument_context('image builder output') as c:
ib_sig_regions_help = "Space-separated list of regions to replicate the image version into."
ib_img_location_help = "Location where the customized image will be created."
c.argument('gallery_image_definition', arg_group="Shared Image Gallery", help="Name or ID of the existing SIG image definition to create the customized image version with.")
c.argument('gallery_name', arg_group="Shared Image Gallery", help="Shared image gallery name, if image definition name and not ID was provided.")
c.argument('gallery_replication_regions', arg_group="Shared Image Gallery", nargs='+', help=ib_sig_regions_help)
c.argument('managed_image', arg_group="Managed Image", help="Name or ID of the customized managed image to be created.")
c.argument('managed_image_location', arg_group="Managed Image", help=ib_img_location_help)
with self.argument_context('image builder output add') as c:
ib_artifact_tags_help = "Tags that will be applied to the output artifact once it has been created by the distributor. " + tags_type.settings['help']
ib_artifact_tags_type = CLIArgumentType(overrides=tags_type, help=ib_artifact_tags_help, options_list=["--artifact-tags"])
ib_default_loc_help = " Defaults to resource group's location."
c.argument('output_name', help=ib_output_name_help + " Defaults to the name of the managed image or sig image definition.")
c.argument('gallery_replication_regions', arg_group="Shared Image Gallery", nargs='+', help=ib_sig_regions_help + ib_default_loc_help)
c.argument('managed_image_location', arg_group="Managed Image", help=ib_img_location_help + ib_default_loc_help)
c.argument('is_vhd', arg_group="VHD", help="The output is a VHD distributor.", action='store_true')
c.argument('tags', arg_type=ib_artifact_tags_type)
c.ignore('location')
with self.argument_context('image builder customizer') as c:
ib_win_restart_type = CLIArgumentType(arg_group="Windows Restart")
ib_win_update_type = CLIArgumentType(arg_group="Windows Update")
ib_script_type = CLIArgumentType(arg_group="Shell and Powershell")
ib_powershell_type = CLIArgumentType(arg_group="Powershell")
ib_file_customizer_type = CLIArgumentType(arg_group="File")
c.argument('customizer_name', help="Name of the customizer.")
c.argument('customizer_type', options_list=['--type', '-t'], help="Type of customizer to be added to the image template.", arg_type=get_enum_type(ScriptType))
# Script Args
c.argument('script_url', arg_type=ib_script_type, help="URL of script to customize the image with. The URL must be publicly accessible.")
c.argument('inline_script', arg_type=ib_script_type, nargs='+', help="Space-separated list of inline script lines to customize the image with.")
# Powershell Specific Args
c.argument('valid_exit_codes', options_list=['--exit-codes', '-e'], arg_type=ib_powershell_type, nargs='+', help="Space-separated list of valid exit codes, as integers")
# Windows Restart Specific Args
c.argument('restart_command', arg_type=ib_win_restart_type, help="Command to execute the restart operation.")
c.argument('restart_check_command', arg_type=ib_win_restart_type, help="Command to verify that restart succeeded.")
c.argument('restart_timeout', arg_type=ib_win_restart_type, help="Restart timeout specified as a string consisting of a magnitude and unit, e.g. '5m' (5 minutes) or '2h' (2 hours)", default="5m")
# Windows Update Specific Args
c.argument('search_criteria', arg_type=ib_win_update_type, help='Criteria to search updates. Omit or specify empty string to use the default (search all). Refer to above link for examples and detailed description of this field.')
c.argument('filters', arg_type=ib_win_update_type, nargs='+', help='Space delimited filters to select updates to apply. Omit or specify empty array to use the default (no filter)')
c.argument('update_limit', arg_type=ib_win_update_type, help='Maximum number of updates to apply at a time. Omit or specify 0 to use the default (1000)')
# File Args
c.argument('file_source', arg_type=ib_file_customizer_type, help="The URI of the file to be downloaded into the image. It can be a github link, SAS URI for Azure Storage, etc.")
c.argument('dest_path', arg_type=ib_file_customizer_type, help="The absolute destination path where the file specified in --file-source will be downloaded to in the image")
# endregion
# region AvailabilitySets
with self.argument_context('vm availability-set') as c:
c.argument('availability_set_name', name_arg_type, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/availabilitySets'), help='Name of the availability set')
with self.argument_context('vm availability-set create') as c:
c.argument('availability_set_name', name_arg_type, validator=get_default_location_from_resource_group, help='Name of the availability set')
c.argument('platform_update_domain_count', type=int, help='Update Domain count. If unspecified, the server will pick the most optimal number like 5.')
c.argument('platform_fault_domain_count', type=int, help='Fault Domain count.')
c.argument('validate', help='Generate and validate the ARM template without creating any resources.', action='store_true')
c.argument('unmanaged', action='store_true', min_api='2016-04-30-preview', help='contained VMs should use unmanaged disks')
with self.argument_context('vm availability-set update') as c:
if self.supported_api_version(max_api='2016-04-30-preview', operation_group='virtual_machines'):
c.argument('name', name_arg_type, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/availabilitySets'), help='Name of the availability set')
c.argument('availability_set_name', options_list=['--availability-set-name'])
# endregion
# region VirtualMachines
with self.argument_context('vm') as c:
c.argument('vm_name', existing_vm_name)
c.argument('size', completer=get_vm_size_completion_list)
c.argument('name', arg_type=name_arg_type)
c.argument('zone', zone_type, min_api='2017-03-30')
c.argument('caching', help='Disk caching policy', arg_type=get_enum_type(CachingTypes))
c.argument('nsg', help='The name to use when creating a new Network Security Group (default) or referencing an existing one. Can also reference an existing NSG by ID or specify "" for none.', arg_group='Network')
c.argument('nsg_rule', help='NSG rule to create when creating a new NSG. Defaults to open ports for allowing RDP on Windows and allowing SSH on Linux.', arg_group='Network', arg_type=get_enum_type(['RDP', 'SSH']))
c.argument('application_security_groups', min_api='2017-09-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network')
c.argument('workspace', is_preview=True, arg_group='Monitor', help='Name or ID of Log Analytics Workspace. If you specify the workspace through its name, the workspace should be in the same resource group with the vm, otherwise a new workspace will be created.')
with self.argument_context('vm capture') as c:
c.argument('overwrite', action='store_true')
with self.argument_context('vm update') as c:
c.argument('os_disk', min_api='2017-12-01', help="Managed OS disk ID or name to swap to")
c.argument('write_accelerator', nargs='*', min_api='2017-12-01',
help="enable/disable disk write accelerator. Use singular value 'true/false' to apply across, or specify individual disks, e.g.'os=true 1=true 2=true' for os disk and data disks with lun of 1 & 2")
c.argument('disk_caching', nargs='*', help="Use singular value to apply across, or specify individual disks, e.g. 'os=ReadWrite 0=None 1=ReadOnly' should enable update os disk and 2 data disks")
c.argument('ultra_ssd_enabled', ultra_ssd_enabled_type)
c.argument('enable_secure_boot', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable secure boot.')
c.argument('enable_vtpm', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable vTPM.')
c.argument('size', help='The new size of the virtual machine. See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.', is_preview=True)
c.argument('ephemeral_os_disk_placement', arg_type=ephemeral_placement_type,
help='Only applicable when used with `--size`. Allows you to choose the Ephemeral OS disk provisioning location.', is_preview=True)
c.argument('enable_hibernation', arg_type=get_three_state_flag(), min_api='2021-03-01', help='The flag that enable or disable hibernation capability on the VM.')
c.argument('v_cpus_available', type=int, min_api='2021-07-01', help='Specify the number of vCPUs available for the VM')
c.argument('v_cpus_per_core', type=int, min_api='2021-07-01', help='Specify the vCPU to physical core ratio. Setting this property to 1 also means that hyper-threading is disabled.')
with self.argument_context('vm create') as c:
c.argument('name', name_arg_type, validator=_resource_not_exists(self.cli_ctx, 'Microsoft.Compute/virtualMachines'))
c.argument('vm_name', name_arg_type, id_part=None, help='Name of the virtual machine.', completer=None)
c.argument('os_disk_size_gb', type=int, help='the size of the os disk in GB', arg_group='Storage')
c.argument('availability_set', help='Name or ID of an existing availability set to add the VM to. None by default.')
c.argument('vmss', help='Name or ID of an existing virtual machine scale set that the virtual machine should be assigned to. None by default.')
c.argument('nsg', help='The name to use when creating a new Network Security Group (default) or referencing an existing one. Can also reference an existing NSG by ID or specify "" for none (\'""\' in Azure CLI using PowerShell or --% operator).', arg_group='Network')
c.argument('nsg_rule', help='NSG rule to create when creating a new NSG. Defaults to open ports for allowing RDP on Windows and allowing SSH on Linux. NONE represents no NSG rule', arg_group='Network', arg_type=get_enum_type(['RDP', 'SSH', 'NONE']))
c.argument('application_security_groups', resource_type=ResourceType.MGMT_NETWORK, min_api='2017-09-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network', validator=validate_asg_names_or_ids)
c.argument('boot_diagnostics_storage',
help='pre-existing storage account name or its blob uri to capture boot diagnostics. Its sku should be one of Standard_GRS, Standard_LRS and Standard_RAGRS')
c.argument('accelerated_networking', resource_type=ResourceType.MGMT_NETWORK, min_api='2016-09-01', arg_type=get_three_state_flag(), arg_group='Network',
help="enable accelerated networking. Unless specified, CLI will enable it based on machine image and size")
if self.supported_api_version(min_api='2019-03-01', resource_type=ResourceType.MGMT_COMPUTE):
VirtualMachineEvictionPolicyTypes = self.get_models('VirtualMachineEvictionPolicyTypes', resource_type=ResourceType.MGMT_COMPUTE)
c.argument('eviction_policy', resource_type=ResourceType.MGMT_COMPUTE, min_api='2019-03-01',
arg_type=get_enum_type(VirtualMachineEvictionPolicyTypes, default=None),
help="The eviction policy for the Spot priority virtual machine. Default eviction policy is Deallocate for a Spot priority virtual machine")
c.argument('enable_agent', arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Indicates whether virtual machine agent should be provisioned on the virtual machine. When this property is not specified, default behavior is to set it to true. This will ensure that VM Agent is installed on the VM so that extensions can be added to the VM later')
c.argument('enable_auto_update', arg_type=get_three_state_flag(), min_api='2020-06-01',
help='Indicate whether Automatic Updates is enabled for the Windows virtual machine')
c.argument('patch_mode', arg_type=get_enum_type(['AutomaticByOS', 'AutomaticByPlatform', 'Manual', 'ImageDefault']), min_api='2020-12-01',
help='Mode of in-guest patching to IaaS virtual machine. Allowed values for Windows VM: AutomaticByOS, AutomaticByPlatform, Manual. Allowed values for Linux VM: AutomaticByPlatform, ImageDefault. Manual - You control the application of patches to a virtual machine. You do this by applying patches manually inside the VM. In this mode, automatic updates are disabled; the parameter --enable-auto-update must be false. AutomaticByOS - The virtual machine will automatically be updated by the OS. The parameter --enable-auto-update must be true. AutomaticByPlatform - The virtual machine will automatically be updated by the platform. ImageDefault - The virtual machine\'s default patching configuration is used. The parameters --enable-agent and --enable-auto-update must be true')
c.argument('ssh_key_name', help='Use it as public key in virtual machine. It should be an existing SSH key resource in Azure.')
c.argument('enable_hotpatching', arg_type=get_three_state_flag(), help='Patch VMs without requiring a reboot. --enable-agent must be set and --patch-mode must be set to AutomaticByPlatform', min_api='2020-12-01')
c.argument('platform_fault_domain', min_api='2020-06-01',
help='Specify the scale set logical fault domain into which the virtual machine will be created. By default, the virtual machine will be automatically assigned to a fault domain that best maintains balance across available fault domains. This is applicable only if the virtualMachineScaleSet property of this virtual machine is set. The virtual machine scale set that is referenced, must have platform fault domain count. This property cannot be updated once the virtual machine is created. Fault domain assignment can be viewed in the virtual machine instance view')
c.argument('count', type=int, is_preview=True,
help='Number of virtual machines to create. Value range is [2, 250], inclusive. Don\'t specify this parameter if you want to create a normal single VM. The VMs are created in parallel. The output of this command is an array of VMs instead of one single VM. Each VM has its own public IP, NIC. VNET and NSG are shared. It is recommended that no existing public IP, NIC, VNET and NSG are in resource group. When --count is specified, --attach-data-disks, --attach-os-disk, --boot-diagnostics-storage, --computer-name, --host, --host-group, --nics, --os-disk-name, --private-ip-address, --public-ip-address, --public-ip-address-dns-name, --storage-account, --storage-container-name, --subnet, --use-unmanaged-disk, --vnet-name are not allowed.')
c.argument('security_type', arg_type=get_enum_type(['TrustedLaunch']), min_api='2020-12-01',
help='Specify if the VM is Trusted Launch enabled. See https://docs.microsoft.com/azure/virtual-machines/trusted-launch.')
c.argument('enable_secure_boot', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable secure boot. It is part of trusted launch.')
c.argument('enable_vtpm', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable vTPM. It is part of trusted launch.')
c.argument('user_data', help='UserData for the VM. It can be passed in as file or string.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
c.argument('enable_hibernation', arg_type=get_three_state_flag(), min_api='2021-03-01', help='The flag that enable or disable hibernation capability on the VM.')
c.argument('v_cpus_available', type=int, min_api='2021-07-01', help='Specify the number of vCPUs available for the VM')
c.argument('v_cpus_per_core', type=int, min_api='2021-07-01', help='Specify the vCPU to physical core ratio. Setting this property to 1 also means that hyper-threading is disabled.')
with self.argument_context('vm create', arg_group='Storage') as c:
c.argument('attach_os_disk', help='Attach an existing OS disk to the VM. Can use the name or ID of a managed disk or the URI to an unmanaged disk VHD.')
c.argument('attach_data_disks', nargs='+', help='Attach existing data disks to the VM. Can use the name or ID of a managed disk or the URI to an unmanaged disk VHD.')
with self.argument_context('vm create', arg_group='Dedicated Host', min_api='2019-03-01') as c:
c.argument('dedicated_host_group', options_list=['--host-group'], is_preview=True, help="Name or ID of the dedicated host group that the VM will reside in. --host and --host-group can't be used together.")
c.argument('dedicated_host', options_list=['--host'], is_preview=True, help="ID of the dedicated host that the VM will reside in. --host and --host-group can't be used together.")
with self.argument_context('vm update', arg_group='Dedicated Host', min_api='2019-03-01') as c:
c.argument('dedicated_host_group', options_list=['--host-group'], is_preview=True, help="Name or ID of the dedicated host group that the VM will reside in. --host and --host-group can't be used together. You should deallocate the VM before update, and start the VM after update. Please check out help for more examples.")
c.argument('dedicated_host', options_list=['--host'], is_preview=True, help="ID of the dedicated host that the VM will reside in. --host and --host-group can't be used together. You should deallocate the VM before update, and start the VM after update. Please check out help for more examples.")
with self.argument_context('vm open-port') as c:
c.argument('vm_name', name_arg_type, help='The name of the virtual machine to open inbound traffic on.')
c.argument('network_security_group_name', options_list=('--nsg-name',), help='The name of the network security group to create if one does not exist. Ignored if an NSG already exists.', validator=validate_nsg_name)
c.argument('apply_to_subnet', help='Allow inbound traffic on the subnet instead of the NIC', action='store_true')
c.argument('port', help="The port or port range (ex: 80-100) to open inbound traffic to. Use '*' to allow traffic to all ports. Use comma separated values to specify more than one port or port range.")
c.argument('priority', help='Rule priority, between 100 (highest priority) and 4096 (lowest priority). Must be unique for each rule in the collection.', type=int)
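# Illustrative usage of `vm open-port` (names are placeholders); --port accepts single values,
# ranges, or comma-separated combinations, and --priority must fall between 100 and 4096.
#   az vm open-port -g MyRg -n MyVm --port 80,443,8000-8999 --priority 900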
for scope in ['vm show', 'vm list']:
with self.argument_context(scope) as c:
c.argument('show_details', action='store_true', options_list=['--show-details', '-d'], help='Show public IP address, FQDN, and power states. The command will run slowly.')
for scope in ['vm show', 'vmss show']:
with self.argument_context(scope) as c:
c.argument('include_user_data', action='store_true', options_list=['--include-user-data', '-u'], help='Include the user data properties in the query result.', min_api='2021-03-01')
for scope in ['vm get-instance-view', 'vm wait', 'vmss wait']:
with self.argument_context(scope) as c:
c.ignore('include_user_data')
with self.argument_context('vm diagnostics') as c:
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'])
with self.argument_context('vm diagnostics set') as c:
c.argument('storage_account', completer=get_resource_name_completion_list('Microsoft.Storage/storageAccounts'))
with self.argument_context('vm install-patches') as c:
c.argument('maximum_duration', type=str, help='Specify the maximum amount of time that the operation will run. It must be an ISO 8601-compliant duration string such as PT4H (4 hours)')
c.argument('reboot_setting', arg_type=get_enum_type(RebootSetting), help='Define when it is acceptable to reboot a VM during a software update operation.')
c.argument('classifications_to_include_win', nargs='+', arg_type=get_enum_type(VMGuestPatchClassificationWindows), help='Space-separated list of classifications to include for Windows VM.')
c.argument('classifications_to_include_linux', nargs='+', arg_type=get_enum_type(VMGuestPatchClassificationLinux), help='Space-separated list of classifications to include for Linux VM.')
c.argument('kb_numbers_to_include', nargs='+', help='Space-separated list of KBs to include in the patch operation. Applicable to Windows VM only')
c.argument('kb_numbers_to_exclude', nargs='+', help='Space-separated list of KBs to exclude in the patch operation. Applicable to Windows VM only')
c.argument('exclude_kbs_requiring_reboot', arg_type=get_three_state_flag(), help="Filter out KBs that don't have a reboot behavior of 'NeverReboots' when this is set. Applicable to Windows VM only")
c.argument('package_name_masks_to_include', nargs='+', help='Space-separated list of packages to include in the patch operation. Format: packageName_packageVersion. Applicable to Linux VM only')
c.argument('package_name_masks_to_exclude', nargs='+', help='Space-separated list of packages to exclude in the patch operation. Format: packageName_packageVersion. Applicable to Linux VM only')
with self.argument_context('vm disk') as c:
c.argument('vm_name', options_list=['--vm-name'], id_part=None, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines'))
c.argument('new', action='store_true', help='create a new disk')
c.argument('sku', arg_type=disk_sku, help='Underlying storage SKU')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
c.argument('lun', type=int, help='0-based logical unit number (LUN). Max value depends on the Virtual Machine size.')
with self.argument_context('vm disk attach') as c:
c.argument('enable_write_accelerator', min_api='2017-12-01', action='store_true', help='enable write accelerator')
c.argument('disk', options_list=['--name', '-n', c.deprecate(target='--disk', redirect='--name', hide=True)],
help="The name or ID of the managed disk", validator=validate_vm_disk, id_part='name',
completer=get_resource_name_completion_list('Microsoft.Compute/disks'))
with self.argument_context('vm disk detach') as c:
c.argument('disk_name', arg_type=name_arg_type, help='The data disk name.')
with self.argument_context('vm encryption enable') as c:
c.argument('encrypt_format_all', action='store_true', help='Encrypt-format data disks instead of encrypting them. Encrypt-formatting is a lot faster than in-place encryption but wipes out the partition getting encrypt-formatted.')
# Place aad arguments in their own group
aad_arguments = 'Azure Active Directory'
c.argument('aad_client_id', arg_group=aad_arguments)
c.argument('aad_client_secret', arg_group=aad_arguments)
c.argument('aad_client_cert_thumbprint', arg_group=aad_arguments)
with self.argument_context('vm extension') as c:
c.argument('vm_extension_name', name_arg_type, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines/extensions'), help='Name of the extension.', id_part='child_name_1')
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'], id_part='name')
c.argument('expand', deprecate_info=c.deprecate(expiration='3.0.0', hide=True))
with self.argument_context('vm extension list') as c:
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'], id_part=None)
with self.argument_context('vm secret') as c:
c.argument('secrets', multi_ids_type, options_list=['--secrets', '-s'], help='Space-separated list of key vault secret URIs. For example, produced by \'az keyvault secret list-versions --vault-name vaultname -n cert1 --query "[?attributes.enabled].id" -o tsv\'')
c.argument('keyvault', help='Name or ID of the key vault.', validator=validate_keyvault)
c.argument('certificate', help='key vault certificate name or its full secret URL')
c.argument('certificate_store', help='Windows certificate store names. Default: My')
with self.argument_context('vm secret list') as c:
c.argument('vm_name', arg_type=existing_vm_name, id_part=None)
with self.argument_context('vm image') as c:
c.argument('publisher_name', options_list=['--publisher', '-p'], help='image publisher')
c.argument('publisher', options_list=['--publisher', '-p'], help='image publisher')
c.argument('offer', options_list=['--offer', '-f'], help='image offer')
c.argument('plan', help='image billing plan')
c.argument('sku', options_list=['--sku', '-s'], help='image sku')
c.argument('version', help="image sku's version")
c.argument('urn', help="URN, in format of 'publisher:offer:sku:version' or 'publisher:offer:sku:edge_zone:version'. If specified, other argument values can be omitted")
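# Illustrative --urn values following the formats above (placeholders; use `az vm image list` to discover real URNs):
#   az vm image show --urn <publisher>:<offer>:<sku>:<version>
#   az vm image show --urn <publisher>:<offer>:<sku>:<edge_zone>:<version>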
with self.argument_context('vm image list') as c:
c.argument('image_location', get_location_type(self.cli_ctx))
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image list-offers') as c:
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image list-skus') as c:
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image list-publishers') as c:
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image show') as c:
c.argument('skus', options_list=['--sku', '-s'])
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image terms') as c:
c.argument('urn', help='URN, in the format of \'publisher:offer:sku:version\'. If specified, other argument values can be omitted')
c.argument('publisher', help='Image publisher')
c.argument('offer', help='Image offer')
c.argument('plan', help='Image billing plan')
with self.argument_context('vm nic') as c:
c.argument('vm_name', existing_vm_name, options_list=['--vm-name'], id_part=None)
c.argument('nics', nargs='+', help='Names or IDs of NICs.', validator=validate_vm_nics)
c.argument('primary_nic', help='Name or ID of the primary NIC. If missing, the first NIC in the list will be the primary.')
with self.argument_context('vm nic show') as c:
c.argument('nic', help='NIC name or ID.', validator=validate_vm_nic)
with self.argument_context('vm unmanaged-disk') as c:
c.argument('new', action='store_true', help='Create a new disk.')
c.argument('lun', type=int, help='0-based logical unit number (LUN). Max value depends on the Virtual Machine size.')
c.argument('vhd_uri', help="Virtual hard disk URI. For example: https://mystorage.blob.core.windows.net/vhds/d1.vhd")
with self.argument_context('vm unmanaged-disk attach') as c:
c.argument('disk_name', options_list=['--name', '-n'], help='The data disk name.')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
with self.argument_context('vm unmanaged-disk detach') as c:
c.argument('disk_name', options_list=['--name', '-n'], help='The data disk name.')
for scope in ['vm unmanaged-disk attach', 'vm unmanaged-disk detach']:
with self.argument_context(scope) as c:
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'], id_part=None)
with self.argument_context('vm unmanaged-disk list') as c:
c.argument('vm_name', options_list=['--vm-name', '--name', '-n'], arg_type=existing_vm_name, id_part=None)
with self.argument_context('vm user') as c:
c.argument('username', options_list=['--username', '-u'], help='The user name')
c.argument('password', options_list=['--password', '-p'], help='The user password')
with self.argument_context('vm list-skus') as c:
c.argument('size', options_list=['--size', '-s'], help="size name, partial name is accepted")
c.argument('zone', options_list=['--zone', '-z'], arg_type=get_three_state_flag(), help="show skus supporting availability zones")
c.argument('show_all', options_list=['--all'], arg_type=get_three_state_flag(),
help="show all information including vm sizes not available under the current subscription")
c.argument('resource_type', options_list=['--resource-type', '-r'], help='resource types e.g. "availabilitySets", "snapshots", "disks", etc')
with self.argument_context('vm restart') as c:
c.argument('force', action='store_true', help='Force the VM to restart by redeploying it. Use if the VM is unresponsive.')
with self.argument_context('vm host') as c:
c.argument('host_group_name', options_list=['--host-group'], id_part='name', help="Name of the Dedicated Host Group")
c.argument('host_name', name_arg_type, id_part='child_name_1', help="Name of the Dedicated Host")
c.ignore('expand')
with self.argument_context('vm host create') as c:
c.argument('platform_fault_domain', options_list=['--platform-fault-domain', '-d'], type=int,
help="Fault domain of the host within a group. Allowed values: 0, 1, 2")
c.argument('auto_replace_on_failure', options_list=['--auto-replace'], arg_type=get_three_state_flag(),
help="Replace the host automatically if a failure occurs")
c.argument('license_type', arg_type=get_enum_type(DedicatedHostLicenseTypes),
help="The software license type that will be applied to the VMs deployed on the dedicated host.")
c.argument('sku', help="SKU of the dedicated host. Available SKUs: https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/")
with self.argument_context('vm host list') as c:
c.argument('host_group_name', id_part=None)
with self.argument_context('vm host group') as c:
c.argument('host_group_name', name_arg_type, id_part='name', help="Name of the Dedicated Host Group")
c.argument('automatic_placement', arg_type=get_three_state_flag(), min_api='2020-06-01',
help='Specify whether virtual machines or virtual machine scale sets can be placed automatically '
'on the dedicated host group. Automatic placement means resources are allocated on dedicated '
'hosts that are chosen by Azure, under the dedicated host group. The value defaults to '
'false when not provided.')
with self.argument_context('vm host group create') as c:
c.argument('platform_fault_domain_count', options_list=["--platform-fault-domain-count", "-c"], type=int,
help="Number of fault domains that the host group can span.")
c.argument('zones', zone_type)
for scope in ["vm host", "vm host group"]:
with self.argument_context("{} create".format(scope)) as c:
location_type = get_location_type(self.cli_ctx)
custom_location_msg = " Otherwise, location will default to the resource group's location"
custom_location_type = CLIArgumentType(overrides=location_type,
help=location_type.settings["help"] + custom_location_msg)
c.argument('location', arg_type=custom_location_type)
# endregion
# region VMSS
scaleset_name_aliases = ['vm_scale_set_name', 'virtual_machine_scale_set_name', 'name']
with self.argument_context('vmss') as c:
c.argument('zones', zones_type, min_api='2017-03-30')
c.argument('instance_id', id_part='child_name_1')
c.argument('instance_ids', multi_ids_type, help='Space-separated list of IDs (ex: 1 2 3 ...) or * for all instances. If not provided, the action will be applied on the scaleset itself')
c.argument('tags', tags_type)
c.argument('caching', help='Disk caching policy', arg_type=get_enum_type(CachingTypes))
for dest in scaleset_name_aliases:
c.argument(dest, vmss_name_type)
c.argument('host_group', min_api='2020-06-01',
help='Name or ID of dedicated host group that the virtual machine scale set resides in')
for scope in ['vmss deallocate', 'vmss delete-instances', 'vmss restart', 'vmss start', 'vmss stop', 'vmss show', 'vmss update-instances', 'vmss simulate-eviction']:
with self.argument_context(scope) as c:
for dest in scaleset_name_aliases:
c.argument(dest, vmss_name_type, id_part=None) # due to instance-ids parameter
with self.argument_context('vmss create', operation_group='virtual_machine_scale_sets') as c:
VirtualMachineEvictionPolicyTypes = self.get_models('VirtualMachineEvictionPolicyTypes', resource_type=ResourceType.MGMT_COMPUTE)
c.argument('name', name_arg_type)
c.argument('nat_backend_port', default=None, help='Backend port to open with NAT rules. Defaults to 22 on Linux and 3389 on Windows.')
c.argument('single_placement_group', arg_type=get_three_state_flag(), help="Limit the scale set to a single placement group."
" See https://docs.microsoft.com/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups for details.")
c.argument('platform_fault_domain_count', type=int, help='Fault Domain count for each placement group in the availability zone', min_api='2017-12-01')
c.argument('vmss_name', name_arg_type, id_part=None, help='Name of the virtual machine scale set.')
c.argument('instance_count', help='Number of VMs in the scale set.', type=int)
c.argument('disable_overprovision', help='Overprovision option (see https://azure.microsoft.com/documentation/articles/virtual-machine-scale-sets-overview/ for details).', action='store_true')
c.argument('upgrade_policy_mode', help=None, arg_type=get_enum_type(UpgradeMode))
c.argument('health_probe', help='Probe name from the existing load balancer, mainly used for rolling upgrade or automatic repairs')
c.argument('vm_sku', help='Size of VMs in the scale set. Default to "Standard_DS1_v2". See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.')
c.argument('nsg', help='Name or ID of an existing Network Security Group.', arg_group='Network')
c.argument('eviction_policy', resource_type=ResourceType.MGMT_COMPUTE, min_api='2017-12-01', arg_type=get_enum_type(VirtualMachineEvictionPolicyTypes, default=None),
help="The eviction policy for virtual machines in a Spot priority scale set. Default eviction policy is Deallocate for a Spot priority scale set")
c.argument('application_security_groups', resource_type=ResourceType.MGMT_COMPUTE, min_api='2018-06-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network', validator=validate_asg_names_or_ids)
c.argument('computer_name_prefix', help='Computer name prefix for all of the virtual machines in the scale set. Computer name prefixes must be 1 to 15 characters long')
c.argument('orchestration_mode', help='Choose how virtual machines are managed by the scale set. In Uniform mode, you define a virtual machine model and Azure will generate identical instances based on that model. In Flexible mode, you manually create and add a virtual machine of any configuration to the scale set or generate identical instances based on virtual machine model defined for the scale set.',
arg_type=get_enum_type(['Uniform', 'Flexible']))
c.argument('scale_in_policy', scale_in_policy_type)
c.argument('automatic_repairs_grace_period', min_api='2018-10-01',
help='The amount of time (in minutes, between 30 and 90) for which automatic repairs are suspended due to a state change on VM.')
c.argument('user_data', help='UserData for the virtual machines in the scale set. It can be passed in as file or string.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
c.argument('network_api_version', min_api='2021-03-01',
help="Specify the Microsoft.Network API version used when creating networking resources in the Network "
"Interface Configurations for Virtual Machine Scale Set with orchestration mode 'Flexible'. Default "
"value is 2020-11-01.")
c.argument('enable_spot_restore', arg_type=get_three_state_flag(), min_api='2021-04-01', help='Enable the Spot-Try-Restore feature, where evicted VMSS Spot instances are opportunistically restored based on capacity availability and pricing constraints')
c.argument('spot_restore_timeout', min_api='2021-04-01', help='Timeout value expressed as an ISO 8601 time duration after which the platform will not try to restore the VMSS SPOT instances')
c.argument('enable_agent', arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Indicate whether virtual machine agent should be provisioned on the virtual machine. When this property is not specified, default behavior is to set it to true. This will ensure that VM Agent is installed on the VM so that extensions can be added to the VM later')
c.argument('enable_auto_update', arg_type=get_three_state_flag(), min_api='2020-06-01',
help='Indicate whether Automatic Updates is enabled for the Windows virtual machine')
c.argument('patch_mode', arg_type=get_enum_type(['AutomaticByOS', 'AutomaticByPlatform', 'Manual', 'ImageDefault']), min_api='2020-12-01',
help='Mode of in-guest patching to IaaS virtual machine. Allowed values for Windows VM: AutomaticByOS, AutomaticByPlatform, Manual. Allowed values for Linux VM: AutomaticByPlatform, ImageDefault. Manual - You control the application of patches to a virtual machine. You do this by applying patches manually inside the VM. In this mode, automatic updates are disabled; the parameter --enable-auto-update must be false. AutomaticByOS - The virtual machine will automatically be updated by the OS. The parameter --enable-auto-update must be true. AutomaticByPlatform - The virtual machine will automatically be updated by the platform. ImageDefault - The virtual machine\'s default patching configuration is used. The parameters --enable-agent and --enable-auto-update must be true')
with self.argument_context('vmss create', arg_group='Network Balancer') as c:
LoadBalancerSkuName = self.get_models('LoadBalancerSkuName', resource_type=ResourceType.MGMT_NETWORK)
c.argument('application_gateway', help='Name to use when creating a new application gateway (default) or referencing an existing one. Can also reference an existing application gateway by ID or specify "" for none.', options_list=['--app-gateway'])
c.argument('app_gateway_capacity', help='The number of instances to use when creating a new application gateway.')
c.argument('app_gateway_sku', help='SKU when creating a new application gateway.')
c.argument('app_gateway_subnet_address_prefix', help='The subnet IP address prefix to use when creating a new application gateway in CIDR format.')
c.argument('backend_pool_name', help='Name to use for the backend pool when creating a new load balancer or application gateway.')
c.argument('backend_port', help='When creating a new load balancer, backend port to open with NAT rules (Defaults to 22 on Linux and 3389 on Windows). When creating an application gateway, the backend port to use for the backend HTTP settings.', type=int)
c.argument('load_balancer', help='Name to use when creating a new load balancer (default) or referencing an existing one. Can also reference an existing load balancer by ID or specify "" for none.', options_list=['--load-balancer', '--lb'])
c.argument('load_balancer_sku', resource_type=ResourceType.MGMT_NETWORK, min_api='2017-08-01', options_list=['--lb-sku'], arg_type=get_enum_type(LoadBalancerSkuName),
help="Sku of the Load Balancer to create. Default to 'Standard' when single placement group is turned off; otherwise, default to 'Basic'. The public IP is supported to be created on edge zone only when it is 'Standard'")
c.argument('nat_pool_name', help='Name to use for the NAT pool when creating a new load balancer.', options_list=['--lb-nat-pool-name', '--nat-pool-name'])
with self.argument_context('vmss create', min_api='2017-03-30', arg_group='Network') as c:
c.argument('public_ip_per_vm', action='store_true', help="Each VM instance will have a public ip. For security, you can use '--nsg' to apply appropriate rules")
c.argument('vm_domain_name', help="Domain name of VM instances. Once configured, the FQDN is `vm<vm-index>.<vm-domain-name>.<..rest..>`")
c.argument('dns_servers', nargs='+', help="space-separated IP addresses of DNS servers, e.g. 10.0.0.5 10.0.0.6")
c.argument('accelerated_networking', arg_type=get_three_state_flag(),
help="enable accelerated networking. Unless specified, CLI will enable it based on machine image and size")
with self.argument_context('vmss update') as c:
protection_policy_type = CLIArgumentType(overrides=get_three_state_flag(), arg_group="Protection Policy", min_api='2019-03-01')
c.argument('protect_from_scale_in', arg_type=protection_policy_type, help="Protect the VM instance from scale-in operations.")
c.argument('protect_from_scale_set_actions', arg_type=protection_policy_type, help="Protect the VM instance from scale set actions (including scale-in).")
c.argument('enable_terminate_notification', min_api='2019-03-01', arg_type=get_three_state_flag(),
help='Enable terminate notification')
c.argument('ultra_ssd_enabled', ultra_ssd_enabled_type)
c.argument('scale_in_policy', scale_in_policy_type)
c.argument('user_data', help='UserData for the virtual machines in the scale set. It can be passed in as file or string. If empty string is passed in, the existing value will be deleted.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
c.argument('enable_spot_restore', arg_type=get_three_state_flag(), min_api='2021-04-01',
help='Enable the Spot-Try-Restore feature, where evicted VMSS Spot instances are opportunistically restored based on capacity availability and pricing constraints')
c.argument('spot_restore_timeout', min_api='2021-04-01',
help='Timeout value expressed as an ISO 8601 time duration after which the platform will not try to restore the VMSS SPOT instances')
c.argument('vm_sku', help='The new size of the virtual machine instances in the scale set. Default to "Standard_DS1_v2". See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.', is_preview=True)
c.argument('ephemeral_os_disk_placement', arg_type=ephemeral_placement_type,
help='Only applicable when used with `--vm-sku`. Allows you to choose the Ephemeral OS disk provisioning location.', is_preview=True)
with self.argument_context('vmss update', min_api='2018-10-01', arg_group='Automatic Repairs') as c:
c.argument('enable_automatic_repairs', arg_type=get_three_state_flag(), help='Enable automatic repairs')
c.argument(
'automatic_repairs_grace_period',
help='The amount of time (in minutes, between 30 and 90) for which automatic repairs are suspended due to a state change on VM.'
)
for scope in ['vmss create', 'vmss update']:
with self.argument_context(scope) as c:
c.argument('terminate_notification_time', min_api='2019-03-01',
help='Length of time (in minutes, between 5 and 15) for which a notification is sent to the VM on the instance metadata server before the VM is deleted')
c.argument('max_batch_instance_percent', type=int, min_api='2020-12-01',
help='The maximum percent of total virtual machine instances that will be upgraded simultaneously by the rolling upgrade in one batch. Default: 20%')
c.argument('max_unhealthy_instance_percent', type=int, min_api='2020-12-01',
help='The maximum percentage of the total virtual machine instances in the scale set that can be simultaneously unhealthy. Default: 20%')
c.argument('max_unhealthy_upgraded_instance_percent', type=int, min_api='2020-12-01',
help='The maximum percentage of upgraded virtual machine instances that can be found to be in an unhealthy state. Default: 20%')
c.argument('pause_time_between_batches', min_api='2020-12-01',
help='The wait time between completing the update for all virtual machines in one batch and starting the next batch. Default: 0 seconds')
c.argument('enable_cross_zone_upgrade', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Setting this Boolean property will allow VMSS to ignore AZ boundaries when constructing upgrade batches, and only consider Update Domain and maxBatchInstancePercent to determine the batch size')
c.argument('prioritize_unhealthy_instances', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Setting this Boolean property will lead to all unhealthy instances in a scale set getting upgraded before any healthy instances')
for scope, help_prefix in [('vmss update', 'Update the'), ('vmss wait', 'Wait on the')]:
with self.argument_context(scope) as c:
c.argument('instance_id', id_part='child_name_1', help="{0} VM instance with this ID. If missing, {0} VMSS.".format(help_prefix))
for scope in ['vmss update-instances', 'vmss delete-instances']:
with self.argument_context(scope) as c:
c.argument('instance_ids', multi_ids_type, help='Space-separated list of IDs (ex: 1 2 3 ...) or * for all instances.')
with self.argument_context('vmss diagnostics') as c:
c.argument('vmss_name', id_part=None, help='Scale set name')
with self.argument_context('vmss disk') as c:
options_list = ['--vmss-name'] + [c.deprecate(target=opt, redirect='--vmss-name', hide=True) for opt in name_arg_type.settings['options_list']]
new_vmss_name_type = CLIArgumentType(overrides=vmss_name_type, options_list=options_list)
c.argument('lun', type=int, help='0-based logical unit number (LUN). Max value depends on the Virtual Machine instance size.')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
c.argument('vmss_name', new_vmss_name_type, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'))
c.argument('disk', validator=validate_vmss_disk, help='existing disk name or ID to attach or detach from VM instances',
min_api='2017-12-01', completer=get_resource_name_completion_list('Microsoft.Compute/disks'))
c.argument('instance_id', help='Scale set VM instance id', min_api='2017-12-01')
c.argument('sku', arg_type=disk_sku, help='Underlying storage SKU')
with self.argument_context('vmss encryption') as c:
c.argument('vmss_name', vmss_name_type, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'))
with self.argument_context('vmss extension') as c:
c.argument('extension_name', name_arg_type, help='Name of the extension.')
c.argument('vmss_name', vmss_name_type, options_list=['--vmss-name'], id_part=None)
with self.argument_context('vmss nic') as c:
c.argument('virtual_machine_scale_set_name', options_list=['--vmss-name'], help='Scale set name.', completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'), id_part='name')
c.argument('virtualmachine_index', options_list=['--instance-id'], id_part='child_name_1')
c.argument('network_interface_name', options_list=['--name', '-n'], metavar='NIC_NAME', help='The network interface (NIC).', completer=get_resource_name_completion_list('Microsoft.Network/networkInterfaces'), id_part='child_name_2')
with self.argument_context('vmss nic list') as c:
c.argument('virtual_machine_scale_set_name', arg_type=vmss_name_type, options_list=['--vmss-name'], id_part=None)
with self.argument_context('vmss set-orchestration-service-state') as c:
c.argument('service_name', arg_type=get_enum_type(OrchestrationServiceNames), help='The name of the orchestration service.')
c.argument('action', arg_type=get_enum_type(OrchestrationServiceStateAction), help='The action to be performed.')
# endregion
# region VM & VMSS Shared
for scope in ['vm', 'vmss']:
with self.argument_context(scope) as c:
c.argument('no_auto_upgrade',
options_list=['--no-auto-upgrade-minor-version', c.deprecate(target='--no-auto-upgrade', redirect='--no-auto-upgrade-minor-version')],
arg_type=get_three_state_flag(),
help='If set, the extension service will not automatically pick or upgrade to the latest minor version, even if the extension is redeployed.')
with self.argument_context('{} run-command'.format(scope)) as c:
c.argument('command_id', completer=get_vm_run_command_completion_list, help="The command id. Use 'az {} run-command list' to get the list".format(scope))
if scope == 'vmss':
c.argument('vmss_name', vmss_name_type)
with self.argument_context('{} run-command invoke'.format(scope)) as c:
c.argument('parameters', nargs='+', help="space-separated parameters in the format of '[name=]value'")
c.argument('scripts', nargs='+', help="Space-separated script lines. Use @{file} to load script from a file")
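# Illustrative `run-command invoke` calls (names are placeholders; RunShellScript is assumed to be
# among the predefined command ids returned by `az vm run-command list`):
#   az vm run-command invoke -g MyRg -n MyVm --command-id RunShellScript --scripts "echo hello"
#   az vm run-command invoke -g MyRg -n MyVm --command-id RunShellScript --scripts @myscript.sh --parameters arg1=value1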
with self.argument_context('{} stop'.format(scope)) as c:
c.argument('skip_shutdown', action='store_true', help='Skip shutdown and power-off immediately.', min_api='2019-03-01')
run_cmd_name_type = CLIArgumentType(options_list=['--name', '--run-command-name'], help='The name of the virtual machine run command.')
run_cmd_vm_name = CLIArgumentType(options_list=['--vm-name'], help='The name of the virtual machine')
for scope in ['create', 'update']:
with self.argument_context('vm run-command {}'.format(scope)) as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('script', help='The PowerShell or Bash script to execute on the VM.')
c.argument('script_uri', help='URI of the script to execute on the VM. The URI can be any link accessible from the VM or a storage blob without SAS. If the subscription has access to the storage blob, a SAS will be auto-generated.')
c.argument('command_id', help='Specify a command id of predefined script. All command ids can be listed using "list" command.')
c.argument('parameters', nargs='+', help='Set custom parameters in a name-value pair.')
c.argument('protected_parameters', nargs='+', help='Set custom parameters in a name-value pair. These parameters will be encrypted during transmission and will not be logged.')
c.argument('async_execution', arg_type=get_three_state_flag(), help='Optional. If set to true, provisioning '
'will complete as soon as the script starts and will not wait for script to complete.')
c.argument('run_as_user', help='By default, the script process runs under the system/root user. Specify a custom user to host the process.')
c.argument('run_as_password', help='Password required when the run-as-user parameter is used. It will be encrypted and not logged.')
c.argument('timeout_in_seconds', type=int, help='The timeout in seconds to execute the run command.')
c.argument('output_blob_uri', help='Specify the Azure storage blob where script output stream will be uploaded.')
c.argument('error_blob_uri', help='Specify the Azure storage blob where script error stream will be uploaded.')
with self.argument_context('vm run-command delete') as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
with self.argument_context('vm run-command list') as c:
c.argument('vm_name', run_cmd_vm_name, id_part=None)
c.argument('expand', help='The expand expression to apply on the operation.')
c.argument('location', arg_type=get_location_type(self.cli_ctx))
with self.argument_context('vm run-command show') as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
c.argument('expand', help='The expand expression to apply on the operation.', deprecate_info=c.deprecate(hide=True))
c.argument('instance_view', action='store_true', help='Track the run command progress')
c.argument('location', arg_type=get_location_type(self.cli_ctx))
c.argument('command_id', help='The command id.')
with self.argument_context('vm run-command wait') as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
c.argument('expand', help='The expand expression to apply on the operation.', deprecate_info=c.deprecate(hide=True))
c.argument('instance_view', action='store_true', help='Track the run command progress')
c.argument('location', arg_type=get_location_type(self.cli_ctx))
c.argument('command_id', help='The command id.')
run_cmd_vmss_name = CLIArgumentType(options_list=['--vmss-name'], help='The name of the VM scale set.')
for scope in ['create', 'update']:
with self.argument_context('vmss run-command {}'.format(scope)) as c:
c.argument('vmss_name', run_cmd_vmss_name)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('run_command_name', run_cmd_name_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('script', help='The PowerShell or Bash script to execute on the VM.')
c.argument('script_uri',
help='URI of the script to execute on the VM. The URI can be any link accessible from the VM or a storage blob without SAS. If the subscription has access to the storage blob, a SAS will be auto-generated.')
c.argument('command_id',
help='Specify a command id of predefined script. All command ids can be listed using "list" command.')
c.argument('parameters', nargs='+', help='Set custom parameters in a name-value pair.')
c.argument('protected_parameters', nargs='+',
help='Set custom parameters in a name-value pair. These parameters will be encrypted during transmission and will not be logged.')
c.argument('async_execution', arg_type=get_three_state_flag(), help='Optional. If set to true, provisioning '
'will complete as soon as the script starts and will not wait for script to complete.')
c.argument('run_as_user',
help='By default, the script process runs under the system/root user. Specify a custom user to host the process.')
c.argument('run_as_password',
help='Password required when the run-as-user parameter is used. It will be encrypted and not logged.')
c.argument('timeout_in_seconds', type=int, help='The timeout in seconds to execute the run command.')
c.argument('output_blob_uri', help='Uri (without SAS) to an append blob where the script output will be uploaded.')
c.argument('error_blob_uri', help='Uri (without SAS) to an append blob where the script error stream will be uploaded.')
with self.argument_context('vmss run-command delete') as c:
c.argument('vmss_name', run_cmd_vmss_name)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('run_command_name', run_cmd_name_type)
with self.argument_context('vmss run-command list') as c:
c.argument('vmss_name', run_cmd_vmss_name, id_part=None)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('expand', help='The expand expression to apply on the operation.')
with self.argument_context('vmss run-command show') as c:
c.argument('vmss_name', run_cmd_vmss_name)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('run_command_name', run_cmd_name_type)
c.argument('expand', help='The expand expression to apply on the operation.', deprecate_info=c.deprecate(hide=True))
c.argument('instance_view', action='store_true', help='Track the run command progress')
for scope in ['vm identity assign', 'vmss identity assign']:
with self.argument_context(scope) as c:
c.argument('assign_identity', options_list=['--identities'], nargs='*', help="Space-separated identities to assign. Use '{0}' to refer to the system assigned identity. Default: '{0}'".format(MSI_LOCAL_ID))
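# Illustrative identity assignment (the user-assigned identity resource ID is a placeholder);
# '[system]' refers to the system assigned identity, as noted in the help text.
#   az vm identity assign -g MyRg -n MyVm --identities [system] <user-assigned-identity-resource-id>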
c.argument('vm_name', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
for scope in ['vm identity remove', 'vmss identity remove']:
with self.argument_context(scope) as c:
c.argument('identities', nargs='+', help="Space-separated identities to remove. Use '{0}' to refer to the system assigned identity. Default: '{0}'".format(MSI_LOCAL_ID))
c.argument('vm_name', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
for scope in ['vm identity show', 'vmss identity show']:
with self.argument_context(scope) as c:
c.argument('vm_name', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
for scope in ['vm application set', 'vmss application set']:
with self.argument_context(scope) as c:
c.argument('vm', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
c.argument('application_version_ids', options_list=['--app-version-ids'], nargs='*', help="Space-separated application version ids to set to VM.")
c.argument('order_applications', action='store_true', help='Whether to set an order index for each gallery application; the order index starts from 1.')
c.argument('application_configuration_overrides', options_list=['--app-config-overrides'], nargs='*',
help='Space-separated application configuration overrides for each application version ids. '
'It should have the same number of items as the application version ids. Null is available for an application '
'which does not have a configuration override.')
for scope in ['vm application list', 'vmss application list']:
with self.argument_context(scope) as c:
c.argument('vm_name', options_list=['--vm-name', '--name', '-n'], arg_type=existing_vm_name, id_part=None)
c.argument('vmss_name', vmss_name_type, id_part=None)
for scope in ['vm create', 'vmss create']:
with self.argument_context(scope) as c:
c.argument('location', get_location_type(self.cli_ctx), help='Location in which to create VM and related resources. If default location is not configured, will default to the resource group\'s location')
c.argument('tags', tags_type)
c.argument('no_wait', help='Do not wait for the long-running operation to finish.')
c.argument('validate', options_list=['--validate'], help='Generate and validate the ARM template without creating any resources.', action='store_true')
c.argument('size', help='The VM size to be created. See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.')
c.argument('image', completer=get_urn_aliases_completion_list)
c.argument('custom_data', help='Custom init script file or text (cloud-init, cloud-config, etc..)', completer=FilesCompleter(), type=file_type)
c.argument('secrets', multi_ids_type, help='One or many Key Vault secrets as JSON strings or files via `@{path}` containing `[{ "sourceVault": { "id": "value" }, "vaultCertificates": [{ "certificateUrl": "value", "certificateStore": "cert store name (only on windows)"}] }]`', type=file_type, completer=FilesCompleter())
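# Illustrative --secrets usage loading the JSON payload described above from a file (paths/IDs are placeholders):
#   az vm create ... --secrets @secrets.json
# where secrets.json contains:
#   [{ "sourceVault": { "id": "<key-vault-resource-id>" }, "vaultCertificates": [{ "certificateUrl": "<secret-url>" }] }]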
c.argument('assign_identity', nargs='*', arg_group='Managed Service Identity', help="Accept system or user assigned identities separated by spaces. Use '[system]' to refer to the system assigned identity, or a resource ID to refer to a user assigned identity. Check out help for more examples")
c.ignore('aux_subscriptions')
c.argument('edge_zone', edge_zone_type)
with self.argument_context(scope, arg_group='Authentication') as c:
c.argument('generate_ssh_keys', action='store_true', help='Generate SSH public and private key files if missing. The keys will be stored in the ~/.ssh directory')
c.argument('admin_username', help='Username for the VM. Default value is current username of OS. If the default value is system reserved, then default value will be set to azureuser. Please refer to https://docs.microsoft.com/rest/api/compute/virtualmachines/createorupdate#osprofile to get a full list of reserved values.')
c.argument('admin_password', help="Password for the VM if authentication type is 'Password'.")
c.argument('ssh_key_value', options_list=['--ssh-key-values'], completer=FilesCompleter(), type=file_type, nargs='+')
c.argument('ssh_dest_key_path', help='Destination file path on the VM for the SSH key. If the file already exists, the specified key(s) are appended to the file. Destination path for SSH public keys is currently limited to its default value "/home/username/.ssh/authorized_keys" due to a known issue in Linux provisioning agent.')
c.argument('authentication_type', help='Type of authentication to use with the VM. Defaults to password for Windows and SSH public key for Linux. "all" enables both ssh and password authentication. ', arg_type=get_enum_type(['ssh', 'password', 'all']))
with self.argument_context(scope, arg_group='Storage') as c:
if DiskStorageAccountTypes:
allowed_values = ", ".join([sku.value for sku in DiskStorageAccountTypes])
else:
allowed_values = ", ".join(['Premium_LRS', 'Standard_LRS'])
usage = 'Usage: [--storage-sku SKU | --storage-sku ID=SKU ID=SKU ID=SKU...], where each ID is "os" or a 0-indexed lun.'
allowed_values = 'Allowed values: {}.'.format(allowed_values)
storage_sku_help = 'The SKU of the storage account with which to persist VM. Use a singular sku that would be applied across all disks, ' \
'or specify individual disks. {} {}'.format(usage, allowed_values)
c.argument('os_disk_name', help='The name of the new VM OS disk.')
c.argument('os_type', help='Type of OS installed on a custom VHD. Do not use when specifying a URN or URN alias.', arg_type=get_enum_type(['windows', 'linux']))
c.argument('storage_account', help="Only applicable when used with `--use-unmanaged-disk`. The name to use when creating a new storage account or referencing an existing one. If omitted, an appropriate storage account in the same resource group and location will be used, or a new one will be created.")
c.argument('storage_sku', nargs='+', help=storage_sku_help)
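# Illustrative --storage-sku usage following the `[--storage-sku SKU | --storage-sku ID=SKU ...]` form above
# (SKU names depend on DiskStorageAccountTypes for the API version in use):
#   az vm create ... --storage-sku Premium_LRS
#   az vm create ... --storage-sku os=Premium_LRS 0=Standard_LRS 1=Standard_LRS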
c.argument('storage_container_name', help="Only applicable when used with `--use-unmanaged-disk`. Name of the storage container for the VM OS disk. Default: vhds")
c.ignore('os_publisher', 'os_offer', 'os_sku', 'os_version', 'storage_profile')
c.argument('use_unmanaged_disk', action='store_true', help='Do not use managed disk to persist VM')
c.argument('os_disk_size_gb', type=int, help='OS disk size in GB to create.')
c.argument('data_disk_sizes_gb', nargs='+', type=int, help='space-separated empty managed data disk sizes in GB to create')
c.ignore('disk_info', 'storage_account_type', 'public_ip_address_type', 'nsg_type', 'nic_type', 'vnet_type', 'load_balancer_type', 'app_gateway_type')
c.argument('os_caching', options_list=[self.deprecate(target='--storage-caching', redirect='--os-disk-caching', hide=True), '--os-disk-caching'], help='Storage caching type for the VM OS disk. Default: ReadWrite', arg_type=get_enum_type(CachingTypes))
c.argument('data_caching', options_list=['--data-disk-caching'], nargs='+',
help="storage caching type for data disk(s), including 'None', 'ReadOnly', 'ReadWrite', etc. Use a singular value to apply on all disks, or use `<lun>=<vaule1> <lun>=<value2>` to configure individual disk")
c.argument('ultra_ssd_enabled', ultra_ssd_enabled_type)
c.argument('ephemeral_os_disk', arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Allows you to create an OS disk directly on the host node, providing local disk performance and faster VM/VMSS reimage time.', is_preview=True)
c.argument('ephemeral_os_disk_placement', arg_type=ephemeral_placement_type,
help='Only applicable when used with `--ephemeral-os-disk`. Allows you to choose the Ephemeral OS disk provisioning location.', is_preview=True)
c.argument('os_disk_encryption_set', min_api='2019-07-01', help='Name or ID of disk encryption set for OS disk.')
c.argument('data_disk_encryption_sets', nargs='+', min_api='2019-07-01',
help='Names or IDs (space delimited) of disk encryption sets for data disks.')
c.argument('data_disk_iops', min_api='2019-07-01', nargs='+', type=int, help='Specify the Read-Write IOPS (space delimited) for the managed disk. Should be used only when StorageAccountType is UltraSSD_LRS. If not specified, a default value would be assigned based on diskSizeGB.')
c.argument('data_disk_mbps', min_api='2019-07-01', nargs='+', type=int, help='Specify the bandwidth in MB per second (space delimited) for the managed disk. Should be used only when StorageAccountType is UltraSSD_LRS. If not specified, a default value would be assigned based on diskSizeGB.')
c.argument('specialized', arg_type=get_three_state_flag(), help='Indicate whether the source image is specialized.')
c.argument('encryption_at_host', arg_type=get_three_state_flag(), help='Enable Host Encryption for the VM or VMSS. This will enable the encryption for all the disks including Resource/Temp disk at host itself.')
c.argument('os_disk_delete_option', arg_type=get_enum_type(self.get_models('DiskDeleteOptionTypes')), min_api='2021-03-01',
help='Specify the behavior of the managed disk when the VM is deleted, i.e. whether the managed disk is deleted or detached.')
c.argument('data_disk_delete_option', options_list=['--data-disk-delete-option', self.deprecate(target='--data-delete-option', redirect='--data-disk-delete-option', hide=True)],
nargs='+', min_api='2021-03-01',
help='Specify whether data disk should be deleted or detached upon VM deletion.')
with self.argument_context(scope, arg_group='Network') as c:
c.argument('vnet_name', help='Name of the virtual network when creating a new one or referencing an existing one.')
c.argument('vnet_address_prefix', help='The IP address prefix to use when creating a new VNet in CIDR format.')
c.argument('subnet', help='The name of the subnet when creating a new VNet or referencing an existing one. Can also reference an existing subnet by ID. If both vnet-name and subnet are omitted, an appropriate VNet and subnet will be selected automatically, or a new one will be created.')
c.argument('subnet_address_prefix', help='The subnet IP address prefix to use when creating a new VNet in CIDR format.')
c.argument('nics', nargs='+', help='Names or IDs of existing NICs to attach to the VM. The first NIC will be designated as primary. If omitted, a new NIC will be created. If an existing NIC is specified, do not specify subnet, VNet, public IP or NSG.')
c.argument('private_ip_address', help='Static private IP address (e.g. 10.0.0.5).')
c.argument('public_ip_address', help='Name of the public IP address when creating one (default) or referencing an existing one. Can also reference an existing public IP by ID or specify "" for None (\'""\' in Azure CLI using PowerShell or --% operator).')
c.argument('public_ip_address_allocation', help=None, default=None, arg_type=get_enum_type(['dynamic', 'static']))
c.argument('public_ip_address_dns_name', help='Globally unique DNS name for a newly created public IP.')
if self.supported_api_version(min_api='2017-08-01', resource_type=ResourceType.MGMT_NETWORK):
PublicIPAddressSkuName = self.get_models('PublicIPAddressSkuName', resource_type=ResourceType.MGMT_NETWORK)
c.argument('public_ip_sku', help='Public IP SKU. It is set to Basic by default. Creating the public IP in an edge zone is only supported when the SKU is \'Standard\'',
default=None, arg_type=get_enum_type(PublicIPAddressSkuName))
c.argument('nic_delete_option', nargs='+', min_api='2021-03-01',
help='Specify what happens to the network interface when the VM is deleted. Use a singular '
'value to apply on all resources, or use <Name>=<Value> to configure '
'the delete behavior for individual resources. Possible options are Delete and Detach.')
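# Illustrative --nic-delete-option usage following the <Name>=<Value> form above (NIC names are placeholders):
#   az vm create ... --nic-delete-option Delete
#   az vm create ... --nics nic1 nic2 --nic-delete-option nic1=Delete nic2=Detach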
with self.argument_context(scope, arg_group='Marketplace Image Plan') as c:
c.argument('plan_name', help='plan name')
c.argument('plan_product', help='plan product')
c.argument('plan_publisher', help='plan publisher')
c.argument('plan_promotion_code', help='plan promotion code')
for scope in ['vm create', 'vmss create', 'vm identity assign', 'vmss identity assign']:
with self.argument_context(scope) as c:
arg_group = 'Managed Service Identity' if scope.split()[-1] == 'create' else None
c.argument('identity_scope', options_list=['--scope'], arg_group=arg_group, help="Scope that the system assigned identity can access")
c.argument('identity_role', options_list=['--role'], arg_group=arg_group, help="Role name or id the system assigned identity will have")
c.ignore('identity_role_id')
with self.argument_context('vm auto-shutdown') as c:
c.argument('off', action='store_true', help='Turn off auto-shutdown for VM. Configuration will be cleared.')
c.argument('email', help='The email recipient to send notifications to (can be a list of semi-colon separated email addresses)')
c.argument('time', help='The UTC time of day the schedule will occur every day. Format: hhmm. Example: 1730')
c.argument('webhook', help='The webhook URL to which the notification will be sent')
c.argument('location', validator=get_default_location_from_resource_group)
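# Illustrative auto-shutdown configuration (names and addresses are placeholders); --time uses UTC hhmm:
#   az vm auto-shutdown -g MyRg -n MyVm --time 1730 --email "admin@contoso.com;ops@contoso.com" --webhook <webhook-url>
#   az vm auto-shutdown -g MyRg -n MyVm --off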
for scope in ['vm diagnostics', 'vmss diagnostics']:
with self.argument_context(scope) as c:
c.argument('version', help='version of the diagnostics extension. Will use the latest if not specified')
c.argument('settings', help='json string or a file path, which defines data to be collected.', type=validate_file_or_dict, completer=FilesCompleter())
c.argument('protected_settings', help='json string or a file path containing private configurations such as storage account keys, etc.', type=validate_file_or_dict, completer=FilesCompleter())
c.argument('is_windows_os', action='store_true', help='for Windows VMs')
for scope in ['vm encryption', 'vmss encryption']:
with self.argument_context(scope) as c:
c.argument('volume_type', help='Type of volume that the encryption operation is performed on', arg_type=get_enum_type(['DATA', 'OS', 'ALL']))
c.argument('force', action='store_true', help='continue by ignoring client side validation errors')
c.argument('disk_encryption_keyvault', help='Name or ID of the key vault where the generated encryption key will be placed.')
c.argument('key_encryption_key', help='Key vault key name or URL used to encrypt the disk encryption key.')
c.argument('key_encryption_keyvault', help='Name or ID of the key vault containing the key encryption key used to encrypt the disk encryption key. If missing, CLI will use `--disk-encryption-keyvault`.')
for scope in ['vm extension', 'vmss extension']:
with self.argument_context(scope) as c:
c.argument('publisher', help='The name of the extension publisher.')
c.argument('settings', type=validate_file_or_dict, help='Extension settings in JSON format. A JSON file path is also accepted.')
c.argument('protected_settings', type=validate_file_or_dict, help='Protected settings in JSON format for sensitive information like credentials. A JSON file path is also accepted.')
c.argument('version', help='The version of the extension. To pin extension version to this value, please specify --no-auto-upgrade-minor-version.')
c.argument('enable_auto_upgrade', arg_type=get_three_state_flag(),
help='Indicate the extension should be automatically upgraded by the platform if there is a newer version of the extension available.')
with self.argument_context('vm extension set') as c:
c.argument('vm_extension_name', name_arg_type,
completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines/extensions'),
help='Name of the extension.', id_part=None)
c.argument('force_update', action='store_true', help='Force the update even if the extension configuration has not changed.')
c.argument('extension_instance_name', extension_instance_name_type)
with self.argument_context('vmss extension set', min_api='2017-12-01') as c:
c.argument('force_update', action='store_true', help='Force the update even if the extension configuration has not changed.')
c.argument('extension_instance_name', extension_instance_name_type)
c.argument('provision_after_extensions', nargs='+', help='Space-separated list of extension names after which this extension should be provisioned. These extensions must already be set on the VM.')
for scope in ['vm extension image', 'vmss extension image']:
with self.argument_context(scope) as c:
c.argument('image_location', options_list=['--location', '-l'], help='Image location.')
c.argument('name', help='Image name', id_part=None)
c.argument('publisher_name', options_list=['--publisher', '-p'], help='Image publisher name')
c.argument('type', options_list=['--name', '-n'], help='Name of the extension')
c.argument('latest', action='store_true', help='Show the latest version only.')
c.argument('version', help='Extension version')
c.argument('orderby', help="the $orderby odata query option")
c.argument('top', help='the $top odata query option')
for scope in ['vm create', 'vm update', 'vmss create', 'vmss update']:
with self.argument_context(scope) as c:
c.argument('license_type', license_type)
c.argument('priority', resource_type=ResourceType.MGMT_COMPUTE, min_api='2019-03-01',
arg_type=get_enum_type(self.get_models('VirtualMachinePriorityTypes'), default=None),
help="Priority. Use 'Spot' to run short-lived workloads in a cost-effective way. 'Low' enum will be deprecated in the future. Please use 'Spot' to deploy Azure spot VM and/or VMSS. Default to Regular.")
c.argument('max_price', min_api='2019-03-01', type=float, is_preview=True,
help='The maximum price (in US Dollars) you are willing to pay for a Spot VM/VMSS. -1 indicates that the Spot VM/VMSS should not be evicted for price reasons')
c.argument('capacity_reservation_group', options_list=['--capacity-reservation-group', '--crg'],
help='The ID or name of the capacity reservation group that is used to allocate the VM or VMSS. Pass in "None" to disassociate the capacity reservation group. Please note that if you want to delete a VM/VMSS that has been associated with a capacity reservation group, you need to disassociate the capacity reservation group first.',
min_api='2021-04-01', is_preview=True)
with self.argument_context('vm update') as c:
c.argument('license_type', license_type)
c.argument('user_data', help='UserData for the VM. It can be passed in as file or string. If empty string is passed in, the existing value will be deleted.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
with self.argument_context('vmss create') as c:
c.argument('priority', resource_type=ResourceType.MGMT_COMPUTE, min_api='2017-12-01',
arg_type=get_enum_type(self.get_models('VirtualMachinePriorityTypes'), default=None),
help="Priority. Use 'Spot' to run short-lived workloads in a cost-effective way. 'Low' enum will be deprecated in the future. Please use 'Spot' to deploy Azure spot VM and/or VMSS. Default to Regular.")
with self.argument_context('sig') as c:
c.argument('gallery_name', options_list=['--gallery-name', '-r'], help='gallery name')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], help='gallery image definition')
c.argument('gallery_image_version', options_list=['--gallery-image-version', '-e'], help='gallery image version')
for scope in ['sig show', 'sig image-definition show', 'sig image-definition delete']:
with self.argument_context(scope) as c:
c.argument('gallery_name', options_list=['--gallery-name', '-r'], id_part='name', help='gallery name')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], id_part='child_name_1', help='gallery image definition')
with self.argument_context('sig list-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx))
c.argument('shared_to', shared_to_type)
with self.argument_context('sig show-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
for scope in ['sig share add', 'sig share remove']:
with self.argument_context(scope) as c:
c.argument('gallery_name', type=str, help='The name of the Shared Image Gallery.', id_part='name')
c.argument('subscription_ids', nargs='+', help='A list of subscription ids to share the gallery.')
c.argument('tenant_ids', nargs='+', help='A list of tenant ids to share the gallery.')
with self.argument_context('sig share add') as c:
c.argument('op_type', default='Add', deprecate_info=c.deprecate(hide=True),
help='distinguish add operation and remove operation')
with self.argument_context('sig share remove') as c:
c.argument('op_type', default='Remove', deprecate_info=c.deprecate(hide=True),
help='distinguish add operation and remove operation')
with self.argument_context('sig share reset') as c:
c.argument('gallery_name', type=str, help='The name of the Shared Image Gallery.', id_part='name')
with self.argument_context('sig image-definition create') as c:
c.argument('offer', options_list=['--offer', '-f'], help='image offer')
c.argument('sku', options_list=['--sku', '-s'], help='image sku')
c.argument('publisher', options_list=['--publisher', '-p'], help='image publisher')
c.argument('os_type', arg_type=get_enum_type(['Windows', 'Linux']), help='the type of the OS that is included in the disk if creating a VM from user-image or a specialized VHD')
c.argument('os_state', arg_type=get_enum_type(self.get_models('OperatingSystemStateTypes')), help="This property allows the user to specify whether the virtual machines created under this image are 'Generalized' or 'Specialized'.")
c.argument('hyper_v_generation', arg_type=get_enum_type(self.get_models('HyperVGenerationTypes')), help='The hypervisor generation of the Virtual Machine. Applicable to OS disks only.')
c.argument('minimum_cpu_core', type=int, arg_group='Recommendation', help='minimum cpu cores')
c.argument('maximum_cpu_core', type=int, arg_group='Recommendation', help='maximum cpu cores')
c.argument('minimum_memory', type=int, arg_group='Recommendation', help='minimum memory in MB')
c.argument('maximum_memory', type=int, arg_group='Recommendation', help='maximum memory in MB')
c.argument('plan_publisher', help='plan publisher', arg_group='Purchase plan')
c.argument('plan_name', help='plan name', arg_group='Purchase plan')
c.argument('plan_product', help='plan product', arg_group='Purchase plan')
c.argument('eula', help='The Eula agreement for the gallery image')
c.argument('privacy_statement_uri', help='The privacy statement uri')
c.argument('release_note_uri', help='The release note uri')
c.argument('end_of_life_date', help="the end of life date, e.g. '2020-12-31'")
c.argument('disallowed_disk_types', nargs='*', help='disk types which would not work with the image, e.g., Standard_LRS')
c.argument('features', help='A list of gallery image features. E.g. "IsSecureBootSupported=true IsMeasuredBootSupported=false"')
with self.argument_context('sig image-definition list-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('shared_to', shared_to_type)
c.argument('marker', arg_type=marker_type)
c.argument('show_next_marker', action='store_true', help='Show nextMarker in result when specified.')
with self.argument_context('sig image-definition show-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], type=str, help='The name '
'of the Shared Gallery Image Definition from which the Image Versions are to be listed.',
id_part='child_name_2')
with self.argument_context('sig create') as c:
c.argument('description', help='the description of the gallery')
c.argument('permissions', arg_type=get_enum_type(GallerySharingPermissionTypes), arg_group='Sharing Profile',
min_api='2020-09-30', is_experimental=True,
help='This property allows you to specify the permission of sharing gallery.')
c.argument('soft_delete', arg_type=get_three_state_flag(), min_api='2021-03-01', is_preview=True,
help='Enable soft-deletion for resources in this gallery, '
'allowing them to be recovered within retention time.')
with self.argument_context('sig update') as c:
c.ignore('gallery')
c.argument('permissions', arg_type=get_enum_type(GallerySharingPermissionTypes), arg_group='Sharing Profile',
min_api='2020-09-30', is_experimental=True,
help='This property allows you to specify the permission of sharing gallery.')
c.argument('soft_delete', arg_type=get_three_state_flag(), min_api='2021-03-01', is_preview=True,
help='Enable soft-deletion for resources in this gallery, '
'allowing them to be recovered within retention time.')
with self.argument_context('sig image-definition create') as c:
c.argument('description', help='the description of the gallery image definition')
with self.argument_context('sig image-definition update') as c:
c.ignore('gallery_image')
with self.argument_context('sig image-version') as c:
deprecated_option = c.deprecate(target='--gallery-image-version-name', redirect='--gallery-image-version', hide=True, expiration="3.0.0")
c.argument('gallery_image_version_name', options_list=['--gallery-image-version', '-e', deprecated_option],
help='Gallery image version in semantic version pattern. The allowed characters are digit and period. Digits must be within the range of a 32-bit integer, e.g. `<MajorVersion>.<MinorVersion>.<Patch>`')
with self.argument_context('sig image-version create', resource_type=ResourceType.MGMT_COMPUTE, operation_group='gallery_image_versions') as c:
c.argument('gallery_image_version', options_list=['--gallery-image-version', '-e'],
help='Gallery image version in semantic version pattern. The allowed characters are digit and period. Digits must be within the range of a 32-bit integer, e.g. `<MajorVersion>.<MinorVersion>.<Patch>`')
c.argument('description', help='the description of the gallery image version')
c.argument('managed_image', help='image name(if in the same resource group) or resource id')
c.argument('os_snapshot', help='Name or ID of OS disk snapshot')
c.argument('data_snapshots', nargs='+', help='Names or IDs (space-delimited) of data disk snapshots')
c.argument('data_snapshot_luns', nargs='+', help='Logical unit numbers (space-delimited) of data disk snapshots')
c.argument('exclude_from_latest', arg_type=get_three_state_flag(), help='If set to true, VMs deployed with the image version omitted will not use this image version.')
c.argument('version', help='image version')
c.argument('end_of_life_date', help="the end of life date, e.g. '2020-12-31'")
c.argument('storage_account_type', help="The default storage account type to be used per region. To set regional storage account types, use --target-regions",
arg_type=get_enum_type(["Standard_LRS", "Standard_ZRS", "Premium_LRS"]), min_api='2019-03-01')
c.argument('target_region_encryption', nargs='+',
help='Space-separated list of customer managed keys for encrypting the OS and data disks in the gallery artifact for each region. Format for each region: `<os_des>,<lun1>,<lun1_des>,<lun2>,<lun2_des>`. Use "null" as a placeholder.')
c.argument('os_vhd_uri', help='Source VHD URI of OS disk')
c.argument('os_vhd_storage_account', help='Name or ID of storage account of source VHD URI of OS disk')
c.argument('data_vhds_uris', nargs='+', help='Source VHD URIs (space-delimited) of data disks')
c.argument('data_vhds_luns', nargs='+', help='Logical unit numbers (space-delimited) of source VHD URIs of data disks')
c.argument('data_vhds_storage_accounts', options_list=['--data-vhds-storage-accounts', '--data-vhds-sa'], nargs='+', help='Names or IDs (space-delimited) of storage accounts of source VHD URIs of data disks')
c.argument('replication_mode', min_api='2021-07-01', arg_type=get_enum_type(ReplicationMode), help='Optional parameter which specifies the mode to be used for replication. This property is not updatable.')
with self.argument_context('sig image-version list-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], type=str, help='The name '
'of the Shared Gallery Image Definition from which the Image Versions are to be listed.',
id_part='child_name_2')
c.argument('shared_to', shared_to_type)
c.argument('marker', arg_type=marker_type)
c.argument('show_next_marker', action='store_true', help='Show nextMarker in result when specified.')
with self.argument_context('sig image-version show') as c:
c.argument('expand', help="The expand expression to apply on the operation, e.g. 'ReplicationStatus'")
with self.argument_context('sig image-version show-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], type=str, help='The name '
'of the Shared Gallery Image Definition from which the Image Versions are to be listed.',
id_part='child_name_2')
c.argument('gallery_image_version_name', options_list=['--gallery-image-version', '-e'], type=str, help='The '
'name of the gallery image version to be created. Needs to follow semantic version name pattern: '
'The allowed characters are digit and period. Digits must be within the range of a 32-bit integer. '
'Format: <MajorVersion>.<MinorVersion>.<Patch>', id_part='child_name_3')
for scope in ['sig image-version create', 'sig image-version update']:
with self.argument_context(scope) as c:
c.argument('target_regions', nargs='*', validator=process_gallery_image_version_namespace,
help='Space-separated list of regions and their replica counts. Use `<region>[=<replica count>][=<storage account type>]` to optionally set the replica count and/or storage account type for each region. '
'If a replica count is not specified, the default replica count will be used. If a storage account type is not specified, the default storage account type will be used')
c.argument('replica_count', help='The default number of replicas to be created per region. To set regional replication counts, use --target-regions', type=int)
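# Illustrative use of the --target-regions format described in the help text above
# (region names, replica counts and storage account types are example values):
#   az sig image-version create ... --target-regions westus=2=Standard_ZRS eastus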
# endregion
# region Gallery applications
with self.argument_context('sig gallery-application') as c:
c.argument('gallery_application_name', options_list=['--name', '-n', '--application-name'],
help='The name of the gallery Application')
with self.argument_context('sig gallery-application create') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('description', help='The description of this gallery Application Definition resource. '
'This property is updatable.')
c.argument('os_type', arg_type=get_enum_type(['Windows', 'Linux']), help='This property allows you '
'to specify the supported type of the OS that application is built for. <br><br> Possible values '
'are: <br><br> **Windows** <br><br> **Linux**')
with self.argument_context('sig gallery-application update') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('description', help='The description of this gallery Application Definition resource. '
'This property is updatable.')
with self.argument_context('sig gallery-application version') as c:
c.argument('gallery_application_name', options_list=['--application-name'],
help='The name of the gallery Application')
c.argument('gallery_application_version_name', options_list=['--name', '-n', '--version-name'],
help='The name of the gallery Application Version')
for scope in ['create', 'update']:
with self.argument_context('sig gallery-application version {}'.format(scope)) as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('package_file_link', help='The mediaLink of the artifact, must be a readable storage page blob.')
c.argument('install_command', help='The path and arguments to install the gallery application.')
c.argument('remove_command', help='The path and arguments to remove the gallery application.')
c.argument('update_command', help='The path and arguments to update the gallery application. If not present,'
' then update operation will invoke remove command on the previous version '
'and install command on the current version of the gallery application.')
c.argument('target_regions', type=validate_file_or_dict, help='The target regions where the Image Version is '
'going to be replicated to. This property is updatable. Expected value: '
'json-string/json-file/@json-file.')
c.argument('default_file_link', help='The default configuration link of the artifact, must be a readable storage page blob.')
c.argument('exclude_from', arg_type=get_three_state_flag(), help='If set to true, Virtual Machines '
'deployed from the latest version of the Image Definition won\'t use this Image Version.',
arg_group='Publishing Profile')
c.argument('end_of_life_date', help='The end of life date of the gallery image version. This property can be '
'used for decommissioning purposes. This property is updatable.', arg_group='Publishing Profile')
# endregion
# region Proximity Placement Group
with self.argument_context('ppg', min_api='2018-04-01') as c:
c.argument('proximity_placement_group_name', arg_type=name_arg_type, help="The name of the proximity placement group.")
with self.argument_context('ppg create', min_api='2018-04-01') as c:
c.argument('ppg_type', options_list=['--type', '-t'], help="The type of the proximity placement group. Allowed values: Standard.")
c.argument('tags', tags_type)
with self.argument_context('ppg show', min_api='2019-07-01') as c:
c.argument('include_colocation_status', action='store_true', help='Enable fetching the colocation status of all the resources in the proximity placement group.')
for scope, item in [('vm create', 'VM'), ('vmss create', 'VMSS'),
('vm availability-set create', 'availability set'),
('vm update', 'VM'), ('vmss update', 'VMSS'),
('vm availability-set update', 'availability set')]:
with self.argument_context(scope, min_api='2018-04-01') as c:
c.argument('proximity_placement_group', options_list=['--ppg'], help="The name or ID of the proximity placement group the {} should be associated with.".format(item),
validator=_validate_proximity_placement_group) # only availability set does not have a command level validator, so this should be added.
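# Illustrative association of a proximity placement group at creation time; the image value
# is a placeholder:
#   az vm create -g MyGroup -n MyVm --image <image> --ppg MyPpg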
# endregion
# region VM Monitor
with self.argument_context('vm monitor log show') as c:
c.argument('analytics_query', options_list=['--analytics-query', '-q'], help="Query to execute over Log Analytics data.")
c.argument('timespan', help="Timespan over which to query. Defaults to querying all available data.")
with self.argument_context('vm monitor metrics') as c:
c.argument('metricnamespace', options_list=['--namespace'],
help='Namespace to query metric definitions for.')
with self.argument_context('vm monitor metrics tail') as c:
from azure.mgmt.monitor.models import AggregationType
c.extra('resource_group_name', required=True)
c.argument('resource', arg_type=existing_vm_name, help='Name or ID of a virtual machine', validator=validate_vm_name_for_monitor_metrics, id_part=None)
c.argument('metadata', action='store_true')
c.argument('dimension', nargs='*', validator=validate_metric_dimension)
c.argument('aggregation', arg_type=get_enum_type(t for t in AggregationType if t.name != 'none'), nargs='*')
c.argument('metrics', nargs='*')
c.argument('orderby',
help='Aggregation to use for sorting results and the direction of the sort. Only one order can be specified. Examples: sum asc')
c.argument('top', help='Max number of records to retrieve. Valid only if --filter is used.')
c.argument('filters', options_list=['--filter'])
c.argument('metric_namespace', options_list=['--namespace'])
with self.argument_context('vm monitor metrics tail', arg_group='Time') as c:
c.argument('start_time', arg_type=get_datetime_type(help='Start time of the query.'))
c.argument('end_time', arg_type=get_datetime_type(help='End time of the query. Defaults to the current time.'))
c.argument('offset', type=get_period_type(as_timedelta=True))
c.argument('interval', arg_group='Time', type=get_period_type())
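# Illustrative metrics query combining the base and Time arguments registered above;
# the metric name and timestamp are example values:
#   az vm monitor metrics tail -g MyGroup -n MyVm --metrics "Percentage CPU" --start-time 2021-01-01T00:00:00Z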
with self.argument_context('vm monitor metrics list-definitions') as c:
c.extra('resource_group_name', required=True)
c.argument('resource_uri', arg_type=existing_vm_name, help='Name or ID of a virtual machine', validator=validate_vm_name_for_monitor_metrics, id_part=None)
# endregion
# region disk encryption set
with self.argument_context('disk-encryption-set') as c:
c.argument('disk_encryption_set_name', disk_encryption_set_name)
c.argument('key_url', help='URL pointing to a key or secret in KeyVault.')
c.argument('source_vault', help='Name or ID of the KeyVault containing the key or secret.')
c.argument('encryption_type', arg_type=get_enum_type(['EncryptionAtRestWithPlatformKey', 'EncryptionAtRestWithCustomerKey', 'EncryptionAtRestWithPlatformAndCustomerKeys']),
help='The type of key used to encrypt the data of the disk. EncryptionAtRestWithPlatformKey: Disk is encrypted at rest with Platform managed key. It is the default encryption type. EncryptionAtRestWithCustomerKey: Disk is encrypted at rest with Customer managed key that can be changed and revoked by a customer. EncryptionAtRestWithPlatformAndCustomerKeys: Disk is encrypted at rest with 2 layers of encryption. One of the keys is Customer managed and the other key is Platform managed.')
c.argument('location', validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('enable_auto_key_rotation', arg_type=get_three_state_flag(), min_api='2020-12-01',
options_list=['--enable-auto-key-rotation', '--auto-rotation'],
help='Enable automatic rotation of keys.')
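# Illustrative creation of a disk encryption set using the Key Vault arguments above;
# resource names and the key URL are placeholders:
#   az disk-encryption-set create -g MyGroup -n MyDes --key-url <key-vault-key-url> --source-vault MyVault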
# endregion
# region DiskAccess
with self.argument_context('disk-access', resource_type=ResourceType.MGMT_COMPUTE, operation_group='disk_accesses') as c:
c.argument('disk_access_name', arg_type=name_arg_type, help='Name of the disk access resource.', id_part='name')
c.argument('location', validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
# endregion
with self.argument_context('capacity reservation group') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), validator=get_default_location_from_resource_group)
c.argument('capacity_reservation_group_name', options_list=['--capacity-reservation-group', '-n'],
help='The name of the capacity reservation group.')
c.argument('tags', tags_type)
with self.argument_context('capacity reservation group create') as c:
c.argument('zones', zones_type, help='Availability Zones to use for this capacity reservation group. If not provided, the group supports only regional resources in the region. If provided, enforces each capacity reservation in the group to be in one of the zones.')
with self.argument_context('capacity reservation group show') as c:
c.argument('instance_view', action='store_true', options_list=['--instance-view', '-i'], help='Retrieve the list of instance views of the capacity reservations under the capacity reservation group which is a snapshot of the runtime properties of a capacity reservation that is managed by the platform and can change outside of control plane operations.')
with self.argument_context('capacity reservation group list') as c:
c.argument('vm_instance', action='store_true', help='Retrieve the virtual machine instances that are associated with the capacity reservation group in the response.')
c.argument('vmss_instance', action='store_true', help='Retrieve the scale set VM instances that are associated with the capacity reservation group in the response.')
with self.argument_context('capacity reservation') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), validator=get_default_location_from_resource_group)
c.argument('capacity_reservation_group_name', options_list=['--capacity-reservation-group', '-c'],
help='The name of the capacity reservation group.')
c.argument('capacity_reservation_name', options_list=['--capacity-reservation-name', '-n'],
help='The name of the capacity reservation.')
c.argument('capacity', type=int, help='Specify the number of virtual machines in the scale set.')
c.argument('tags', tags_type)
with self.argument_context('capacity reservation create') as c:
c.argument('zone', zone_type, help='Availability Zone to use for this capacity reservation. The zone has to be a single value and should be part of the list of zones specified during the capacity reservation group creation. If not provided, the reservation supports only non-zonal deployments. If provided, enforces the VM/VMSS using this capacity reservation to be in the same zone.')
c.argument('sku_name', options_list=['--sku', '-s'], required=True, help='The SKU of the resource for which capacity needs to be reserved. Currently VM SKUs with the capability called "CapacityReservationSupported" set to true are supported. Refer to List Microsoft.Compute SKUs in a region (https://docs.microsoft.com/rest/api/compute/resourceskus/list) for supported values.')
with self.argument_context('capacity reservation show') as c:
c.argument('instance_view', action='store_true', options_list=['--instance-view', '-i'], help='Retrieve a snapshot of the runtime properties of the capacity reservation that is managed by the platform and can change outside of control plane operations.')
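# Illustrative flow for the capacity reservation arguments above; SKU, zone and capacity
# values are examples only:
#   az capacity reservation group create -g MyGroup -n MyCrg --zones 1
#   az capacity reservation create -g MyGroup -c MyCrg -n MyReservation --sku Standard_D2s_v3 --capacity 2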
def load_arguments(self, _):
# Model imports
StorageAccountTypes = self.get_models('StorageAccountTypes')
DiskStorageAccountTypes = self.get_models('DiskStorageAccountTypes', operation_group='disks')
SnapshotStorageAccountTypes = self.get_models('SnapshotStorageAccountTypes', operation_group='snapshots')
UpgradeMode, CachingTypes, OperatingSystemTypes = self.get_models('UpgradeMode', 'CachingTypes', 'OperatingSystemTypes')
HyperVGenerationTypes, HyperVGeneration = self.get_models('HyperVGenerationTypes', 'HyperVGeneration')
DedicatedHostLicenseTypes = self.get_models('DedicatedHostLicenseTypes')
OrchestrationServiceNames, OrchestrationServiceStateAction = self.get_models('OrchestrationServiceNames', 'OrchestrationServiceStateAction', operation_group='virtual_machine_scale_sets')
RebootSetting, VMGuestPatchClassificationWindows, VMGuestPatchClassificationLinux = self.get_models('VMGuestPatchRebootSetting', 'VMGuestPatchClassificationWindows', 'VMGuestPatchClassificationLinux')
GallerySharingPermissionTypes = self.get_models('GallerySharingPermissionTypes', operation_group='shared_galleries')
ReplicationMode = self.get_models('ReplicationMode', operation_group='gallery_image_versions')
# REUSABLE ARGUMENT DEFINITIONS
name_arg_type = CLIArgumentType(options_list=['--name', '-n'], metavar='NAME')
multi_ids_type = CLIArgumentType(nargs='+')
existing_vm_name = CLIArgumentType(overrides=name_arg_type,
configured_default='vm',
help="The name of the Virtual Machine. You can configure the default using `az configure --defaults vm=<name>`",
completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines'), id_part='name')
existing_disk_name = CLIArgumentType(overrides=name_arg_type, help='The name of the managed disk', completer=get_resource_name_completion_list('Microsoft.Compute/disks'), id_part='name')
existing_snapshot_name = CLIArgumentType(overrides=name_arg_type, help='The name of the snapshot', completer=get_resource_name_completion_list('Microsoft.Compute/snapshots'), id_part='name')
vmss_name_type = CLIArgumentType(name_arg_type,
configured_default='vmss',
completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'),
help="Scale set name. You can configure the default using `az configure --defaults vmss=<name>`",
id_part='name')
extension_instance_name_type = CLIArgumentType(help="Name of extension instance, which can be customized. Default: name of the extension.")
image_template_name_type = CLIArgumentType(overrides=name_arg_type, id_part='name')
disk_encryption_set_name = CLIArgumentType(overrides=name_arg_type, help='Name of disk encryption set.', id_part='name')
ephemeral_placement_type = CLIArgumentType(options_list=['--ephemeral-os-disk-placement', '--ephemeral-placement'], arg_type=get_enum_type(['ResourceDisk', 'CacheDisk']), min_api='2019-12-01')
license_type = CLIArgumentType(
help="Specifies that the Windows image or disk was licensed on-premises. To enable Azure Hybrid Benefit for "
"Windows Server, use 'Windows_Server'. To enable Multi-tenant Hosting Rights for Windows 10, "
"use 'Windows_Client'. For more information see the Azure Windows VM online docs.",
arg_type=get_enum_type(['Windows_Server', 'Windows_Client', 'RHEL_BYOS', 'SLES_BYOS', 'RHEL_BASE',
'RHEL_SAPAPPS', 'RHEL_SAPHA', 'RHEL_EUS', 'SLES_BASE', 'SLES_SAP', 'SLES_HPC', 'None',
'RHEL_ELS_6']))
# StorageAccountTypes renamed to DiskStorageAccountTypes in 2018_06_01 of azure-mgmt-compute
DiskStorageAccountTypes = DiskStorageAccountTypes or StorageAccountTypes
if DiskStorageAccountTypes:
disk_sku = CLIArgumentType(arg_type=get_enum_type(DiskStorageAccountTypes))
else:
# StorageAccountTypes was introduced in api version 2016_04_30_preview of the azure-mgmt-compute package.
# However, the 2017-03-09-profile targets version 2016-03-30 of the compute package.
disk_sku = CLIArgumentType(arg_type=get_enum_type(['Premium_LRS', 'Standard_LRS']))
if SnapshotStorageAccountTypes:
snapshot_sku = CLIArgumentType(arg_type=get_enum_type(SnapshotStorageAccountTypes))
else:
# SnapshotStorageAccountTypes was introduced in api version 2018_04_01 of the azure-mgmt-compute package.
# However, the 2017-03-09-profile targets version 2016-03-30 of the compute package.
snapshot_sku = CLIArgumentType(arg_type=get_enum_type(['Premium_LRS', 'Standard_LRS']))
# special case for `network nic scale-set list` command alias
with self.argument_context('network nic scale-set list') as c:
c.argument('virtual_machine_scale_set_name', options_list=['--vmss-name'], completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'), id_part='name')
HyperVGenerationTypes = HyperVGenerationTypes or HyperVGeneration
if HyperVGenerationTypes:
hyper_v_gen_sku = CLIArgumentType(arg_type=get_enum_type(HyperVGenerationTypes, default="V1"))
else:
hyper_v_gen_sku = CLIArgumentType(arg_type=get_enum_type(["V1", "V2"], default="V1"))
ultra_ssd_enabled_type = CLIArgumentType(
arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Enables or disables the capability to have 1 or more managed data disks with UltraSSD_LRS storage account')
scale_in_policy_type = CLIArgumentType(
nargs='+', arg_type=get_enum_type(self.get_models('VirtualMachineScaleSetScaleInRules')),
help='Specify the scale-in policy (space delimited) that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled-in.'
)
edge_zone_type = CLIArgumentType(
help='The name of edge zone.',
min_api='2020-12-01',
is_preview=True
)
t_shared_to = self.get_models('SharedToValues', operation_group='shared_galleries')
shared_to_type = CLIArgumentType(
arg_type=get_enum_type(t_shared_to),
help='The query parameter to decide what shared galleries to fetch when doing listing operations. '
'If not specified, list by subscription id.'
)
marker_type = CLIArgumentType(
help='A string value that identifies the portion of the list of containers to be '
'returned with the next listing operation. The operation returns the NextMarker value within '
'the response body if the listing operation did not return all containers remaining to be listed '
'with the current page. If specified, this generator will begin returning results from the point '
'where the previous generator stopped.')
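# Illustrative paginated listing with marker_type/show_next_marker as registered on
# 'sig image-definition list-shared'; the gallery unique name is a placeholder:
#   az sig image-definition list-shared --location westus --gallery-unique-name <unique-name> --show-next-marker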
# region MixedScopes
for scope in ['vm', 'disk', 'snapshot', 'image', 'sig']:
with self.argument_context(scope) as c:
c.argument('tags', tags_type)
for scope in ['disk', 'snapshot']:
with self.argument_context(scope) as c:
c.ignore('source_blob_uri', 'source_disk', 'source_snapshot')
c.argument('source_storage_account_id', help='used when source blob is in a different subscription')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
c.argument('duration_in_seconds', help='Time duration in seconds until the SAS access expires', type=int)
if self.supported_api_version(min_api='2018-09-30', operation_group='disks'):
c.argument('access_level', arg_type=get_enum_type(['Read', 'Write']), default='Read', help='access level')
c.argument('for_upload', arg_type=get_three_state_flag(),
help='Create the {0} for uploading blobs later on through storage commands. Run "az {0} grant-access --access-level Write" to retrieve the {0}\'s SAS token.'.format(scope))
c.argument('hyper_v_generation', arg_type=hyper_v_gen_sku, help='The hypervisor generation of the Virtual Machine. Applicable to OS disks only.')
else:
c.ignore('access_level', 'for_upload', 'hyper_v_generation')
c.argument('encryption_type', min_api='2019-07-01', arg_type=get_enum_type(self.get_models('EncryptionType')),
help='Encryption type. EncryptionAtRestWithPlatformKey: Disk is encrypted with XStore managed key at rest. It is the default encryption type. EncryptionAtRestWithCustomerKey: Disk is encrypted with Customer managed key at rest.')
c.argument('disk_encryption_set', min_api='2019-07-01', help='Name or ID of disk encryption set that is used to encrypt the disk.')
c.argument('location', help='Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`. If location is not specified and no default location specified, location will be automatically set as same as the resource group.')
operation_group = 'disks' if scope == 'disk' else 'snapshots'
c.argument('network_access_policy', min_api='2020-05-01', help='Policy for accessing the disk via network.', arg_type=get_enum_type(self.get_models('NetworkAccessPolicy', operation_group=operation_group)))
c.argument('disk_access', min_api='2020-05-01', help='Name or ID of the disk access resource for using private endpoints on disks.')
c.argument('enable_bursting', arg_type=get_three_state_flag(), help='Enable bursting beyond the provisioned performance target of the disk. Bursting is disabled by default, and it does not apply to Ultra disks.')
c.argument('public_network_access', arg_type=get_enum_type(['Disabled', 'Enabled']), min_api='2021-04-01', is_preview=True, help='Customers can set on Managed Disks or Snapshots to control the export policy on the disk.')
c.argument('accelerated_network', arg_type=get_three_state_flag(), min_api='2021-04-01', is_preview=True, help='Customers can set on Managed Disks or Snapshots to enable the accelerated networking if the OS disk image support.')
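# Illustrative grant-access call using the shared disk/snapshot arguments above
# (SAS duration is an example value):
#   az disk grant-access -g MyGroup -n MyDisk --access-level Read --duration-in-seconds 3600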
for scope in ['disk create', 'snapshot create']:
with self.argument_context(scope) as c:
c.argument('source', help='source to create the disk/snapshot from, including unmanaged blob uri, managed disk id or name, or snapshot id or name')
# endregion
# region Disks
with self.argument_context('disk') as c:
c.argument('zone', zone_type, min_api='2017-03-30', options_list=['--zone']) # TODO: --size-gb currently has claimed -z. We can do a breaking change later if we want to.
c.argument('disk_name', existing_disk_name, completer=get_resource_name_completion_list('Microsoft.Compute/disks'))
c.argument('name', arg_type=name_arg_type)
c.argument('sku', arg_type=disk_sku, help='Underlying storage SKU')
c.argument('os_type', arg_type=get_enum_type(OperatingSystemTypes), help='The Operating System type of the Disk.')
c.argument('disk_iops_read_write', type=int, min_api='2018-06-01', help='The number of IOPS allowed for this disk. Only settable for UltraSSD disks. One operation can transfer between 4k and 256k bytes')
c.argument('disk_mbps_read_write', type=int, min_api='2018-06-01', help="The bandwidth allowed for this disk. Only settable for UltraSSD disks. MBps means millions of bytes per second with ISO notation of powers of 10")
c.argument('upload_size_bytes', type=int, min_api='2019-03-01',
help='The size (in bytes) of the contents of the upload including the VHD footer. Min value: 20972032. Max value: 35183298347520')
c.argument('max_shares', type=int, help='The maximum number of VMs that can attach to the disk at the same time. Value greater than one indicates a disk that can be mounted on multiple VMs at the same time')
c.argument('disk_iops_read_only', type=int, help='The total number of IOPS that will be allowed across all VMs mounting the shared disk as ReadOnly. One operation can transfer between 4k and 256k bytes')
c.argument('disk_mbps_read_only', type=int, help='The total throughput (MBps) that will be allowed across all VMs mounting the shared disk as ReadOnly. MBps means millions of bytes per second - MB here uses the ISO notation, of powers of 10')
c.argument('image_reference', help='ID or URN (publisher:offer:sku:version) of the image from which to create a disk')
c.argument('image_reference_lun', type=int, help='If the disk is created from an image\'s data disk, this is an index that indicates which of the data disks in the image to use. For OS disks, this field is null')
c.argument('gallery_image_reference', help='ID of the Compute Gallery image version from which to create a disk')
c.argument('gallery_image_reference_lun', type=int, help='If the disk is created from an image\'s data disk, this is an index that indicates which of the data disks in the image to use. For OS disks, this field is null')
c.argument('logical_sector_size', type=int, help='Logical sector size in bytes for Ultra disks. Supported values are 512 and 4096. 4096 is the default.')
c.argument('tier', help='Performance tier of the disk (e.g, P4, S10) as described here: https://azure.microsoft.com/pricing/details/managed-disks/. Does not apply to Ultra disks.')
c.argument('edge_zone', edge_zone_type)
c.argument('security_type', choices=['TrustedLaunch'], help='The security type of the VM. Applicable for OS disks only.', min_api='2020-12-01')
c.argument('support_hibernation', arg_type=get_three_state_flag(), help='Indicate the OS on a disk supports hibernation.', min_api='2020-12-01')
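# Illustrative disk creation drawing on the arguments above; size, SKU and zone are example values:
#   az disk create -g MyGroup -n MyDisk --size-gb 128 --sku Premium_LRS --zone 1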
# endregion
# region Snapshots
with self.argument_context('snapshot', resource_type=ResourceType.MGMT_COMPUTE, operation_group='snapshots') as c:
c.argument('snapshot_name', existing_snapshot_name, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/snapshots'))
c.argument('name', arg_type=name_arg_type)
c.argument('sku', arg_type=snapshot_sku)
c.argument('incremental', arg_type=get_three_state_flag(), min_api='2019-03-01',
help='Whether a snapshot is incremental. Incremental snapshots on the same disk occupy less space than full snapshots and can be diffed')
c.argument('edge_zone', edge_zone_type)
c.argument('copy_start', arg_type=get_three_state_flag(), min_api='2021-04-01',
help='Create snapshot by using a deep copy process, where the resource creation is considered complete only after all data has been copied from the source.')
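# Illustrative incremental snapshot creation; the source disk name is a placeholder:
#   az snapshot create -g MyGroup -n MySnapshot --source MyDisk --incremental true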
# endregion
# region Images
with self.argument_context('image') as c:
c.argument('os_type', arg_type=get_enum_type(['Windows', 'Linux']))
c.argument('image_name', arg_type=name_arg_type, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/images'))
c.argument('tags', tags_type)
with self.argument_context('image create') as c:
# here we collapse all the different image sources under 2 common arguments: --os-disk-source and --data-disk-sources
c.argument('name', arg_type=name_arg_type, help='new image name')
c.argument('source', help='OS disk source from the same region, including a virtual machine ID or name, OS disk blob URI, managed OS disk ID or name, or OS snapshot ID or name')
c.argument('data_disk_sources', nargs='+', help='Space-separated list of data disk sources, including unmanaged blob URI, managed disk ID or name, or snapshot ID or name')
c.argument('zone_resilient', min_api='2017-12-01', arg_type=get_three_state_flag(), help='Specifies whether an image is zone resilient or not. '
'Default is false. Zone resilient images can be created only in regions that provide Zone Redundant Storage')
c.argument('storage_sku', arg_type=disk_sku, help='The SKU of the storage account with which to create the VM image. Unused if source VM is specified.')
c.argument('os_disk_caching', arg_type=get_enum_type(CachingTypes), help="Storage caching type for the image's OS disk.")
c.argument('data_disk_caching', arg_type=get_enum_type(CachingTypes),
help="Storage caching type for the image's data disk.")
c.argument('hyper_v_generation', arg_type=hyper_v_gen_sku, min_api="2019-03-01", help='The hypervisor generation of the Virtual Machine created from the image.')
c.ignore('source_virtual_machine', 'os_blob_uri', 'os_disk', 'os_snapshot', 'data_blob_uris', 'data_disks', 'data_snapshots')
c.argument('edge_zone', edge_zone_type)
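# Illustrative image creation from a generalized VM; the source VM name is a placeholder:
#   az image create -g MyGroup -n MyImage --source MyVm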
# endregion
# region Image Templates
with self.argument_context('image builder') as c:
ib_output_name_help = "Name of the image builder run output."
c.argument('location', get_location_type(self.cli_ctx))
c.argument('scripts', nargs='+', help="Space-separated list of shell or powershell scripts to customize the image with. Each script must be a publicly accessible URL."
" Infers type of script from file extension ('.sh' or'.ps1') or from source type. More more customizer options and flexibility, see: 'az image template customizer add'")
c.argument('source', options_list=["--image-source", "-i"], help="The base image to customize. Must be a valid platform image URN, platform image alias, Red Hat ISO image URI, managed image name/ID, or shared image version ID.")
c.argument('image_template_name', image_template_name_type, help="The name of the image template.")
c.argument('checksum', help="The SHA256 checksum of the Red Hat ISO image")
c.argument('managed_image_destinations', nargs='+', help='Managed image output distributor information. Space-separated list of key-value pairs. E.g "image_1=westus2 image_2=westus". Each key is the name or resource ID of the managed image to be created. Each value is the location of the image.')
c.argument('shared_image_destinations', nargs='+', help='Shared image gallery (sig) output distributor information. Space-separated list of key-value pairs. E.g "my_gallery_1/image_def_1=eastus,westus my_gallery_2/image_def_2=uksouth,canadaeast,francesouth." '
'Each key is the sig image definition ID or sig gallery name and sig image definition delimited by a "/". Each value is a comma-delimited list of replica locations.')
c.argument('output_name', help=ib_output_name_help)
c.ignore('destinations_lists', 'scripts_list', 'source_dict')
with self.argument_context('image builder create') as c:
ib_source_type = CLIArgumentType(arg_group="Image Source")
ib_customizer_type = CLIArgumentType(arg_group="Customizer")
ib_output_type = CLIArgumentType(arg_group="Output")
c.argument('build_timeout', type=int, help="The maximum duration to wait while building the image template, in minutes. Default is 60.")
c.argument('image_template', help='Local path or URL to an image template file. When using --image-template, all other parameters are ignored except -g and -n. Reference: https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json')
c.argument('identity', nargs='+', help='List of user assigned identities (name or ID, space delimited) of the image template.')
# VM profile
c.argument('vm_size', help='Size of the virtual machine used to build, customize and capture images. Omit or specify empty string to use the default (Standard_D1_v2)')
c.argument('os_disk_size', type=int, help='Size of the OS disk in GB. Omit or specify 0 to use Azure\'s default OS disk size')
c.argument('vnet', help='Name of VNET to deploy the build virtual machine. You should only specify it when subnet is a name')
c.argument('subnet', help='Name or ID of subnet to deploy the build virtual machine')
# Image Source Arguments
c.argument('source', arg_type=ib_source_type)
c.argument('checksum', arg_type=ib_source_type)
c.argument('', arg_type=ib_source_type)
# Image Customizer Arguments
c.argument('scripts', arg_type=ib_customizer_type)
c.argument('', arg_type=ib_customizer_type)
c.argument('', arg_type=ib_customizer_type)
# Image Output Arguments
c.argument('managed_image_destinations', arg_type=ib_output_type)
c.argument('shared_image_destinations', arg_type=ib_output_type)
c.argument('output_name', arg_type=ib_output_type)
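# Illustrative template creation using the argument groups above; the image URN, script URL and
# destination are example values, and some environments may additionally require --identity:
#   az image builder create -g MyGroup -n MyTemplate --image-source <publisher:offer:sku:version> \
#       --scripts https://example.com/setup.sh --managed-image-destinations img_1=westus2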
with self.argument_context('image builder output') as c:
ib_sig_regions_help = "Space-separated list of regions to replicate the image version into."
ib_img_location_help = "Location where the customized image will be created."
c.argument('gallery_image_definition', arg_group="Shared Image Gallery", help="Name or ID of the existing SIG image definition to create the customized image version with.")
c.argument('gallery_name', arg_group="Shared Image Gallery", help="Shared image gallery name, if image definition name and not ID was provided.")
c.argument('gallery_replication_regions', arg_group="Shared Image Gallery", nargs='+', help=ib_sig_regions_help)
c.argument('managed_image', arg_group="Managed Image", help="Name or ID of the customized managed image to be created.")
c.argument('managed_image_location', arg_group="Managed Image", help=ib_img_location_help)
with self.argument_context('image builder output add') as c:
ib_artifact_tags_help = "Tags that will be applied to the output artifact once it has been created by the distributor. " + tags_type.settings['help']
ib_artifact_tags_type = CLIArgumentType(overrides=tags_type, help=ib_artifact_tags_help, options_list=["--artifact-tags"])
ib_default_loc_help = " Defaults to resource group's location."
c.argument('output_name', help=ib_output_name_help + " Defaults to the name of the managed image or sig image definition.")
c.argument('gallery_replication_regions', arg_group="Shared Image Gallery", nargs='+', help=ib_sig_regions_help + ib_default_loc_help)
c.argument('managed_image_location', arg_group="Managed Image", help=ib_img_location_help + ib_default_loc_help)
c.argument('is_vhd', arg_group="VHD", help="The output is a VHD distributor.", action='store_true')
c.argument('tags', arg_type=ib_artifact_tags_type)
c.ignore('location')
with self.argument_context('image builder customizer') as c:
ib_win_restart_type = CLIArgumentType(arg_group="Windows Restart")
ib_win_update_type = CLIArgumentType(arg_group="Windows Update")
ib_script_type = CLIArgumentType(arg_group="Shell and Powershell")
ib_powershell_type = CLIArgumentType(arg_group="Powershell")
ib_file_customizer_type = CLIArgumentType(arg_group="File")
c.argument('customizer_name', help="Name of the customizer.")
c.argument('customizer_type', options_list=['--type', '-t'], help="Type of customizer to be added to the image template.", arg_type=get_enum_type(ScriptType))
# Script Args
c.argument('script_url', arg_type=ib_script_type, help="URL of script to customize the image with. The URL must be publicly accessible.")
c.argument('inline_script', arg_type=ib_script_type, nargs='+', help="Space-separated list of inline script lines to customize the image with.")
# Powershell Specific Args
c.argument('valid_exit_codes', options_list=['--exit-codes', '-e'], arg_type=ib_powershell_type, nargs='+', help="Space-separated list of valid exit codes, as integers")
# Windows Restart Specific Args
c.argument('restart_command', arg_type=ib_win_restart_type, help="Command to execute the restart operation.")
c.argument('restart_check_command', arg_type=ib_win_restart_type, help="Command to verify that restart succeeded.")
c.argument('restart_timeout', arg_type=ib_win_restart_type, help="Restart timeout specified as a string consisting of a magnitude and unit, e.g. '5m' (5 minutes) or '2h' (2 hours)", default="5m")
# Windows Update Specific Args
c.argument('search_criteria', arg_type=ib_win_update_type, help='Criteria to search updates. Omit or specify empty string to use the default (search all). Refer to above link for examples and detailed description of this field.')
c.argument('filters', arg_type=ib_win_update_type, nargs='+', help='Space delimited filters to select updates to apply. Omit or specify empty array to use the default (no filter)')
c.argument('update_limit', arg_type=ib_win_update_type, help='Maximum number of updates to apply at a time. Omit or specify 0 to use the default (1000)')
# File Args
c.argument('file_source', arg_type=ib_file_customizer_type, help="The URI of the file to be downloaded into the image. It can be a github link, SAS URI for Azure Storage, etc.")
c.argument('dest_path', arg_type=ib_file_customizer_type, help="The absolute destination path where the file specified in --file-source will be downloaded to in the image")
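# Illustrative customizer addition; the script URL is a placeholder and the --type value assumes
# 'shell' is a member of the ScriptType enum referenced above:
#   az image builder customizer add -g MyGroup -n MyTemplate --customizer-name install-tools \
#       --type shell --script-url https://example.com/install.sh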
# endregion
# region AvailabilitySets
with self.argument_context('vm availability-set') as c:
c.argument('availability_set_name', name_arg_type, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/availabilitySets'), help='Name of the availability set')
with self.argument_context('vm availability-set create') as c:
c.argument('availability_set_name', name_arg_type, validator=get_default_location_from_resource_group, help='Name of the availability set')
c.argument('platform_update_domain_count', type=int, help='Update Domain count. If unspecified, the server will pick the most optimal number like 5.')
c.argument('platform_fault_domain_count', type=int, help='Fault Domain count.')
c.argument('validate', help='Generate and validate the ARM template without creating any resources.', action='store_true')
c.argument('unmanaged', action='store_true', min_api='2016-04-30-preview', help='contained VMs should use unmanaged disks')
with self.argument_context('vm availability-set update') as c:
if self.supported_api_version(max_api='2016-04-30-preview', operation_group='virtual_machines'):
c.argument('name', name_arg_type, id_part='name', completer=get_resource_name_completion_list('Microsoft.Compute/availabilitySets'), help='Name of the availability set')
c.argument('availability_set_name', options_list=['--availability-set-name'])
# endregion
# region VirtualMachines
with self.argument_context('vm') as c:
c.argument('vm_name', existing_vm_name)
c.argument('size', completer=get_vm_size_completion_list)
c.argument('name', arg_type=name_arg_type)
c.argument('zone', zone_type, min_api='2017-03-30')
c.argument('caching', help='Disk caching policy', arg_type=get_enum_type(CachingTypes))
c.argument('nsg', help='The name to use when creating a new Network Security Group (default) or referencing an existing one. Can also reference an existing NSG by ID or specify "" for none.', arg_group='Network')
c.argument('nsg_rule', help='NSG rule to create when creating a new NSG. Defaults to open ports for allowing RDP on Windows and allowing SSH on Linux.', arg_group='Network', arg_type=get_enum_type(['RDP', 'SSH']))
c.argument('application_security_groups', min_api='2017-09-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network')
c.argument('workspace', is_preview=True, arg_group='Monitor', help='Name or ID of Log Analytics Workspace. If you specify the workspace through its name, the workspace should be in the same resource group as the VM; otherwise a new workspace will be created.')
with self.argument_context('vm capture') as c:
c.argument('overwrite', action='store_true')
with self.argument_context('vm update') as c:
c.argument('os_disk', min_api='2017-12-01', help="Managed OS disk ID or name to swap to")
c.argument('write_accelerator', nargs='*', min_api='2017-12-01',
help="enable/disable disk write accelerator. Use singular value 'true/false' to apply across, or specify individual disks, e.g.'os=true 1=true 2=true' for os disk and data disks with lun of 1 & 2")
c.argument('disk_caching', nargs='*', help="Use a singular value to apply across all disks, or specify individual disks, e.g. 'os=ReadWrite 0=None 1=ReadOnly' to update the OS disk and two data disks")
c.argument('ultra_ssd_enabled', ultra_ssd_enabled_type)
c.argument('enable_secure_boot', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable secure boot.')
c.argument('enable_vtpm', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable vTPM.')
c.argument('size', help='The new size of the virtual machine. See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.', is_preview=True)
c.argument('ephemeral_os_disk_placement', arg_type=ephemeral_placement_type,
help='Only applicable when used with `--size`. Allows you to choose the Ephemeral OS disk provisioning location.', is_preview=True)
c.argument('enable_hibernation', arg_type=get_three_state_flag(), min_api='2021-03-01', help='Flag that enables or disables hibernation capability on the VM.')
c.argument('v_cpus_available', type=int, min_api='2021-07-01', help='Specify the number of vCPUs available for the VM')
c.argument('v_cpus_per_core', type=int, min_api='2021-07-01', help='Specify the ratio of vCPU to physical core. Setting this property to 1 also means that hyper-threading is disabled.')
with self.argument_context('vm create') as c:
c.argument('name', name_arg_type, validator=_resource_not_exists(self.cli_ctx, 'Microsoft.Compute/virtualMachines'))
c.argument('vm_name', name_arg_type, id_part=None, help='Name of the virtual machine.', completer=None)
c.argument('os_disk_size_gb', type=int, help='the size of the os disk in GB', arg_group='Storage')
c.argument('availability_set', help='Name or ID of an existing availability set to add the VM to. None by default.')
c.argument('vmss', help='Name or ID of an existing virtual machine scale set that the virtual machine should be assigned to. None by default.')
c.argument('nsg', help='The name to use when creating a new Network Security Group (default) or referencing an existing one. Can also reference an existing NSG by ID or specify "" for none (\'""\' in Azure CLI using PowerShell or --% operator).', arg_group='Network')
c.argument('nsg_rule', help='NSG rule to create when creating a new NSG. Defaults to open ports for allowing RDP on Windows and allowing SSH on Linux. NONE represents no NSG rule', arg_group='Network', arg_type=get_enum_type(['RDP', 'SSH', 'NONE']))
c.argument('application_security_groups', resource_type=ResourceType.MGMT_NETWORK, min_api='2017-09-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network', validator=validate_asg_names_or_ids)
c.argument('boot_diagnostics_storage',
help='pre-existing storage account name or its blob uri to capture boot diagnostics. Its sku should be one of Standard_GRS, Standard_LRS and Standard_RAGRS')
c.argument('accelerated_networking', resource_type=ResourceType.MGMT_NETWORK, min_api='2016-09-01', arg_type=get_three_state_flag(), arg_group='Network',
help="enable accelerated networking. Unless specified, CLI will enable it based on machine image and size")
if self.supported_api_version(min_api='2019-03-01', resource_type=ResourceType.MGMT_COMPUTE):
VirtualMachineEvictionPolicyTypes = self.get_models('VirtualMachineEvictionPolicyTypes', resource_type=ResourceType.MGMT_COMPUTE)
c.argument('eviction_policy', resource_type=ResourceType.MGMT_COMPUTE, min_api='2019-03-01',
arg_type=get_enum_type(VirtualMachineEvictionPolicyTypes, default=None),
help="The eviction policy for the Spot priority virtual machine. Default eviction policy is Deallocate for a Spot priority virtual machine")
c.argument('enable_agent', arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Indicates whether virtual machine agent should be provisioned on the virtual machine. When this property is not specified, default behavior is to set it to true. This will ensure that VM Agent is installed on the VM so that extensions can be added to the VM later')
c.argument('enable_auto_update', arg_type=get_three_state_flag(), min_api='2020-06-01',
help='Indicate whether Automatic Updates is enabled for the Windows virtual machine')
c.argument('patch_mode', arg_type=get_enum_type(['AutomaticByOS', 'AutomaticByPlatform', 'Manual', 'ImageDefault']), min_api='2020-12-01',
help='Mode of in-guest patching of the IaaS virtual machine. Allowed values for Windows VM: AutomaticByOS, AutomaticByPlatform, Manual. Allowed values for Linux VM: AutomaticByPlatform, ImageDefault. Manual - You control the application of patches to a virtual machine. You do this by applying patches manually inside the VM. In this mode, automatic updates are disabled; the parameter --enable-auto-update must be false. AutomaticByOS - The virtual machine will automatically be updated by the OS. The parameter --enable-auto-update must be true. AutomaticByPlatform - The virtual machine will automatically be updated by the OS. ImageDefault - The virtual machine\'s default patching configuration is used. The parameters --enable-agent and --enable-auto-update must be true')
c.argument('ssh_key_name', help='Use it as public key in virtual machine. It should be an existing SSH key resource in Azure.')
c.argument('enable_hotpatching', arg_type=get_three_state_flag(), help='Patch VMs without requiring a reboot. --enable-agent must be set and --patch-mode must be set to AutomaticByPlatform', min_api='2020-12-01')
c.argument('platform_fault_domain', min_api='2020-06-01',
help='Specify the scale set logical fault domain into which the virtual machine will be created. By default, the virtual machine will be automatically assigned to a fault domain that best maintains balance across available fault domains. This is applicable only if the virtualMachineScaleSet property of this virtual machine is set. The virtual machine scale set that is referenced, must have platform fault domain count. This property cannot be updated once the virtual machine is created. Fault domain assignment can be viewed in the virtual machine instance view')
c.argument('count', type=int, is_preview=True,
help='Number of virtual machines to create. Value range is [2, 250], inclusive. Don\'t specify this parameter if you want to create a normal single VM. The VMs are created in parallel. The output of this command is an array of VMs instead of one single VM. Each VM has its own public IP, NIC. VNET and NSG are shared. It is recommended that no existing public IP, NIC, VNET and NSG are in resource group. When --count is specified, --attach-data-disks, --attach-os-disk, --boot-diagnostics-storage, --computer-name, --host, --host-group, --nics, --os-disk-name, --private-ip-address, --public-ip-address, --public-ip-address-dns-name, --storage-account, --storage-container-name, --subnet, --use-unmanaged-disk, --vnet-name are not allowed.')
c.argument('security_type', arg_type=get_enum_type(['TrustedLaunch']), min_api='2020-12-01',
help='Specify if the VM is Trusted Launch enabled. See https://docs.microsoft.com/azure/virtual-machines/trusted-launch.')
c.argument('enable_secure_boot', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable secure boot. It is part of trusted launch.')
c.argument('enable_vtpm', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Enable vTPM. It is part of trusted launch.')
c.argument('user_data', help='UserData for the VM. It can be passed in as file or string.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
c.argument('enable_hibernation', arg_type=get_three_state_flag(), min_api='2021-03-01', help='The flag that enables or disables hibernation capability on the VM.')
c.argument('v_cpus_available', type=int, min_api='2021-07-01', help='Specify the number of vCPUs available for the VM')
c.argument('v_cpus_per_core', type=int, min_api='2021-07-01', help='Specify the vCPU to physical core ratio. Setting this property to 1 also means that hyper-threading is disabled.')
with self.argument_context('vm create', arg_group='Storage') as c:
c.argument('attach_os_disk', help='Attach an existing OS disk to the VM. Can use the name or ID of a managed disk or the URI to an unmanaged disk VHD.')
c.argument('attach_data_disks', nargs='+', help='Attach existing data disks to the VM. Can use the name or ID of a managed disk or the URI to an unmanaged disk VHD.')
with self.argument_context('vm create', arg_group='Dedicated Host', min_api='2019-03-01') as c:
c.argument('dedicated_host_group', options_list=['--host-group'], is_preview=True, help="Name or ID of the dedicated host group that the VM will reside in. --host and --host-group can't be used together.")
c.argument('dedicated_host', options_list=['--host'], is_preview=True, help="ID of the dedicated host that the VM will reside in. --host and --host-group can't be used together.")
with self.argument_context('vm update', arg_group='Dedicated Host', min_api='2019-03-01') as c:
c.argument('dedicated_host_group', options_list=['--host-group'], is_preview=True, help="Name or ID of the dedicated host group that the VM will reside in. --host and --host-group can't be used together. You should deallocate the VM before update, and start the VM after update. Please check out help for more examples.")
c.argument('dedicated_host', options_list=['--host'], is_preview=True, help="ID of the dedicated host that the VM will reside in. --host and --host-group can't be used together. You should deallocate the VM before update, and start the VM after update. Please check out help for more examples.")
with self.argument_context('vm open-port') as c:
c.argument('vm_name', name_arg_type, help='The name of the virtual machine to open inbound traffic on.')
c.argument('network_security_group_name', options_list=('--nsg-name',), help='The name of the network security group to create if one does not exist. Ignored if an NSG already exists.', validator=validate_nsg_name)
c.argument('apply_to_subnet', help='Allow inbound traffic on the subnet instead of the NIC', action='store_true')
c.argument('port', help="The port or port range (ex: 80-100) to open inbound traffic to. Use '*' to allow traffic to all ports. Use comma separated values to specify more than one port or port range.")
c.argument('priority', help='Rule priority, between 100 (highest priority) and 4096 (lowest priority). Must be unique for each rule in the collection.', type=int)
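# Illustrative sketch combining the 'vm open-port' arguments above; names are placeholders.
#   az vm open-port -g MyResourceGroup -n MyVm --port 80-100 --priority 900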
for scope in ['vm show', 'vm list']:
with self.argument_context(scope) as c:
c.argument('show_details', action='store_true', options_list=['--show-details', '-d'], help='Show public IP address, FQDN, and power states. Note that the command will run slower.')
for scope in ['vm show', 'vmss show']:
with self.argument_context(scope) as c:
c.argument('include_user_data', action='store_true', options_list=['--include-user-data', '-u'], help='Include the user data properties in the query result.', min_api='2021-03-01')
for scope in ['vm get-instance-view', 'vm wait', 'vmss wait']:
with self.argument_context(scope) as c:
c.ignore('include_user_data')
with self.argument_context('vm diagnostics') as c:
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'])
with self.argument_context('vm diagnostics set') as c:
c.argument('storage_account', completer=get_resource_name_completion_list('Microsoft.Storage/storageAccounts'))
with self.argument_context('vm install-patches') as c:
c.argument('maximum_duration', type=str, help='Specify the maximum amount of time that the operation will run. It must be an ISO 8601-compliant duration string such as PT4H (4 hours)')
c.argument('reboot_setting', arg_type=get_enum_type(RebootSetting), help='Define when it is acceptable to reboot a VM during a software update operation.')
c.argument('classifications_to_include_win', nargs='+', arg_type=get_enum_type(VMGuestPatchClassificationWindows), help='Space-separated list of classifications to include for Windows VM.')
c.argument('classifications_to_include_linux', nargs='+', arg_type=get_enum_type(VMGuestPatchClassificationLinux), help='Space-separated list of classifications to include for Linux VM.')
c.argument('kb_numbers_to_include', nargs='+', help='Space-separated list of KBs to include in the patch operation. Applicable to Windows VM only')
c.argument('kb_numbers_to_exclude', nargs='+', help='Space-separated list of KBs to exclude in the patch operation. Applicable to Windows VM only')
c.argument('exclude_kbs_requiring_reboot', arg_type=get_three_state_flag(), help="Filter out KBs that don't have a reboot behavior of 'NeverReboots' when this is set. Applicable to Windows VM only")
c.argument('package_name_masks_to_include', nargs='+', help='Space-separated list of packages to include in the patch operation. Format: packageName_packageVersion. Applicable to Linux VM only')
c.argument('package_name_masks_to_exclude', nargs='+', help='Space-separated list of packages to exclude in the patch operation. Format: packageName_packageVersion. Applicable to Linux VM only')
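# Illustrative sketch of 'vm install-patches' for a Windows VM; the values shown are examples only and the
# classification names are assumptions based on the enums referenced above.
#   az vm install-patches -g MyResourceGroup -n MyWinVm --maximum-duration PT2H \
#       --reboot-setting IfRequired --classifications-to-include-win Critical Security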
with self.argument_context('vm disk') as c:
c.argument('vm_name', options_list=['--vm-name'], id_part=None, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines'))
c.argument('new', action='store_true', help='create a new disk')
c.argument('sku', arg_type=disk_sku, help='Underlying storage SKU')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
c.argument('lun', type=int, help='0-based logical unit number (LUN). Max value depends on the Virtual Machine size.')
with self.argument_context('vm disk attach') as c:
c.argument('enable_write_accelerator', min_api='2017-12-01', action='store_true', help='enable write accelerator')
c.argument('disk', options_list=['--name', '-n', c.deprecate(target='--disk', redirect='--name', hide=True)],
help="The name or ID of the managed disk", validator=validate_vm_disk, id_part='name',
completer=get_resource_name_completion_list('Microsoft.Compute/disks'))
with self.argument_context('vm disk detach') as c:
c.argument('disk_name', arg_type=name_arg_type, help='The data disk name.')
with self.argument_context('vm encryption enable') as c:
c.argument('encrypt_format_all', action='store_true', help='Encrypt-format data disks instead of encrypting them in place. Encrypt-formatting is much faster than in-place encryption, but wipes out the partition being encrypt-formatted.')
# Place aad arguments in their own group
aad_arguments = 'Azure Active Directory'
c.argument('aad_client_id', arg_group=aad_arguments)
c.argument('aad_client_secret', arg_group=aad_arguments)
c.argument('aad_client_cert_thumbprint', arg_group=aad_arguments)
with self.argument_context('vm extension') as c:
c.argument('vm_extension_name', name_arg_type, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines/extensions'), help='Name of the extension.', id_part='child_name_1')
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'], id_part='name')
c.argument('expand', deprecate_info=c.deprecate(expiration='3.0.0', hide=True))
with self.argument_context('vm extension list') as c:
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'], id_part=None)
with self.argument_context('vm secret') as c:
c.argument('secrets', multi_ids_type, options_list=['--secrets', '-s'], help='Space-separated list of key vault secret URIs, e.g. as produced by \'az keyvault secret list-versions --vault-name vaultname -n cert1 --query "[?attributes.enabled].id" -o tsv\'')
c.argument('keyvault', help='Name or ID of the key vault.', validator=validate_keyvault)
c.argument('certificate', help='key vault certificate name or its full secret URL')
c.argument('certificate_store', help='Windows certificate store names. Default: My')
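# Illustrative sketch of adding a key vault certificate to a VM with the arguments above; names are placeholders
# and the certificate must already exist in the vault.
#   az vm secret add -g MyResourceGroup -n MyVm --keyvault MyVault --certificate MyCert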
with self.argument_context('vm secret list') as c:
c.argument('vm_name', arg_type=existing_vm_name, id_part=None)
with self.argument_context('vm image') as c:
c.argument('publisher_name', options_list=['--publisher', '-p'], help='image publisher')
c.argument('publisher', options_list=['--publisher', '-p'], help='image publisher')
c.argument('offer', options_list=['--offer', '-f'], help='image offer')
c.argument('plan', help='image billing plan')
c.argument('sku', options_list=['--sku', '-s'], help='image sku')
c.argument('version', help="image sku's version")
c.argument('urn', help="URN, in format of 'publisher:offer:sku:version' or 'publisher:offer:sku:edge_zone:version'. If specified, other argument values can be omitted")
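# Illustrative sketch of the URN shorthand registered above; publisher, offer, sku, and version are placeholders.
#   az vm image show --urn <publisher>:<offer>:<sku>:<version>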
with self.argument_context('vm image list') as c:
c.argument('image_location', get_location_type(self.cli_ctx))
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image list-offers') as c:
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image list-skus') as c:
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image list-publishers') as c:
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image show') as c:
c.argument('skus', options_list=['--sku', '-s'])
c.argument('edge_zone', edge_zone_type)
with self.argument_context('vm image terms') as c:
c.argument('urn', help='URN, in the format of \'publisher:offer:sku:version\'. If specified, other argument values can be omitted')
c.argument('publisher', help='Image publisher')
c.argument('offer', help='Image offer')
c.argument('plan', help='Image billing plan')
with self.argument_context('vm nic') as c:
c.argument('vm_name', existing_vm_name, options_list=['--vm-name'], id_part=None)
c.argument('nics', nargs='+', help='Names or IDs of NICs.', validator=validate_vm_nics)
c.argument('primary_nic', help='Name or ID of the primary NIC. If missing, the first NIC in the list will be the primary.')
with self.argument_context('vm nic show') as c:
c.argument('nic', help='NIC name or ID.', validator=validate_vm_nic)
with self.argument_context('vm unmanaged-disk') as c:
c.argument('new', action='store_true', help='Create a new disk.')
c.argument('lun', type=int, help='0-based logical unit number (LUN). Max value depends on the Virtual Machine size.')
c.argument('vhd_uri', help="Virtual hard disk URI. For example: https://mystorage.blob.core.windows.net/vhds/d1.vhd")
with self.argument_context('vm unmanaged-disk attach') as c:
c.argument('disk_name', options_list=['--name', '-n'], help='The data disk name.')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
with self.argument_context('vm unmanaged-disk detach') as c:
c.argument('disk_name', options_list=['--name', '-n'], help='The data disk name.')
for scope in ['vm unmanaged-disk attach', 'vm unmanaged-disk detach']:
with self.argument_context(scope) as c:
c.argument('vm_name', arg_type=existing_vm_name, options_list=['--vm-name'], id_part=None)
with self.argument_context('vm unmanaged-disk list') as c:
c.argument('vm_name', options_list=['--vm-name', '--name', '-n'], arg_type=existing_vm_name, id_part=None)
with self.argument_context('vm user') as c:
c.argument('username', options_list=['--username', '-u'], help='The user name')
c.argument('password', options_list=['--password', '-p'], help='The user password')
with self.argument_context('vm list-skus') as c:
c.argument('size', options_list=['--size', '-s'], help="size name, partial name is accepted")
c.argument('zone', options_list=['--zone', '-z'], arg_type=get_three_state_flag(), help="show skus supporting availability zones")
c.argument('show_all', options_list=['--all'], arg_type=get_three_state_flag(),
help="show all information including vm sizes not available under the current subscription")
c.argument('resource_type', options_list=['--resource-type', '-r'], help='resource types e.g. "availabilitySets", "snapshots", "disks", etc')
with self.argument_context('vm restart') as c:
c.argument('force', action='store_true', help='Force the VM to restart by redeploying it. Use if the VM is unresponsive.')
with self.argument_context('vm host') as c:
c.argument('host_group_name', options_list=['--host-group'], id_part='name', help="Name of the Dedicated Host Group")
c.argument('host_name', name_arg_type, id_part='child_name_1', help="Name of the Dedicated Host")
c.ignore('expand')
with self.argument_context('vm host create') as c:
c.argument('platform_fault_domain', options_list=['--platform-fault-domain', '-d'], type=int,
help="Fault domain of the host within a group. Allowed values: 0, 1, 2")
c.argument('auto_replace_on_failure', options_list=['--auto-replace'], arg_type=get_three_state_flag(),
help="Replace the host automatically if a failure occurs")
c.argument('license_type', arg_type=get_enum_type(DedicatedHostLicenseTypes),
help="The software license type that will be applied to the VMs deployed on the dedicated host.")
c.argument('sku', help="SKU of the dedicated host. Available SKUs: https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/")
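# Illustrative sketch of 'vm host create' with the arguments above; names are placeholders and the SKU value
# is an example only.
#   az vm host create -g MyResourceGroup --host-group MyHostGroup -n MyHost \
#       --sku DSv3-Type1 --platform-fault-domain 0 --auto-replace true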
with self.argument_context('vm host list') as c:
c.argument('host_group_name', id_part=None)
with self.argument_context('vm host group') as c:
c.argument('host_group_name', name_arg_type, id_part='name', help="Name of the Dedicated Host Group")
c.argument('automatic_placement', arg_type=get_three_state_flag(), min_api='2020-06-01',
help='Specify whether virtual machines or virtual machine scale sets can be placed automatically '
'on the dedicated host group. Automatic placement means resources are allocated on dedicated '
'hosts that are chosen by Azure, under the dedicated host group. The value defaults to '
'false when not provided.')
with self.argument_context('vm host group create') as c:
c.argument('platform_fault_domain_count', options_list=["--platform-fault-domain-count", "-c"], type=int,
help="Number of fault domains that the host group can span.")
c.argument('zones', zone_type)
for scope in ["vm host", "vm host group"]:
with self.argument_context("{} create".format(scope)) as c:
location_type = get_location_type(self.cli_ctx)
custom_location_msg = " Otherwise, location will default to the resource group's location"
custom_location_type = CLIArgumentType(overrides=location_type,
help=location_type.settings["help"] + custom_location_msg)
c.argument('location', arg_type=custom_location_type)
# endregion
# region VMSS
scaleset_name_aliases = ['vm_scale_set_name', 'virtual_machine_scale_set_name', 'name']
with self.argument_context('vmss') as c:
c.argument('zones', zones_type, min_api='2017-03-30')
c.argument('instance_id', id_part='child_name_1')
c.argument('instance_ids', multi_ids_type, help='Space-separated list of IDs (ex: 1 2 3 ...) or * for all instances. If not provided, the action will be applied on the scaleset itself')
c.argument('tags', tags_type)
c.argument('caching', help='Disk caching policy', arg_type=get_enum_type(CachingTypes))
for dest in scaleset_name_aliases:
c.argument(dest, vmss_name_type)
c.argument('host_group', min_api='2020-06-01',
help='Name or ID of dedicated host group that the virtual machine scale set resides in')
for scope in ['vmss deallocate', 'vmss delete-instances', 'vmss restart', 'vmss start', 'vmss stop', 'vmss show', 'vmss update-instances', 'vmss simulate-eviction']:
with self.argument_context(scope) as c:
for dest in scaleset_name_aliases:
c.argument(dest, vmss_name_type, id_part=None) # due to instance-ids parameter
with self.argument_context('vmss create', operation_group='virtual_machine_scale_sets') as c:
VirtualMachineEvictionPolicyTypes = self.get_models('VirtualMachineEvictionPolicyTypes', resource_type=ResourceType.MGMT_COMPUTE)
c.argument('name', name_arg_type)
c.argument('nat_backend_port', default=None, help='Backend port to open with NAT rules. Defaults to 22 on Linux and 3389 on Windows.')
c.argument('single_placement_group', arg_type=get_three_state_flag(), help="Limit the scale set to a single placement group."
" See https://docs.microsoft.com/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups for details.")
c.argument('platform_fault_domain_count', type=int, help='Fault Domain count for each placement group in the availability zone', min_api='2017-12-01')
c.argument('vmss_name', name_arg_type, id_part=None, help='Name of the virtual machine scale set.')
c.argument('instance_count', help='Number of VMs in the scale set.', type=int)
c.argument('disable_overprovision', help='Overprovision option (see https://azure.microsoft.com/documentation/articles/virtual-machine-scale-sets-overview/ for details).', action='store_true')
c.argument('upgrade_policy_mode', help=None, arg_type=get_enum_type(UpgradeMode))
c.argument('health_probe', help='Probe name from the existing load balancer, mainly used for rolling upgrade or automatic repairs')
c.argument('vm_sku', help='Size of VMs in the scale set. Default to "Standard_DS1_v2". See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.')
c.argument('nsg', help='Name or ID of an existing Network Security Group.', arg_group='Network')
c.argument('eviction_policy', resource_type=ResourceType.MGMT_COMPUTE, min_api='2017-12-01', arg_type=get_enum_type(VirtualMachineEvictionPolicyTypes, default=None),
help="The eviction policy for virtual machines in a Spot priority scale set. Default eviction policy is Deallocate for a Spot priority scale set")
c.argument('application_security_groups', resource_type=ResourceType.MGMT_COMPUTE, min_api='2018-06-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network', validator=validate_asg_names_or_ids)
c.argument('computer_name_prefix', help='Computer name prefix for all of the virtual machines in the scale set. Computer name prefixes must be 1 to 15 characters long')
c.argument('orchestration_mode', help='Choose how virtual machines are managed by the scale set. In Uniform mode, you define a virtual machine model and Azure will generate identical instances based on that model. In Flexible mode, you manually create and add a virtual machine of any configuration to the scale set or generate identical instances based on virtual machine model defined for the scale set.',
arg_type=get_enum_type(['Uniform', 'Flexible']))
c.argument('scale_in_policy', scale_in_policy_type)
c.argument('automatic_repairs_grace_period', min_api='2018-10-01',
help='The amount of time (in minutes, between 30 and 90) for which automatic repairs are suspended due to a state change on VM.')
c.argument('user_data', help='UserData for the virtual machines in the scale set. It can be passed in as file or string.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
c.argument('network_api_version', min_api='2021-03-01',
help="Specify the Microsoft.Network API version used when creating networking resources in the Network "
"Interface Configurations for Virtual Machine Scale Set with orchestration mode 'Flexible'. Default "
"value is 2020-11-01.")
c.argument('enable_spot_restore', arg_type=get_three_state_flag(), min_api='2021-04-01', help='Enable the Spot-Try-Restore feature, where evicted VMSS Spot instances are opportunistically restored based on capacity availability and pricing constraints')
c.argument('spot_restore_timeout', min_api='2021-04-01', help='Timeout value expressed as an ISO 8601 time duration after which the platform will not try to restore the VMSS SPOT instances')
c.argument('enable_agent', arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Indicate whether virtual machine agent should be provisioned on the virtual machine. When this property is not specified, default behavior is to set it to true. This will ensure that VM Agent is installed on the VM so that extensions can be added to the VM later')
c.argument('enable_auto_update', arg_type=get_three_state_flag(), min_api='2020-06-01',
help='Indicate whether Automatic Updates is enabled for the Windows virtual machine')
c.argument('patch_mode', arg_type=get_enum_type(['AutomaticByOS', 'AutomaticByPlatform', 'Manual', 'ImageDefault']), min_api='2020-12-01',
help='Mode of in-guest patching to IaaS virtual machine. Allowed values for Windows VM: AutomaticByOS, AutomaticByPlatform, Manual. Allowed values for Linux VM: AutomaticByPlatform, ImageDefault. Manual - You control the application of patches to a virtual machine. You do this by applying patches manually inside the VM. In this mode, automatic updates are disabled; the parameter --enable-auto-update must be false. AutomaticByOS - The virtual machine will automatically be updated by the OS. The parameter --enable-auto-update must be true. AutomaticByPlatform - The virtual machine will automatically be updated by the platform. ImageDefault - The virtual machine\'s default patching configuration is used. The parameters --enable-agent and --enable-auto-update must be true')
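# Illustrative sketch of a Flexible-orchestration scale set using several of the arguments above; names and
# the image URN are placeholders.
#   az vmss create -g MyResourceGroup -n MyVmss --orchestration-mode Flexible \
#       --image <image-urn> --instance-count 3 --vm-sku Standard_DS1_v2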
with self.argument_context('vmss create', arg_group='Network Balancer') as c:
LoadBalancerSkuName = self.get_models('LoadBalancerSkuName', resource_type=ResourceType.MGMT_NETWORK)
c.argument('application_gateway', help='Name to use when creating a new application gateway (default) or referencing an existing one. Can also reference an existing application gateway by ID or specify "" for none.', options_list=['--app-gateway'])
c.argument('app_gateway_capacity', help='The number of instances to use when creating a new application gateway.')
c.argument('app_gateway_sku', help='SKU when creating a new application gateway.')
c.argument('app_gateway_subnet_address_prefix', help='The subnet IP address prefix to use when creating a new application gateway in CIDR format.')
c.argument('backend_pool_name', help='Name to use for the backend pool when creating a new load balancer or application gateway.')
c.argument('backend_port', help='When creating a new load balancer, backend port to open with NAT rules (Defaults to 22 on Linux and 3389 on Windows). When creating an application gateway, the backend port to use for the backend HTTP settings.', type=int)
c.argument('load_balancer', help='Name to use when creating a new load balancer (default) or referencing an existing one. Can also reference an existing load balancer by ID or specify "" for none.', options_list=['--load-balancer', '--lb'])
c.argument('load_balancer_sku', resource_type=ResourceType.MGMT_NETWORK, min_api='2017-08-01', options_list=['--lb-sku'], arg_type=get_enum_type(LoadBalancerSkuName),
help="Sku of the Load Balancer to create. Default to 'Standard' when single placement group is turned off; otherwise, default to 'Basic'. The public IP is supported to be created on edge zone only when it is 'Standard'")
c.argument('nat_pool_name', help='Name to use for the NAT pool when creating a new load balancer.', options_list=['--lb-nat-pool-name', '--nat-pool-name'])
with self.argument_context('vmss create', min_api='2017-03-30', arg_group='Network') as c:
c.argument('public_ip_per_vm', action='store_true', help="Each VM instance will have a public ip. For security, you can use '--nsg' to apply appropriate rules")
c.argument('vm_domain_name', help="Domain name of VM instances. Once configured, the FQDN is `vm<vm-index>.<vm-domain-name>.<..rest..>`")
c.argument('dns_servers', nargs='+', help="space-separated IP addresses of DNS servers, e.g. 10.0.0.5 10.0.0.6")
c.argument('accelerated_networking', arg_type=get_three_state_flag(),
help="enable accelerated networking. Unless specified, CLI will enable it based on machine image and size")
with self.argument_context('vmss update') as c:
protection_policy_type = CLIArgumentType(overrides=get_three_state_flag(), arg_group="Protection Policy", min_api='2019-03-01')
c.argument('protect_from_scale_in', arg_type=protection_policy_type, help="Protect the VM instance from scale-in operations.")
c.argument('protect_from_scale_set_actions', arg_type=protection_policy_type, help="Protect the VM instance from scale set actions (including scale-in).")
c.argument('enable_terminate_notification', min_api='2019-03-01', arg_type=get_three_state_flag(),
help='Enable terminate notification')
c.argument('ultra_ssd_enabled', ultra_ssd_enabled_type)
c.argument('scale_in_policy', scale_in_policy_type)
c.argument('user_data', help='UserData for the virtual machines in the scale set. It can be passed in as file or string. If empty string is passed in, the existing value will be deleted.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
c.argument('enable_spot_restore', arg_type=get_three_state_flag(), min_api='2021-04-01',
help='Enable the Spot-Try-Restore feature, where evicted VMSS Spot instances are opportunistically restored based on capacity availability and pricing constraints')
c.argument('spot_restore_timeout', min_api='2021-04-01',
help='Timeout value expressed as an ISO 8601 time duration after which the platform will not try to restore the VMSS SPOT instances')
c.argument('vm_sku', help='The new size of the virtual machine instances in the scale set. Default to "Standard_DS1_v2". See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.', is_preview=True)
c.argument('ephemeral_os_disk_placement', arg_type=ephemeral_placement_type,
help='Only applicable when used with `--vm-sku`. Allows you to choose the Ephemeral OS disk provisioning location.', is_preview=True)
with self.argument_context('vmss update', min_api='2018-10-01', arg_group='Automatic Repairs') as c:
c.argument('enable_automatic_repairs', arg_type=get_three_state_flag(), help='Enable automatic repairs')
c.argument(
'automatic_repairs_grace_period',
help='The amount of time (in minutes, between 30 and 90) for which automatic repairs are suspended due to a state change on VM.'
)
for scope in ['vmss create', 'vmss update']:
with self.argument_context(scope) as c:
c.argument('terminate_notification_time', min_api='2019-03-01',
help='Length of time (in minutes, between 5 and 15) during which a termination notification is sent to the VM on the instance metadata server before the VM gets deleted')
c.argument('max_batch_instance_percent', type=int, min_api='2020-12-01',
help='The maximum percent of total virtual machine instances that will be upgraded simultaneously by the rolling upgrade in one batch. Default: 20%')
c.argument('max_unhealthy_instance_percent', type=int, min_api='2020-12-01',
help='The maximum percentage of the total virtual machine instances in the scale set that can be simultaneously unhealthy. Default: 20%')
c.argument('max_unhealthy_upgraded_instance_percent', type=int, min_api='2020-12-01',
help='The maximum percentage of upgraded virtual machine instances that can be found to be in an unhealthy state. Default: 20%')
c.argument('pause_time_between_batches', min_api='2020-12-01',
help='The wait time between completing the update for all virtual machines in one batch and starting the next batch. Default: 0 seconds')
c.argument('enable_cross_zone_upgrade', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Setting this Boolean property allows VMSS to ignore AZ boundaries when constructing upgrade batches, and to consider only Update Domain and maxBatchInstancePercent to determine the batch size')
c.argument('prioritize_unhealthy_instances', arg_type=get_three_state_flag(), min_api='2020-12-01',
help='Setting this Boolean property causes all unhealthy instances in a scale set to be upgraded before any healthy instances')
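# Illustrative sketch of tuning the rolling upgrade policy with the shared arguments above; flag names are
# derived from the parameter names and the values are examples only.
#   az vmss update -g MyResourceGroup -n MyVmss --max-batch-instance-percent 10 \
#       --max-unhealthy-instance-percent 20 --pause-time-between-batches PT30S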
for scope, help_prefix in [('vmss update', 'Update the'), ('vmss wait', 'Wait on the')]:
with self.argument_context(scope) as c:
c.argument('instance_id', id_part='child_name_1', help="{0} VM instance with this ID. If missing, {0} VMSS.".format(help_prefix))
for scope in ['vmss update-instances', 'vmss delete-instances']:
with self.argument_context(scope) as c:
c.argument('instance_ids', multi_ids_type, help='Space-separated list of IDs (ex: 1 2 3 ...) or * for all instances.')
with self.argument_context('vmss diagnostics') as c:
c.argument('vmss_name', id_part=None, help='Scale set name')
with self.argument_context('vmss disk') as c:
options_list = ['--vmss-name'] + [c.deprecate(target=opt, redirect='--vmss-name', hide=True) for opt in name_arg_type.settings['options_list']]
new_vmss_name_type = CLIArgumentType(overrides=vmss_name_type, options_list=options_list)
c.argument('lun', type=int, help='0-based logical unit number (LUN). Max value depends on the Virtual Machine instance size.')
c.argument('size_gb', options_list=['--size-gb', '-z'], help='size in GB. Max size: 4095 GB (certain preview disks can be larger).', type=int)
c.argument('vmss_name', new_vmss_name_type, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'))
c.argument('disk', validator=validate_vmss_disk, help='existing disk name or ID to attach or detach from VM instances',
min_api='2017-12-01', completer=get_resource_name_completion_list('Microsoft.Compute/disks'))
c.argument('instance_id', help='Scale set VM instance id', min_api='2017-12-01')
c.argument('sku', arg_type=disk_sku, help='Underlying storage SKU')
with self.argument_context('vmss encryption') as c:
c.argument('vmss_name', vmss_name_type, completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'))
with self.argument_context('vmss extension') as c:
c.argument('extension_name', name_arg_type, help='Name of the extension.')
c.argument('vmss_name', vmss_name_type, options_list=['--vmss-name'], id_part=None)
with self.argument_context('vmss nic') as c:
c.argument('virtual_machine_scale_set_name', options_list=['--vmss-name'], help='Scale set name.', completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachineScaleSets'), id_part='name')
c.argument('virtualmachine_index', options_list=['--instance-id'], id_part='child_name_1')
c.argument('network_interface_name', options_list=['--name', '-n'], metavar='NIC_NAME', help='The network interface (NIC).', completer=get_resource_name_completion_list('Microsoft.Network/networkInterfaces'), id_part='child_name_2')
with self.argument_context('vmss nic list') as c:
c.argument('virtual_machine_scale_set_name', arg_type=vmss_name_type, options_list=['--vmss-name'], id_part=None)
with self.argument_context('vmss set-orchestration-service-state') as c:
c.argument('service_name', arg_type=get_enum_type(OrchestrationServiceNames), help='The name of the orchestration service.')
c.argument('action', arg_type=get_enum_type(OrchestrationServiceStateAction), help='The action to be performed.')
# endregion
# region VM & VMSS Shared
for scope in ['vm', 'vmss']:
with self.argument_context(scope) as c:
c.argument('no_auto_upgrade',
options_list=['--no-auto-upgrade-minor-version', c.deprecate(target='--no-auto-upgrade', redirect='--no-auto-upgrade-minor-version')],
arg_type=get_three_state_flag(),
help='If set, the extension service will not automatically pick or upgrade to the latest minor version, even if the extension is redeployed.')
with self.argument_context('{} run-command'.format(scope)) as c:
c.argument('command_id', completer=get_vm_run_command_completion_list, help="The command id. Use 'az {} run-command list' to get the list".format(scope))
if scope == 'vmss':
c.argument('vmss_name', vmss_name_type)
with self.argument_context('{} run-command invoke'.format(scope)) as c:
c.argument('parameters', nargs='+', help="space-separated parameters in the format of '[name=]value'")
c.argument('scripts', nargs='+', help="Space-separated script lines. Use @{file} to load script from a file")
with self.argument_context('{} stop'.format(scope)) as c:
c.argument('skip_shutdown', action='store_true', help='Skip shutdown and power-off immediately.', min_api='2019-03-01')
run_cmd_name_type = CLIArgumentType(options_list=['--name', '--run-command-name'], help='The name of the virtual machine run command.')
run_cmd_vm_name = CLIArgumentType(options_list=['--vm-name'], help='The name of the virtual machine')
for scope in ['create', 'update']:
with self.argument_context('vm run-command {}'.format(scope)) as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('script', help='Contents of the PowerShell or bash script to execute on the VM.')
c.argument('script_uri', help='URI of the script to execute on the VM. The URI can be any link accessible from the VM, or a storage blob without SAS. If the subscription has access to the storage blob, a SAS will be auto-generated.')
c.argument('command_id', help='Specify a command id of predefined script. All command ids can be listed using "list" command.')
c.argument('parameters', nargs='+', help='Set custom parameters in a name-value pair.')
c.argument('protected_parameters', nargs='+', help='Set custom parameters in a name-value pair. These parameters will be encrypted during transmission and will not be logged.')
c.argument('async_execution', arg_type=get_three_state_flag(), help='Optional. If set to true, provisioning '
'will complete as soon as the script starts and will not wait for script to complete.')
c.argument('run_as_user', help='By default, the script process runs under the system/root user. Specify a custom user to host the process.')
c.argument('run_as_password', help='Password, if needed, for using the run-as-user parameter. It will be encrypted and not logged.')
c.argument('timeout_in_seconds', type=int, help='The timeout in seconds to execute the run command.')
c.argument('output_blob_uri', help='Specify the Azure storage blob where script output stream will be uploaded.')
c.argument('error_blob_uri', help='Specify the Azure storage blob where script error stream will be uploaded.')
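# Illustrative sketch of 'vm run-command create' with the arguments registered above; names and the script
# body are placeholders.
#   az vm run-command create -g MyResourceGroup --vm-name MyVm --run-command-name MyRunCommand \
#       --script "echo Hello from run-command"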
with self.argument_context('vm run-command delete') as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
with self.argument_context('vm run-command list') as c:
c.argument('vm_name', run_cmd_vm_name, id_part=None)
c.argument('expand', help='The expand expression to apply on the operation.')
c.argument('location', arg_type=get_location_type(self.cli_ctx))
with self.argument_context('vm run-command show') as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
c.argument('expand', help='The expand expression to apply on the operation.', deprecate_info=c.deprecate(hide=True))
c.argument('instance_view', action='store_true', help='Track the run command progress')
c.argument('location', arg_type=get_location_type(self.cli_ctx))
c.argument('command_id', help='The command id.')
with self.argument_context('vm run-command wait') as c:
c.argument('vm_name', run_cmd_vm_name)
c.argument('run_command_name', run_cmd_name_type)
c.argument('expand', help='The expand expression to apply on the operation.', deprecate_info=c.deprecate(hide=True))
c.argument('instance_view', action='store_true', help='Track the run command progress')
c.argument('location', arg_type=get_location_type(self.cli_ctx))
c.argument('command_id', help='The command id.')
run_cmd_vmss_name = CLIArgumentType(options_list=['--vmss-name'], help='The name of the VM scale set.')
for scope in ['create', 'update']:
with self.argument_context('vmss run-command {}'.format(scope)) as c:
c.argument('vmss_name', run_cmd_vmss_name)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('run_command_name', run_cmd_name_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('script', help='Contents of the PowerShell or bash script to execute on the VM.')
c.argument('script_uri',
help='URI of the script to execute on the VM. The URI can be any link accessible from the VM, or a storage blob without SAS. If the subscription has access to the storage blob, a SAS will be auto-generated.')
c.argument('command_id',
help='Specify a command id of predefined script. All command ids can be listed using "list" command.')
c.argument('parameters', nargs='+', help='Set custom parameters in a name-value pair.')
c.argument('protected_parameters', nargs='+',
help='Set custom parameters in a name-value pair. These parameters will be encrypted during transmission and will not be logged.')
c.argument('async_execution', arg_type=get_three_state_flag(), help='Optional. If set to true, provisioning '
'will complete as soon as the script starts and will not wait for script to complete.')
c.argument('run_as_user',
help='By default, the script process runs under the system/root user. Specify a custom user to host the process.')
c.argument('run_as_password',
help='Password, if needed, for using the run-as-user parameter. It will be encrypted and not logged.')
c.argument('timeout_in_seconds', type=int, help='The timeout in seconds to execute the run command.')
c.argument('output_blob_uri', help='Uri (without SAS) to an append blob where the script output will be uploaded.')
c.argument('error_blob_uri', help='Uri (without SAS) to an append blob where the script error stream will be uploaded.')
with self.argument_context('vmss run-command delete') as c:
c.argument('vmss_name', run_cmd_vmss_name)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('run_command_name', run_cmd_name_type)
with self.argument_context('vmss run-command list') as c:
c.argument('vmss_name', run_cmd_vmss_name, id_part=None)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('expand', help='The expand expression to apply on the operation.')
with self.argument_context('vmss run-command show') as c:
c.argument('vmss_name', run_cmd_vmss_name)
c.argument('instance_id', help='The instance ID of the virtual machine.')
c.argument('run_command_name', run_cmd_name_type)
c.argument('expand', help='The expand expression to apply on the operation.', deprecate_info=c.deprecate(hide=True))
c.argument('instance_view', action='store_true', help='Track the run command progress')
for scope in ['vm identity assign', 'vmss identity assign']:
with self.argument_context(scope) as c:
c.argument('assign_identity', options_list=['--identities'], nargs='*', help="Space-separated identities to assign. Use '{0}' to refer to the system assigned identity. Default: '{0}'".format(MSI_LOCAL_ID))
c.argument('vm_name', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
for scope in ['vm identity remove', 'vmss identity remove']:
with self.argument_context(scope) as c:
c.argument('identities', nargs='+', help="Space-separated identities to remove. Use '{0}' to refer to the system assigned identity. Default: '{0}'".format(MSI_LOCAL_ID))
c.argument('vm_name', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
for scope in ['vm identity show', 'vmss identity show']:
with self.argument_context(scope) as c:
c.argument('vm_name', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
for scope in ['vm application set', 'vmss application set']:
with self.argument_context(scope) as c:
c.argument('vm', existing_vm_name)
c.argument('vmss_name', vmss_name_type)
c.argument('application_version_ids', options_list=['--app-version-ids'], nargs='*', help="Space-separated application version ids to set to VM.")
c.argument('order_applications', action='store_true', help='Whether to set an order index for each gallery application; the order index starts from 1.')
c.argument('application_configuration_overrides', options_list=['--app-config-overrides'], nargs='*',
help='Space-separated application configuration overrides, one per application version ID. '
'It should have the same number of items as the application version IDs. Null is available for an application '
'which does not have a configuration override.')
for scope in ['vm application list', 'vmss application list']:
with self.argument_context(scope) as c:
c.argument('vm_name', options_list=['--vm-name', '--name', '-n'], arg_type=existing_vm_name, id_part=None)
c.argument('vmss_name', vmss_name_type, id_part=None)
for scope in ['vm create', 'vmss create']:
with self.argument_context(scope) as c:
c.argument('location', get_location_type(self.cli_ctx), help='Location in which to create VM and related resources. If default location is not configured, will default to the resource group\'s location')
c.argument('tags', tags_type)
c.argument('no_wait', help='Do not wait for the long-running operation to finish.')
c.argument('validate', options_list=['--validate'], help='Generate and validate the ARM template without creating any resources.', action='store_true')
c.argument('size', help='The VM size to be created. See https://azure.microsoft.com/pricing/details/virtual-machines/ for size info.')
c.argument('image', completer=get_urn_aliases_completion_list)
c.argument('custom_data', help='Custom init script file or text (cloud-init, cloud-config, etc..)', completer=FilesCompleter(), type=file_type)
c.argument('secrets', multi_ids_type, help='One or many Key Vault secrets as JSON strings or files via `@{path}` containing `[{ "sourceVault": { "id": "value" }, "vaultCertificates": [{ "certificateUrl": "value", "certificateStore": "cert store name (only on windows)"}] }]`', type=file_type, completer=FilesCompleter())
c.argument('assign_identity', nargs='*', arg_group='Managed Service Identity', help="Accept system or user assigned identities separated by spaces. Use '[system]' to refer to the system-assigned identity, or a resource ID to refer to a user-assigned identity. Check out help for more examples")
c.ignore('aux_subscriptions')
c.argument('edge_zone', edge_zone_type)
with self.argument_context(scope, arg_group='Authentication') as c:
c.argument('generate_ssh_keys', action='store_true', help='Generate SSH public and private key files if missing. The keys will be stored in the ~/.ssh directory')
c.argument('admin_username', help='Username for the VM. Default value is current username of OS. If the default value is system reserved, then default value will be set to azureuser. Please refer to https://docs.microsoft.com/rest/api/compute/virtualmachines/createorupdate#osprofile to get a full list of reserved values.')
c.argument('admin_password', help="Password for the VM if authentication type is 'Password'.")
c.argument('ssh_key_value', options_list=['--ssh-key-values'], completer=FilesCompleter(), type=file_type, nargs='+')
c.argument('ssh_dest_key_path', help='Destination file path on the VM for the SSH key. If the file already exists, the specified key(s) are appended to the file. Destination path for SSH public keys is currently limited to its default value "/home/username/.ssh/authorized_keys" due to a known issue in Linux provisioning agent.')
c.argument('authentication_type', help='Type of authentication to use with the VM. Defaults to password for Windows and SSH public key for Linux. "all" enables both ssh and password authentication. ', arg_type=get_enum_type(['ssh', 'password', 'all']))
with self.argument_context(scope, arg_group='Storage') as c:
if DiskStorageAccountTypes:
allowed_values = ", ".join([sku.value for sku in DiskStorageAccountTypes])
else:
allowed_values = ", ".join(['Premium_LRS', 'Standard_LRS'])
usage = 'Usage: [--storage-sku SKU | --storage-sku ID=SKU ID=SKU ID=SKU...], where each ID is "os" or a 0-indexed lun.'
allowed_values = 'Allowed values: {}.'.format(allowed_values)
storage_sku_help = 'The SKU of the storage account with which to persist VM. Use a singular sku that would be applied across all disks, ' \
'or specify individual disks. {} {}'.format(usage, allowed_values)
c.argument('os_disk_name', help='The name of the new VM OS disk.')
c.argument('os_type', help='Type of OS installed on a custom VHD. Do not use when specifying a URN or URN alias.', arg_type=get_enum_type(['windows', 'linux']))
c.argument('storage_account', help="Only applicable when used with `--use-unmanaged-disk`. The name to use when creating a new storage account or referencing an existing one. If omitted, an appropriate storage account in the same resource group and location will be used, or a new one will be created.")
c.argument('storage_sku', nargs='+', help=storage_sku_help)
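# Illustrative sketch of the per-disk --storage-sku syntax described by storage_sku_help above; the SKU
# names are taken from the fallback allowed values and the other values are placeholders.
#   az vm create -g MyResourceGroup -n MyVm --image <image-urn> --data-disk-sizes-gb 64 128 \
#       --storage-sku os=Premium_LRS 0=Standard_LRS 1=Standard_LRS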
c.argument('storage_container_name', help="Only applicable when used with `--use-unmanaged-disk`. Name of the storage container for the VM OS disk. Default: vhds")
c.ignore('os_publisher', 'os_offer', 'os_sku', 'os_version', 'storage_profile')
c.argument('use_unmanaged_disk', action='store_true', help='Do not use managed disk to persist VM')
c.argument('os_disk_size_gb', type=int, help='OS disk size in GB to create.')
c.argument('data_disk_sizes_gb', nargs='+', type=int, help='space-separated empty managed data disk sizes in GB to create')
c.ignore('disk_info', 'storage_account_type', 'public_ip_address_type', 'nsg_type', 'nic_type', 'vnet_type', 'load_balancer_type', 'app_gateway_type')
c.argument('os_caching', options_list=[self.deprecate(target='--storage-caching', redirect='--os-disk-caching', hide=True), '--os-disk-caching'], help='Storage caching type for the VM OS disk. Default: ReadWrite', arg_type=get_enum_type(CachingTypes))
c.argument('data_caching', options_list=['--data-disk-caching'], nargs='+',
help="storage caching type for data disk(s), including 'None', 'ReadOnly', 'ReadWrite', etc. Use a singular value to apply on all disks, or use `<lun>=<vaule1> <lun>=<value2>` to configure individual disk")
c.argument('ultra_ssd_enabled', ultra_ssd_enabled_type)
c.argument('ephemeral_os_disk', arg_type=get_three_state_flag(), min_api='2018-06-01',
help='Allows you to create an OS disk directly on the host node, providing local disk performance and faster VM/VMSS reimage time.', is_preview=True)
c.argument('ephemeral_os_disk_placement', arg_type=ephemeral_placement_type,
help='Only applicable when used with `--ephemeral-os-disk`. Allows you to choose the Ephemeral OS disk provisioning location.', is_preview=True)
c.argument('os_disk_encryption_set', min_api='2019-07-01', help='Name or ID of disk encryption set for OS disk.')
c.argument('data_disk_encryption_sets', nargs='+', min_api='2019-07-01',
help='Names or IDs (space delimited) of disk encryption sets for data disks.')
c.argument('data_disk_iops', min_api='2019-07-01', nargs='+', type=int, help='Specify the Read-Write IOPS (space delimited) for the managed disk. Should be used only when StorageAccountType is UltraSSD_LRS. If not specified, a default value would be assigned based on diskSizeGB.')
c.argument('data_disk_mbps', min_api='2019-07-01', nargs='+', type=int, help='Specify the bandwidth in MB per second (space delimited) for the managed disk. Should be used only when StorageAccountType is UltraSSD_LRS. If not specified, a default value would be assigned based on diskSizeGB.')
c.argument('specialized', arg_type=get_three_state_flag(), help='Indicate whether the source image is specialized.')
c.argument('encryption_at_host', arg_type=get_three_state_flag(), help='Enable Host Encryption for the VM or VMSS. This will enable the encryption for all the disks including Resource/Temp disk at host itself.')
c.argument('os_disk_delete_option', arg_type=get_enum_type(self.get_models('DiskDeleteOptionTypes')), min_api='2021-03-01',
help='Specify the behavior of the managed disk when the VM gets deleted, i.e. whether the managed disk is deleted or detached.')
c.argument('data_disk_delete_option', options_list=['--data-disk-delete-option', self.deprecate(target='--data-delete-option', redirect='--data-disk-delete-option', hide=True)],
nargs='+', min_api='2021-03-01',
help='Specify whether data disk should be deleted or detached upon VM deletion.')
with self.argument_context(scope, arg_group='Network') as c:
c.argument('vnet_name', help='Name of the virtual network when creating a new one or referencing an existing one.')
c.argument('vnet_address_prefix', help='The IP address prefix to use when creating a new VNet in CIDR format.')
c.argument('subnet', help='The name of the subnet when creating a new VNet or referencing an existing one. Can also reference an existing subnet by ID. If both vnet-name and subnet are omitted, an appropriate VNet and subnet will be selected automatically, or a new one will be created.')
c.argument('subnet_address_prefix', help='The subnet IP address prefix to use when creating a new VNet in CIDR format.')
c.argument('nics', nargs='+', help='Names or IDs of existing NICs to attach to the VM. The first NIC will be designated as primary. If omitted, a new NIC will be created. If an existing NIC is specified, do not specify subnet, VNet, public IP or NSG.')
c.argument('private_ip_address', help='Static private IP address (e.g. 10.0.0.5).')
c.argument('public_ip_address', help='Name of the public IP address when creating one (default) or referencing an existing one. Can also reference an existing public IP by ID or specify "" for None (\'""\' in Azure CLI using PowerShell or --% operator).')
c.argument('public_ip_address_allocation', help=None, default=None, arg_type=get_enum_type(['dynamic', 'static']))
c.argument('public_ip_address_dns_name', help='Globally unique DNS name for a newly created public IP.')
if self.supported_api_version(min_api='2017-08-01', resource_type=ResourceType.MGMT_NETWORK):
PublicIPAddressSkuName = self.get_models('PublicIPAddressSkuName', resource_type=ResourceType.MGMT_NETWORK)
c.argument('public_ip_sku', help='Public IP SKU. It is set to Basic by default. The public IP is supported to be created on edge zone only when it is \'Standard\'',
default=None, arg_type=get_enum_type(PublicIPAddressSkuName))
c.argument('nic_delete_option', nargs='+', min_api='2021-03-01',
help='Specify what happens to the network interface when the VM is deleted. Use a singular '
'value to apply on all resources, or use <Name>=<Value> to configure '
'the delete behavior for individual resources. Possible options are Delete and Detach.')
with self.argument_context(scope, arg_group='Marketplace Image Plan') as c:
c.argument('plan_name', help='plan name')
c.argument('plan_product', help='plan product')
c.argument('plan_publisher', help='plan publisher')
c.argument('plan_promotion_code', help='plan promotion code')
for scope in ['vm create', 'vmss create', 'vm identity assign', 'vmss identity assign']:
with self.argument_context(scope) as c:
arg_group = 'Managed Service Identity' if scope.split()[-1] == 'create' else None
c.argument('identity_scope', options_list=['--scope'], arg_group=arg_group, help="Scope that the system assigned identity can access")
c.argument('identity_role', options_list=['--role'], arg_group=arg_group, help="Role name or id the system assigned identity will have")
c.ignore('identity_role_id')
with self.argument_context('vm auto-shutdown') as c:
c.argument('off', action='store_true', help='Turn off auto-shutdown for VM. Configuration will be cleared.')
c.argument('email', help='The email recipient to send notifications to (can be a list of semi-colon separated email addresses)')
c.argument('time', help='The UTC time of day the schedule will occur every day. Format: hhmm. Example: 1730')
c.argument('webhook', help='The webhook URL to which the notification will be sent')
c.argument('location', validator=get_default_location_from_resource_group)
for scope in ['vm diagnostics', 'vmss diagnostics']:
with self.argument_context(scope) as c:
c.argument('version', help='Version of the diagnostics extension. Will use the latest if not specified')
c.argument('settings', help='json string or a file path, which defines data to be collected.', type=validate_file_or_dict, completer=FilesCompleter())
c.argument('protected_settings', help='json string or a file path containing private configurations such as storage account keys, etc.', type=validate_file_or_dict, completer=FilesCompleter())
c.argument('is_windows_os', action='store_true', help='for Windows VMs')
for scope in ['vm encryption', 'vmss encryption']:
with self.argument_context(scope) as c:
c.argument('volume_type', help='Type of volume that the encryption operation is performed on', arg_type=get_enum_type(['DATA', 'OS', 'ALL']))
c.argument('force', action='store_true', help='continue by ignoring client side validation errors')
c.argument('disk_encryption_keyvault', help='Name or ID of the key vault where the generated encryption key will be placed.')
c.argument('key_encryption_key', help='Key vault key name or URL used to encrypt the disk encryption key.')
c.argument('key_encryption_keyvault', help='Name or ID of the key vault containing the key encryption key used to encrypt the disk encryption key. If missing, CLI will use `--disk-encryption-keyvault`.')
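# Illustrative sketch of 'vm encryption enable' with the shared encryption arguments above; key vault and
# VM names are placeholders.
#   az vm encryption enable -g MyResourceGroup -n MyVm --disk-encryption-keyvault MyKeyVault --volume-type DATA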
for scope in ['vm extension', 'vmss extension']:
with self.argument_context(scope) as c:
c.argument('publisher', help='The name of the extension publisher.')
c.argument('settings', type=validate_file_or_dict, help='Extension settings in JSON format. A JSON file path is also accepted.')
c.argument('protected_settings', type=validate_file_or_dict, help='Protected settings in JSON format for sensitive information like credentials. A JSON file path is also accepted.')
c.argument('version', help='The version of the extension. To pin extension version to this value, please specify --no-auto-upgrade-minor-version.')
c.argument('enable_auto_upgrade', arg_type=get_three_state_flag(),
help='Indicate the extension should be automatically upgraded by the platform if there is a newer version of the extension available.')
with self.argument_context('vm extension set') as c:
c.argument('vm_extension_name', name_arg_type,
completer=get_resource_name_completion_list('Microsoft.Compute/virtualMachines/extensions'),
help='Name of the extension.', id_part=None)
c.argument('force_update', action='store_true', help='Force the update even if the extension configuration has not changed.')
c.argument('extension_instance_name', extension_instance_name_type)
with self.argument_context('vmss extension set', min_api='2017-12-01') as c:
c.argument('force_update', action='store_true', help='Force the update even if the extension configuration has not changed.')
c.argument('extension_instance_name', extension_instance_name_type)
c.argument('provision_after_extensions', nargs='+', help='Space-separated list of extension names after which this extension should be provisioned. These extensions must already be set on the vm.')
for scope in ['vm extension image', 'vmss extension image']:
with self.argument_context(scope) as c:
c.argument('image_location', options_list=['--location', '-l'], help='Image location.')
c.argument('name', help='Image name', id_part=None)
c.argument('publisher_name', options_list=['--publisher', '-p'], help='Image publisher name')
c.argument('type', options_list=['--name', '-n'], help='Name of the extension')
c.argument('latest', action='store_true', help='Show the latest version only.')
c.argument('version', help='Extension version')
c.argument('orderby', help="the $orderby odata query option")
c.argument('top', help='the $top odata query option')
for scope in ['vm create', 'vm update', 'vmss create', 'vmss update']:
with self.argument_context(scope) as c:
c.argument('license_type', license_type)
c.argument('priority', resource_type=ResourceType.MGMT_COMPUTE, min_api='2019-03-01',
arg_type=get_enum_type(self.get_models('VirtualMachinePriorityTypes'), default=None),
help="Priority. Use 'Spot' to run short-lived workloads in a cost-effective way. 'Low' enum will be deprecated in the future. Please use 'Spot' to deploy Azure spot VM and/or VMSS. Default to Regular.")
c.argument('max_price', min_api='2019-03-01', type=float, is_preview=True,
help='The maximum price (in US Dollars) you are willing to pay for a Spot VM/VMSS. -1 indicates that the Spot VM/VMSS should not be evicted for price reasons')
c.argument('capacity_reservation_group', options_list=['--capacity-reservation-group', '--crg'],
help='The ID or name of the capacity reservation group that is used to allocate. Pass in "None" to disassociate the capacity reservation group. Please note that if you want to delete a VM/VMSS that has been associated with capacity reservation group, you need to disassociate the capacity reservation group first.',
min_api='2021-04-01', is_preview=True)
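# Illustrative sketch of the Spot-related arguments above; names and the image are placeholders, and
# --max-price -1 means the instances will not be evicted for price reasons.
#   az vmss create -g MyResourceGroup -n MySpotVmss --image <image-urn> --priority Spot --max-price -1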
with self.argument_context('vm update') as c:
c.argument('license_type', license_type)
c.argument('user_data', help='UserData for the VM. It can be passed in as file or string. If empty string is passed in, the existing value will be deleted.', completer=FilesCompleter(), type=file_type, min_api='2021-03-01')
with self.argument_context('vmss create') as c:
c.argument('priority', resource_type=ResourceType.MGMT_COMPUTE, min_api='2017-12-01',
arg_type=get_enum_type(self.get_models('VirtualMachinePriorityTypes'), default=None),
help="Priority. Use 'Spot' to run short-lived workloads in a cost-effective way. 'Low' enum will be deprecated in the future. Please use 'Spot' to deploy Azure spot VM and/or VMSS. Default to Regular.")
with self.argument_context('sig') as c:
c.argument('gallery_name', options_list=['--gallery-name', '-r'], help='gallery name')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], help='gallery image definition')
c.argument('gallery_image_version', options_list=['--gallery-image-version', '-e'], help='gallery image version')
for scope in ['sig show', 'sig image-definition show', 'sig image-definition delete']:
with self.argument_context(scope) as c:
c.argument('gallery_name', options_list=['--gallery-name', '-r'], id_part='name', help='gallery name')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], id_part='child_name_1', help='gallery image definition')
with self.argument_context('sig list-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx))
c.argument('shared_to', shared_to_type)
with self.argument_context('sig show-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
for scope in ['sig share add', 'sig share remove']:
with self.argument_context(scope) as c:
c.argument('gallery_name', type=str, help='The name of the Shared Image Gallery.', id_part='name')
c.argument('subscription_ids', nargs='+', help='A list of subscription ids to share the gallery.')
c.argument('tenant_ids', nargs='+', help='A list of tenant ids to share the gallery.')
with self.argument_context('sig share add') as c:
c.argument('op_type', default='Add', deprecate_info=c.deprecate(hide=True),
help='distinguish add operation and remove operation')
with self.argument_context('sig share remove') as c:
c.argument('op_type', default='Remove', deprecate_info=c.deprecate(hide=True),
help='distinguish add operation and remove operation')
with self.argument_context('sig share reset') as c:
c.argument('gallery_name', type=str, help='The name of the Shared Image Gallery.', id_part='name')
with self.argument_context('sig image-definition create') as c:
c.argument('offer', options_list=['--offer', '-f'], help='image offer')
c.argument('sku', options_list=['--sku', '-s'], help='image sku')
c.argument('publisher', options_list=['--publisher', '-p'], help='image publisher')
c.argument('os_type', arg_type=get_enum_type(['Windows', 'Linux']), help='the type of the OS that is included in the disk if creating a VM from user-image or a specialized VHD')
c.argument('os_state', arg_type=get_enum_type(self.get_models('OperatingSystemStateTypes')), help="This property allows the user to specify whether the virtual machines created under this image are 'Generalized' or 'Specialized'.")
c.argument('hyper_v_generation', arg_type=get_enum_type(self.get_models('HyperVGenerationTypes')), help='The hypervisor generation of the Virtual Machine. Applicable to OS disks only.')
c.argument('minimum_cpu_core', type=int, arg_group='Recommendation', help='minimum cpu cores')
c.argument('maximum_cpu_core', type=int, arg_group='Recommendation', help='maximum cpu cores')
c.argument('minimum_memory', type=int, arg_group='Recommendation', help='minimum memory in MB')
c.argument('maximum_memory', type=int, arg_group='Recommendation', help='maximum memory in MB')
c.argument('plan_publisher', help='plan publisher', arg_group='Purchase plan')
c.argument('plan_name', help='plan name', arg_group='Purchase plan')
c.argument('plan_product', help='plan product', arg_group='Purchase plan')
c.argument('eula', help='The Eula agreement for the gallery image')
c.argument('privacy_statement_uri', help='The privacy statement uri')
c.argument('release_note_uri', help='The release note uri')
c.argument('end_of_life_date', help="the end of life date, e.g. '2020-12-31'")
c.argument('disallowed_disk_types', nargs='*', help='disk types which would not work with the image, e.g., Standard_LRS')
c.argument('features', help='A list of gallery image features. E.g. "IsSecureBootSupported=true IsMeasuredBootSupported=false"')
with self.argument_context('sig image-definition list-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('shared_to', shared_to_type)
c.argument('marker', arg_type=marker_type)
c.argument('show_next_marker', action='store_true', help='Show nextMarker in result when specified.')
with self.argument_context('sig image-definition show-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], type=str, help='The name '
'of the Shared Gallery Image Definition from which the Image Versions are to be listed.',
id_part='child_name_2')
with self.argument_context('sig create') as c:
c.argument('description', help='the description of the gallery')
c.argument('permissions', arg_type=get_enum_type(GallerySharingPermissionTypes), arg_group='Sharing Profile',
min_api='2020-09-30', is_experimental=True,
help='This property allows you to specify the permission of sharing gallery.')
c.argument('soft_delete', arg_type=get_three_state_flag(), min_api='2021-03-01', is_preview=True,
help='Enable soft-deletion for resources in this gallery, '
'allowing them to be recovered within retention time.')
with self.argument_context('sig update') as c:
c.ignore('gallery')
c.argument('permissions', arg_type=get_enum_type(GallerySharingPermissionTypes), arg_group='Sharing Profile',
min_api='2020-09-30', is_experimental=True,
help='This property allows you to specify the permission of sharing gallery.')
c.argument('soft_delete', arg_type=get_three_state_flag(), min_api='2021-03-01', is_preview=True,
help='Enable soft-deletion for resources in this gallery, '
'allowing them to be recovered within retention time.')
with self.argument_context('sig image-definition create') as c:
c.argument('description', help='the description of the gallery image definition')
with self.argument_context('sig image-definition update') as c:
c.ignore('gallery_image')
with self.argument_context('sig image-version') as c:
deprecated_option = c.deprecate(target='--gallery-image-version-name', redirect='--gallery-image-version', hide=True, expiration="3.0.0")
c.argument('gallery_image_version_name', options_list=['--gallery-image-version', '-e', deprecated_option],
help='Gallery image version in semantic version pattern. The allowed characters are digit and period. Digits must be within the range of a 32-bit integer, e.g. `<MajorVersion>.<MinorVersion>.<Patch>`')
with self.argument_context('sig image-version create', resource_type=ResourceType.MGMT_COMPUTE, operation_group='gallery_image_versions') as c:
c.argument('gallery_image_version', options_list=['--gallery-image-version', '-e'],
help='Gallery image version in semantic version pattern. The allowed characters are digit and period. Digits must be within the range of a 32-bit integer, e.g. `<MajorVersion>.<MinorVersion>.<Patch>`')
c.argument('description', help='the description of the gallery image version')
c.argument('managed_image', help='image name(if in the same resource group) or resource id')
c.argument('os_snapshot', help='Name or ID of OS disk snapshot')
c.argument('data_snapshots', nargs='+', help='Names or IDs (space-delimited) of data disk snapshots')
c.argument('data_snapshot_luns', nargs='+', help='Logical unit numbers (space-delimited) of data disk snapshots')
c.argument('exclude_from_latest', arg_type=get_three_state_flag(), help='The flag means that if it is set to true, people deploying VMs with version omitted will not use this version.')
c.argument('version', help='image version')
c.argument('end_of_life_date', help="the end of life date, e.g. '2020-12-31'")
c.argument('storage_account_type', help="The default storage account type to be used per region. To set regional storage account types, use --target-regions",
arg_type=get_enum_type(["Standard_LRS", "Standard_ZRS", "Premium_LRS"]), min_api='2019-03-01')
c.argument('target_region_encryption', nargs='+',
help='Space-separated list of customer managed keys for encrypting the OS and data disks in the gallery artifact for each region. Format for each region: `<os_des>,<lun1>,<lun1_des>,<lun2>,<lun2_des>`. Use "null" as a placeholder.')
c.argument('os_vhd_uri', help='Source VHD URI of OS disk')
c.argument('os_vhd_storage_account', help='Name or ID of storage account of source VHD URI of OS disk')
c.argument('data_vhds_uris', nargs='+', help='Source VHD URIs (space-delimited) of data disks')
c.argument('data_vhds_luns', nargs='+', help='Logical unit numbers (space-delimited) of source VHD URIs of data disks')
c.argument('data_vhds_storage_accounts', options_list=['--data-vhds-storage-accounts', '--data-vhds-sa'], nargs='+', help='Names or IDs (space-delimited) of storage accounts of source VHD URIs of data disks')
c.argument('replication_mode', min_api='2021-07-01', arg_type=get_enum_type(ReplicationMode), help='Optional parameter which specifies the mode to be used for replication. This property is not updatable.')
with self.argument_context('sig image-version list-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], type=str, help='The name '
'of the Shared Gallery Image Definition from which the Image Versions are to be listed.',
id_part='child_name_2')
c.argument('shared_to', shared_to_type)
c.argument('marker', arg_type=marker_type)
c.argument('show_next_marker', action='store_true', help='Show nextMarker in result when specified.')
with self.argument_context('sig image-version show') as c:
c.argument('expand', help="The expand expression to apply on the operation, e.g. 'ReplicationStatus'")
with self.argument_context('sig image-version show-shared') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('gallery_unique_name', type=str, help='The unique name of the Shared Gallery.',
id_part='child_name_1')
c.argument('gallery_image_name', options_list=['--gallery-image-definition', '-i'], type=str, help='The name '
'of the Shared Gallery Image Definition from which the Image Versions are to be listed.',
id_part='child_name_2')
c.argument('gallery_image_version_name', options_list=['--gallery-image-version', '-e'], type=str, help='The '
'name of the gallery image version to be created. Needs to follow semantic version name pattern: '
'The allowed characters are digit and period. Digits must be within the range of a 32-bit integer. '
'Format: <MajorVersion>.<MinorVersion>.<Patch>', id_part='child_name_3')
for scope in ['sig image-version create', 'sig image-version update']:
with self.argument_context(scope) as c:
c.argument('target_regions', nargs='*', validator=process_gallery_image_version_namespace,
help='Space-separated list of regions and their replica counts. Use `<region>[=<replica count>][=<storage account type>]` to optionally set the replica count and/or storage account type for each region. '
'If a replica count is not specified, the default replica count will be used. If a storage account type is not specified, the default storage account type will be used')
c.argument('replica_count', help='The default number of replicas to be created per region. To set regional replication counts, use --target-regions', type=int)
# endregion
# region Gallery applications
with self.argument_context('sig gallery-application') as c:
c.argument('gallery_application_name', options_list=['--name', '-n', '--application-name'],
help='The name of the gallery Application')
with self.argument_context('sig gallery-application create') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('description', help='The description of this gallery Application Definition resource. '
'This property is updatable.')
c.argument('os_type', arg_type=get_enum_type(['Windows', 'Linux']), help='This property allows you '
'to specify the supported type of the OS that application is built for. <br><br> Possible values '
'are: <br><br> **Windows** <br><br> **Linux**')
with self.argument_context('sig gallery-application update') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('description', help='The description of this gallery Application Definition resource. '
'This property is updatable.')
with self.argument_context('sig gallery-application version') as c:
c.argument('gallery_application_name', options_list=['--application-name'],
help='The name of the gallery Application')
c.argument('gallery_application_version_name', options_list=['--name', '-n', '--version-name'],
help='The name of the gallery Application Version')
for scope in ['create', 'update']:
with self.argument_context('sig gallery-application version {}'.format(scope)) as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('package_file_link', help='The mediaLink of the artifact, must be a readable storage page blob.')
c.argument('install_command', help='The path and arguments to install the gallery application.')
c.argument('remove_command', help='The path and arguments to remove the gallery application.')
c.argument('update_command', help='The path and arguments to update the gallery application. If not present,'
' then update operation will invoke remove command on the previous version '
'and install command on the current version of the gallery application.')
c.argument('target_regions', type=validate_file_or_dict, help='The target regions where the Image Version is '
'going to be replicated to. This property is updatable. Expected value: '
'json-string/json-file/@json-file.')
c.argument('default_file_link', help='The default configuration link of the artifact, must be a readable storage page blob.')
c.argument('exclude_from', arg_type=get_three_state_flag(), help='If set to true, Virtual Machines '
'deployed from the latest version of the Image Definition won\'t use this Image Version.',
arg_group='Publishing Profile')
c.argument('end_of_life_date', help='The end of life date of the gallery image version. This property can be '
'used for decommissioning purposes. This property is updatable.', arg_group='Publishing Profile')
# endregion
# region Proximity Placement Group
with self.argument_context('ppg', min_api='2018-04-01') as c:
c.argument('proximity_placement_group_name', arg_type=name_arg_type, help="The name of the proximity placement group.")
with self.argument_context('ppg create', min_api='2018-04-01') as c:
c.argument('ppg_type', options_list=['--type', '-t'], help="The type of the proximity placement group. Allowed values: Standard.")
c.argument('tags', tags_type)
with self.argument_context('ppg show', min_api='2019-07-01') as c:
c.argument('include_colocation_status', action='store_true', help='Enable fetching the colocation status of all the resources in the proximity placement group.')
for scope, item in [('vm create', 'VM'), ('vmss create', 'VMSS'),
('vm availability-set create', 'availability set'),
('vm update', 'VM'), ('vmss update', 'VMSS'),
('vm availability-set update', 'availability set')]:
with self.argument_context(scope, min_api='2018-04-01') as c:
c.argument('proximity_placement_group', options_list=['--ppg'], help="The name or ID of the proximity placement group the {} should be associated with.".format(item),
validator=_validate_proximity_placement_group) # only availability set does not have a command level validator, so this should be added.
# endregion
# region VM Monitor
with self.argument_context('vm monitor log show') as c:
c.argument('analytics_query', options_list=['--analytics-query', '-q'], help="Query to execute over Log Analytics data.")
c.argument('timespan', help="Timespan over which to query. Defaults to querying all available data.")
with self.argument_context('vm monitor metrics') as c:
c.argument('metricnamespace', options_list=['--namespace'],
help='Namespace to query metric definitions for.')
with self.argument_context('vm monitor metrics tail') as c:
from azure.mgmt.monitor.models import AggregationType
c.extra('resource_group_name', required=True)
c.argument('resource', arg_type=existing_vm_name, help='Name or ID of a virtual machine', validator=validate_vm_name_for_monitor_metrics, id_part=None)
c.argument('metadata', action='store_true')
c.argument('dimension', nargs='*', validator=validate_metric_dimension)
c.argument('aggregation', arg_type=get_enum_type(t for t in AggregationType if t.name != 'none'), nargs='*')
c.argument('metrics', nargs='*')
c.argument('orderby',
                   help='Aggregation to use for sorting results and the direction of the sort. Only one order can be specified. Examples: sum asc')
c.argument('top', help='Max number of records to retrieve. Valid only if --filter used.')
c.argument('filters', options_list=['--filter'])
c.argument('metric_namespace', options_list=['--namespace'])
with self.argument_context('vm monitor metrics tail', arg_group='Time') as c:
c.argument('start_time', arg_type=get_datetime_type(help='Start time of the query.'))
c.argument('end_time', arg_type=get_datetime_type(help='End time of the query. Defaults to the current time.'))
c.argument('offset', type=get_period_type(as_timedelta=True))
c.argument('interval', arg_group='Time', type=get_period_type())
with self.argument_context('vm monitor metrics list-definitions') as c:
c.extra('resource_group_name', required=True)
c.argument('resource_uri', arg_type=existing_vm_name, help='Name or ID of a virtual machine', validator=validate_vm_name_for_monitor_metrics, id_part=None)
# endregion
# region disk encryption set
with self.argument_context('disk-encryption-set') as c:
c.argument('disk_encryption_set_name', disk_encryption_set_name)
c.argument('key_url', help='URL pointing to a key or secret in KeyVault.')
c.argument('source_vault', help='Name or ID of the KeyVault containing the key or secret.')
c.argument('encryption_type', arg_type=get_enum_type(['EncryptionAtRestWithPlatformKey', 'EncryptionAtRestWithCustomerKey', 'EncryptionAtRestWithPlatformAndCustomerKeys']),
help='The type of key used to encrypt the data of the disk. EncryptionAtRestWithPlatformKey: Disk is encrypted at rest with Platform managed key. It is the default encryption type. EncryptionAtRestWithCustomerKey: Disk is encrypted at rest with Customer managed key that can be changed and revoked by a customer. EncryptionAtRestWithPlatformAndCustomerKeys: Disk is encrypted at rest with 2 layers of encryption. One of the keys is Customer managed and the other key is Platform managed.')
c.argument('location', validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('enable_auto_key_rotation', arg_type=get_three_state_flag(), min_api='2020-12-01',
options_list=['--enable-auto-key-rotation', '--auto-rotation'],
help='Enable automatic rotation of keys.')
# endregion
# region DiskAccess
with self.argument_context('disk-access', resource_type=ResourceType.MGMT_COMPUTE, operation_group='disk_accesses') as c:
c.argument('disk_access_name', arg_type=name_arg_type, help='Name of the disk access resource.', id_part='name')
c.argument('location', validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
    # endregion
with self.argument_context('capacity reservation group') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), validator=get_default_location_from_resource_group)
c.argument('capacity_reservation_group_name', options_list=['--capacity-reservation-group', '-n'],
help='The name of the capacity reservation group.')
c.argument('tags', tags_type)
with self.argument_context('capacity reservation group create') as c:
c.argument('zones', zones_type, help='Availability Zones to use for this capacity reservation group. If not provided, the group supports only regional resources in the region. If provided, enforces each capacity reservation in the group to be in one of the zones.')
with self.argument_context('capacity reservation group show') as c:
c.argument('instance_view', action='store_true', options_list=['--instance-view', '-i'], help='Retrieve the list of instance views of the capacity reservations under the capacity reservation group which is a snapshot of the runtime properties of a capacity reservation that is managed by the platform and can change outside of control plane operations.')
with self.argument_context('capacity reservation group list') as c:
        c.argument('vm_instance', action='store_true', help='Retrieve the Virtual Machine Instances that are associated with the capacity reservation group in the response.')
        c.argument('vmss_instance', action='store_true', help='Retrieve the ScaleSet VM Instances that are associated with the capacity reservation group in the response.')
with self.argument_context('capacity reservation') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), validator=get_default_location_from_resource_group)
c.argument('capacity_reservation_group_name', options_list=['--capacity-reservation-group', '-c'],
help='The name of the capacity reservation group.')
c.argument('capacity_reservation_name', options_list=['--capacity-reservation-name', '-n'],
help='The name of the capacity reservation.')
c.argument('capacity', type=int, help='Specify the number of virtual machines in the scale set.')
c.argument('tags', tags_type)
with self.argument_context('capacity reservation create') as c:
c.argument('zone', zone_type, help='Availability Zone to use for this capacity reservation. The zone has to be single value and also should be part for the list of zones specified during the capacity reservation group creation. If not provided, the reservation supports only non-zonal deployments. If provided, enforces VM/VMSS using this capacity reservation to be in same zone.')
        c.argument('sku_name', options_list=['--sku', '-s'], required=True, help='The SKU of the resource for which capacity needs to be reserved. Currently VM Skus with the capability called "CapacityReservationSupported" set to true are supported. Refer to List Microsoft.Compute SKUs in a region (https://docs.microsoft.com/rest/api/compute/resourceskus/list) for supported values.')
with self.argument_context('capacity reservation show') as c:
c.argument('instance_view', action='store_true', options_list=['--instance-view', '-i'], help='Retrieve a snapshot of the runtime properties of the capacity reservation that is managed by the platform and can change outside of control plane operations.')
|
38,262 |
def formatting_options(settings: sublime.Settings) -> Dict[str, Any]:
# Build 4085 allows "trim_trailing_white_space_on_save" to be a string so we have to account for that in a
# backwards-compatible way.
trim_trailing_white_space = settings.get("trim_trailing_white_space_on_save")
if isinstance(trim_trailing_white_space, bool):
pass
elif isinstance(trim_trailing_white_space, str):
trim_trailing_white_space = (trim_trailing_white_space != "none")
else:
trim_trailing_white_space = False
return {
# Size of a tab in spaces.
"tabSize": settings.get("tab_size", 4),
# Prefer spaces over tabs.
"insertSpaces": settings.get("translate_tabs_to_spaces", False),
# Trim trailing whitespace on a line. (since 3.15)
"trimTrailingWhitespace": trim_trailing_white_space,
# Insert a newline character at the end of the file if one does not exist. (since 3.15)
"insertFinalNewline": settings.get("ensure_newline_at_eof_on_save", False),
        # Trim all newlines after the final newline at the end of the file. (since 3.15)
"trimFinalNewlines": settings.get("ensure_newline_at_eof_on_save", False)
}
|
def formatting_options(settings: sublime.Settings) -> Dict[str, Any]:
# Build 4085 allows "trim_trailing_white_space_on_save" to be a string so we have to account for that in a
# backwards-compatible way.
trim_trailing_white_space = settings.get("trim_trailing_white_space_on_save") not in (False, "none")
return {
# Size of a tab in spaces.
"tabSize": settings.get("tab_size", 4),
# Prefer spaces over tabs.
"insertSpaces": settings.get("translate_tabs_to_spaces", False),
# Trim trailing whitespace on a line. (since 3.15)
"trimTrailingWhitespace": trim_trailing_white_space,
# Insert a newline character at the end of the file if one does not exist. (since 3.15)
"insertFinalNewline": settings.get("ensure_newline_at_eof_on_save", False),
# Trim all newlines after the final newline at the end of the file. (sine 3.15)
"trimFinalNewlines": settings.get("ensure_newline_at_eof_on_save", False)
}
|
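As a quick check on the condensed test in the second version of formatting_options, here is a self-contained sketch (no Sublime Text API required; the sample values are assumptions about what the trim_trailing_white_space_on_save setting may hold) showing how the expression value not in (False, "none") classifies typical values:

# Hypothetical sample values; in the plugin they come from sublime.Settings.get(),
# which only exists inside Sublime Text.
samples = (True, False, "none", "all", "not_on_caret")
for value in samples:
    trim = value not in (False, "none")  # the condensed test from the modified version
    print(f"{value!r:>16} -> trimTrailingWhitespace={trim}")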
20,247 |
def run(*args):
if 'update' in args:
dry_run = False
else:
dry_run = True
for (old_pattern, new_pattern) in outdated_patterns:
matching_redirects = Redirect.objects.filter(old_path__startswith=old_pattern)
total = len(matching_redirects)
for redirect in matching_redirects:
old_path = redirect.old_path
updated_path = old_path.replace(old_pattern, new_pattern)
if dry_run:
print(f'{old_path} -> {updated_path}')
else:
print(f"updating {old_path} -> {updated_path}")
redirect.old_path = updated_path
redirect.save()
if dry_run:
print(f'\n\nSummary: Would update {total} redirects matching {old_pattern}')
print('run the following command to update them:')
print(' ./cfgov/manage.py runscript update_existing_redirects --script-args')
else:
print(f'\n\nSummary: Updated {total} redirects')
|
def run(*args):
if 'update' in args:
dry_run = False
else:
dry_run = True
for (old_pattern, new_pattern) in outdated_patterns:
matching_redirects = Redirect.objects.filter(old_path__startswith=old_pattern)
total = len(matching_redirects)
for redirect in matching_redirects:
old_path = redirect.old_path
updated_path = old_path.replace(old_pattern, new_pattern)
if dry_run:
print(f'{old_path} -> {updated_path}')
else:
print(f"updating {old_path} -> {updated_path}")
redirect.old_path = updated_path
redirect.save()
if dry_run:
print(f'\n\nSummary: Would update {total} redirects matching {old_pattern}')
print('run the following command to update them:')
print(' ./cfgov/manage.py runscript update_existing_redirects --script-args update')
else:
print(f'\n\nSummary: Updated {total} redirects')
|
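The only change in this redirect-update pair is the printed command hint, but the dry-run convention both versions rely on is easy to illustrate in isolation. A minimal sketch of just the argument handling (no Django Redirect model involved; the helper name is mine):

def is_dry_run(*args):
    # Mirrors the check at the top of run(): only the literal 'update' argument enables writes.
    return 'update' not in args

print(is_dry_run())          # True  -> preview which redirects would change
print(is_dry_run('update'))  # False -> actually save the updated redirects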
56,294 |
def set_data_classes(config: Config, labels: List[LabelEntity]):
# Save labels in data configs.
for subset in ('train', 'val', 'test'):
if subset == 'train':
cfg = get_data_train(config)
else: cfg = config.data[subset]
cfg.labels = labels
config.data[subset].labels = labels
# Set proper number of classes in model's detection heads.
head_names = ('mask_head', 'bbox_head', 'segm_head')
num_classes = len(labels)
if 'roi_head' in config.model:
for head_name in head_names:
if head_name in config.model.roi_head:
if isinstance(config.model.roi_head[head_name], List):
for head in config.model.roi_head[head_name]:
head.num_classes = num_classes
else:
config.model.roi_head[head_name].num_classes = num_classes
else:
for head_name in head_names:
if head_name in config.model:
config.model[head_name].num_classes = num_classes
# FIXME. ?
|
def set_data_classes(config: Config, labels: List[LabelEntity]):
# Save labels in data configs.
for subset in ('train', 'val', 'test'):
if subset == 'train':
cfg = get_data_train(config)
else:
cfg = config.data[subset]
cfg.labels = labels
config.data[subset].labels = labels
# Set proper number of classes in model's detection heads.
head_names = ('mask_head', 'bbox_head', 'segm_head')
num_classes = len(labels)
if 'roi_head' in config.model:
for head_name in head_names:
if head_name in config.model.roi_head:
if isinstance(config.model.roi_head[head_name], List):
for head in config.model.roi_head[head_name]:
head.num_classes = num_classes
else:
config.model.roi_head[head_name].num_classes = num_classes
else:
for head_name in head_names:
if head_name in config.model:
config.model[head_name].num_classes = num_classes
# FIXME. ?
|
19,899 |
def list_tools(cm, args):
from edalize.edatool import walk_tool_packages
_tp = list(walk_tool_packages())
maxlen = max(map(len, _tp))
for tool_name in _tp:
tool_class = get_edatool(tool_name)
desc = tool_class.get_doc(0)["description"]
print(f"{tool_name:{maxlen}} : {desc}")
|
def list_tools(cm, args):
from edalize.edatool import walk_tool_packages
_tp = list(walk_tool_packages())
maxlen = max(map(len, _tp))
for tool_name in _tp:
tool_class = get_edatool(tool_name)
desc = tool_class.get_doc(0)["description"]
print(f"{tool_name:{maxlen}}: {desc}")
|
36,688 |
def _init_module_attrs(spec, module, *, override=False):
    # The passed-in module may not support attribute assignment,
# in which case we simply don't set the attributes.
# __name__
if (override or getattr(module, '__name__', None) is None):
try:
module.__name__ = spec.name
except AttributeError:
pass
# __loader__
if override or getattr(module, '__loader__', None) is None:
try:
module.__loader__ = spec.loader
except AttributeError:
pass
# __package__
if override or getattr(module, '__package__', None) is None:
try:
module.__package__ = spec.parent
except AttributeError:
pass
# __spec__
try:
module.__spec__ = spec
except AttributeError:
pass
# __path__
if override or getattr(module, '__path__', None) is None:
if spec.submodule_search_locations is not None:
# XXX We should extend __path__ if it's already a list.
try:
module.__path__ = spec.submodule_search_locations
except AttributeError:
pass
# __file__/__cached__
if spec.has_location:
if override or getattr(module, '__file__', None) is None:
try:
module.__file__ = spec.origin
except AttributeError:
pass
if override or getattr(module, '__cached__', None) is None:
if spec.cached is not None:
try:
module.__cached__ = spec.cached
except AttributeError:
pass
# A backward compatibility hack.
if _bootstrap_external and isinstance(spec.loader, _bootstrap_external.NamespaceLoader):
# While the docs say that module.__file__ is not set for
# built-in modules, and the code below will avoid setting it if
# spec.has_location is false, this is incorrect for namespace
# packages. Namespace packages have no location, but their
# __spec__.origin is None, and thus their module.__file__
# should also be None for consistency. While a bit of a hack,
# this is the best place to ensure this consistency.
#
# See # https://docs.python.org/3/library/importlib.html#importlib.abc.Loader.load_module
# and bpo-32305
module.__file__ = None
return module
|
def _init_module_attrs(spec, module, *, override=False):
    # The passed-in module may not support attribute assignment,
# in which case we simply don't set the attributes.
# __name__
if (override or getattr(module, '__name__', None) is None):
try:
module.__name__ = spec.name
except AttributeError:
pass
# __loader__
if override or getattr(module, '__loader__', None) is None:
try:
module.__loader__ = spec.loader
except AttributeError:
pass
# __package__
if override or getattr(module, '__package__', None) is None:
try:
module.__package__ = spec.parent
except AttributeError:
pass
# __spec__
try:
module.__spec__ = spec
except AttributeError:
pass
# __path__
if override or getattr(module, '__path__', None) is None:
if spec.submodule_search_locations is not None:
# XXX We should extend __path__ if it's already a list.
try:
module.__path__ = spec.submodule_search_locations
except AttributeError:
pass
# __file__/__cached__
if spec.has_location:
if override or getattr(module, '__file__', None) is None:
try:
module.__file__ = spec.origin
except AttributeError:
pass
if override or getattr(module, '__cached__', None) is None:
if spec.cached is not None:
try:
module.__cached__ = spec.cached
except AttributeError:
pass
# A backward compatibility hack.
if _bootstrap_external and isinstance(spec.loader, _bootstrap_external.NamespaceLoader):
# While the docs say that module.__file__ is not set for
# built-in modules, and the code below will avoid setting it if
# spec.has_location is false, this is incorrect for namespace
# packages. Namespace packages have no location, but their
# __spec__.origin is None, and thus their module.__file__
# should also be None for consistency. While a bit of a hack,
# this is the best place to ensure this consistency.
#
# See https://docs.python.org/3/library/importlib.html#importlib.abc.Loader.load_module
# and bpo-32305
module.__file__ = None
return module
|
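_init_module_attrs is internal to importlib; the public entry point that exercises it is importlib.util.module_from_spec, which populates __name__, __loader__, __package__, __spec__ and related attributes as shown above. A short, standard-library-only usage sketch:

import importlib.util

# Build a fresh module object for an installed module and execute it.
spec = importlib.util.find_spec("json")
module = importlib.util.module_from_spec(spec)  # attribute initialisation happens here
spec.loader.exec_module(module)
print(module.__name__, module.__spec__.origin)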
37,620 |
def get_unique_backends():
"""Gets the unique backends that are available.
Returns:
list: Unique available backends.
Raises:
QiskitError: No backends available.
MissingOptionalLibraryError: If qiskit-ibmq-provider is not installed
"""
warnings.warn(
"The qiskit.IBMQ entrypoint and the qiskit-ibmq-provider package ("
"accessible from 'qiskit.providers.ibmq`) are deprecated and will be removed "
"in a future release. Instead you should use the qiskit-ibm-provider package "
"which is accesible from 'qiskit_ibm_provider'.",
DeprecationWarning,
stacklevel=3,
)
try:
from qiskit.providers.ibmq import IBMQ
except ImportError as ex:
raise MissingOptionalLibraryError(
libname="qiskit-ibmq-provider",
name="get_unique_backends",
pip_install="pip install qiskit-ibmq-provider",
) from ex
backends = []
for provider in IBMQ.providers():
for backend in provider.backends():
backends.append(backend)
unique_hardware_backends = []
unique_names = []
for back in backends:
if back.name() not in unique_names and not back.configuration().simulator:
unique_hardware_backends.append(back)
unique_names.append(back.name())
if not unique_hardware_backends:
raise QiskitError("No backends available.")
return unique_hardware_backends
|
def get_unique_backends():
"""Gets the unique backends that are available.
Returns:
list: Unique available backends.
Raises:
QiskitError: No backends available.
MissingOptionalLibraryError: If qiskit-ibmq-provider is not installed
"""
warnings.warn(
"The qiskit.IBMQ entrypoint and the qiskit-ibmq-provider package ("
"accessible from 'qiskit.providers.ibmq`) are deprecated and will be removed "
"in a future release. Instead you should use the qiskit-ibm-provider package "
"which is accessible from 'qiskit_ibm_provider'.",
DeprecationWarning,
stacklevel=3,
)
try:
from qiskit.providers.ibmq import IBMQ
except ImportError as ex:
raise MissingOptionalLibraryError(
libname="qiskit-ibmq-provider",
name="get_unique_backends",
pip_install="pip install qiskit-ibmq-provider",
) from ex
backends = []
for provider in IBMQ.providers():
for backend in provider.backends():
backends.append(backend)
unique_hardware_backends = []
unique_names = []
for back in backends:
if back.name() not in unique_names and not back.configuration().simulator:
unique_hardware_backends.append(back)
unique_names.append(back.name())
if not unique_hardware_backends:
raise QiskitError("No backends available.")
return unique_hardware_backends
|
41,904 |
def _get_contour_plot(study: Study, params: Optional[List[str]] = None) -> "Axes":
# Calculate basic numbers for plotting.
trials = [trial for trial in study.trials if trial.state == TrialState.COMPLETE]
if len(trials) == 0:
_logger.warning("Your study does not have any completed trials.")
_, ax = plt.subplots()
return ax
all_params = {p_name for t in trials for p_name in t.params.keys()}
if params is None:
sorted_params = sorted(list(all_params))
elif len(params) <= 1:
_logger.warning("The length of params must be greater than 1.")
fig, ax = plt.subplots()
return ax
else:
for input_p_name in params:
if input_p_name not in all_params:
raise ValueError("Parameter {} does not exist in your study.".format(input_p_name))
sorted_params = sorted(list(set(params)))
n_params = len(sorted_params)
plt.style.use("ggplot") # Use ggplot style sheet for similar outputs to plotly.
if n_params == 2:
# Set up the graph style.
fig, axs = plt.subplots()
axs.set_title("Contour Plot")
cmap = _set_cmap(study)
contour_point_num = 1000
# Prepare data and draw contour plots.
if params:
x_param = params[0]
y_param = params[1]
else:
x_param = sorted_params[0]
y_param = sorted_params[1]
cs = _generate_contour_subplot(trials, x_param, y_param, axs, cmap, contour_point_num)
if isinstance(cs, ContourSet):
axcb = fig.colorbar(cs)
axcb.set_label("Objective Value")
else:
# Set up the graph style.
fig, axs = plt.subplots(n_params, n_params)
fig.suptitle("Contour Plot")
cmap = _set_cmap(study)
contour_point_num = 100
# Prepare data and draw contour plots.
cs_list = []
for x_i, x_param in enumerate(sorted_params):
for y_i, y_param in enumerate(sorted_params):
ax = axs[y_i, x_i]
cs = _generate_contour_subplot(
trials, x_param, y_param, ax, cmap, contour_point_num
)
if isinstance(cs, ContourSet):
cs_list.append(cs)
if cs_list:
axcb = fig.colorbar(cs_list[0], ax=axs)
axcb.set_label("Objective Value")
return axs
|
def _get_contour_plot(study: Study, params: Optional[List[str]] = None) -> "Axes":
# Calculate basic numbers for plotting.
trials = [trial for trial in study.trials if trial.state == TrialState.COMPLETE]
if len(trials) == 0:
_logger.warning("Your study does not have any completed trials.")
_, ax = plt.subplots()
return ax
all_params = {p_name for t in trials for p_name in t.params.keys()}
if params is None:
sorted_params = sorted(list(all_params))
elif len(params) <= 1:
_logger.warning("The length of params must be greater than 1.")
_, ax = plt.subplots()
return ax
else:
for input_p_name in params:
if input_p_name not in all_params:
raise ValueError("Parameter {} does not exist in your study.".format(input_p_name))
sorted_params = sorted(list(set(params)))
n_params = len(sorted_params)
plt.style.use("ggplot") # Use ggplot style sheet for similar outputs to plotly.
if n_params == 2:
# Set up the graph style.
fig, axs = plt.subplots()
axs.set_title("Contour Plot")
cmap = _set_cmap(study)
contour_point_num = 1000
# Prepare data and draw contour plots.
if params:
x_param = params[0]
y_param = params[1]
else:
x_param = sorted_params[0]
y_param = sorted_params[1]
cs = _generate_contour_subplot(trials, x_param, y_param, axs, cmap, contour_point_num)
if isinstance(cs, ContourSet):
axcb = fig.colorbar(cs)
axcb.set_label("Objective Value")
else:
# Set up the graph style.
fig, axs = plt.subplots(n_params, n_params)
fig.suptitle("Contour Plot")
cmap = _set_cmap(study)
contour_point_num = 100
# Prepare data and draw contour plots.
cs_list = []
for x_i, x_param in enumerate(sorted_params):
for y_i, y_param in enumerate(sorted_params):
ax = axs[y_i, x_i]
cs = _generate_contour_subplot(
trials, x_param, y_param, ax, cmap, contour_point_num
)
if isinstance(cs, ContourSet):
cs_list.append(cs)
if cs_list:
axcb = fig.colorbar(cs_list[0], ax=axs)
axcb.set_label("Objective Value")
return axs
|
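This helper sits behind Optuna's matplotlib contour plot. A brief usage sketch through the public API (the toy objective is an assumption for illustration; with exactly two parameters a single Axes is returned, matching the two-parameter branch above):

import optuna
from optuna.visualization.matplotlib import plot_contour

def objective(trial):
    # Toy quadratic objective, purely for illustration.
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return (x - 2) ** 2 + (y + 1) ** 2

study = optuna.create_study()
study.optimize(objective, n_trials=30)
ax = plot_contour(study, params=["x", "y"])  # returns a matplotlib Axes for two params
ax.figure.savefig("contour.png")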
53,939 |
def _get_plugin_msg_info(
name: str, version_s: str, core: Type[dbt.semver.VersionSpecifier]
) -> Tuple[str, str]:
plugin = dbt.semver.VersionSpecifier.from_version_string(version_s)
latest_plugin = get_latest_version(version_url=get_package_pypi_url(name))
update_msg = ""
if plugin.major != core.major and plugin.minor != core.minor:
compatibility_msg = red("Not compatible!")
update_msg = (
f" Your version of dbt-{name} is not compatible with core!\n"
" You can find instructions for upgrading here:\n"
" https://docs.getdbt.com/dbt-cli/install/overview"
)
return (compatibility_msg, update_msg)
if not latest_plugin:
compatibility_msg = yellow("No PYPI version available")
return (compatibility_msg, update_msg)
if plugin < latest_plugin:
compatibility_msg = yellow("Update available!")
update_msg = (
f" Your version of dbt-{name} is out of date! "
"You can find instructions for upgrading here:\n"
" https://docs.getdbt.com/dbt-cli/install/overview"
)
elif plugin > latest_plugin:
compatibility_msg = green("Ahead of latest version!")
else:
compatibility_msg = green("Up to date!")
return (compatibility_msg, update_msg)
|
def _get_plugin_msg_info(
name: str, version_s: str, core: Type[dbt.semver.VersionSpecifier]
) -> Tuple[str, str]:
plugin = dbt.semver.VersionSpecifier.from_version_string(version_s)
latest_plugin = get_latest_version(version_url=get_package_pypi_url(name))
update_msg = ""
if plugin.major != core.major and plugin.minor != core.minor:
compatibility_msg = red("Not compatible!")
update_msg = (
f" Your version of dbt-{name} is not compatible with core!\n"
" You can find instructions for upgrading here:\n"
" https://docs.getdbt.com/dbt-cli/install/overview"
)
return (compatibility_msg, update_msg)
if not latest_plugin:
msg = f"The latest version of dbt-{name} could not be determined from {get_package_pypi_url(name)}"
compatibility_msg = yellow(msg)
return (compatibility_msg, update_msg)
if plugin < latest_plugin:
compatibility_msg = yellow("Update available!")
update_msg = (
f" Your version of dbt-{name} is out of date! "
"You can find instructions for upgrading here:\n"
" https://docs.getdbt.com/dbt-cli/install/overview"
)
elif plugin > latest_plugin:
compatibility_msg = green("Ahead of latest version!")
else:
compatibility_msg = green("Up to date!")
return (compatibility_msg, update_msg)
|
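The version check above relies on only two pieces of dbt.semver that appear in the snippet itself: VersionSpecifier.from_version_string for parsing and the ordering operators for comparison. A minimal sketch of just that part (the version strings are made up):

import dbt.semver

plugin = dbt.semver.VersionSpecifier.from_version_string("1.3.0")
latest = dbt.semver.VersionSpecifier.from_version_string("1.4.1")

# The same comparisons _get_plugin_msg_info uses to pick a message.
print(plugin < latest)             # True  -> "Update available!"
print(plugin.major, plugin.minor)  # components compared against core's major/minor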
28,107 |
def notify_journal_entry_cqc(cmdr, is_beta, entry, state):
"""
Send a journal entry to each plugin.
:param cmdr: The Cmdr name, or None if not yet known
:param entry: The journal entry as a dictionary
:param state: A dictionary containing info about the Cmdr, current ship and cargo
:param is_beta: whether the player is in a Beta universe.
:returns: Error message from the first plugin that returns one (if any)
"""
error = None
for plugin in PLUGINS:
journal_entry = plugin._get_func('journal_entry_cqc')
if journal_entry:
try:
# Pass a copy of the journal entry in case the callee modifies it
newerror = journal_entry(cmdr, is_beta, dict(entry), dict(state))
error = error or newerror
except Exception as e:
logger.exception(f'Plugin "{plugin.name}" failed')
return error
|
def notify_journal_entry_cqc(cmdr, is_beta, entry, state):
"""
Send a journal entry to each plugin.
:param cmdr: The Cmdr name, or None if not yet known
:param entry: The journal entry as a dictionary
:param state: A dictionary containing info about the Cmdr, current ship and cargo
:param is_beta: whether the player is in a Beta universe.
:returns: Error message from the first plugin that returns one (if any)
"""
error = None
for plugin in PLUGINS:
journal_entry = plugin._get_func('journal_entry_cqc')
if journal_entry is not None and callable(journal_entry):
try:
# Pass a copy of the journal entry in case the callee modifies it
newerror = journal_entry(cmdr, is_beta, dict(entry), dict(state))
error = error or newerror
except Exception as e:
logger.exception(f'Plugin "{plugin.name}" failed')
return error
|
46,671 |
def _make_pubsubs(hosts, pubsub_routers):
if len(pubsub_routers) != len(hosts):
raise ValueError(
f"lenght of pubsub_routers={pubsub_routers} should be equaled to "
f"hosts={hosts}"
)
return tuple(
Pubsub(
host=host,
router=router,
my_id=host.get_id(),
)
for host, router in zip(hosts, pubsub_routers)
)
|
def _make_pubsubs(hosts, pubsub_routers):
if len(pubsub_routers) != len(hosts):
raise ValueError(
f"lenght of pubsub_routers={pubsub_routers} should be equaled to "
f"length of hosts={hosts}"
)
return tuple(
Pubsub(
host=host,
router=router,
my_id=host.get_id(),
)
for host, router in zip(hosts, pubsub_routers)
)
|
47,737 |
def test__api__fix_string_specific_exclude():
"""Basic checking of lint functionality with a specific rule excludsion."""
result = sqlfluff.fix(my_bad_query, exclude_rules="L036")
# Check actual result
assert result == "SELECT *, 1, blah AS foo FROM mytable\n"
|
def test__api__fix_string_specific_exclude():
"""Basic checking of lint functionality with a specific rule exclusion."""
result = sqlfluff.fix(my_bad_query, exclude_rules="L036")
# Check actual result
assert result == "SELECT *, 1, blah AS foo FROM mytable\n"
|
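A companion check through the same simple API, assuming sqlfluff.lint accepts the exclude_rules argument in the same way as sqlfluff.fix does in this test (the query string below is a stand-in for the my_bad_query fixture, not taken from the source):

import sqlfluff

query = "SeLEct  *, 1, blah as  fOO  from myTable"
violations = sqlfluff.lint(query, exclude_rules="L036")
# Each violation is a dict; L036 should no longer appear among the reported codes.
print(sorted({v["code"] for v in violations}))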
2,945 |
def hash_pandas_object(
obj, index=True, encoding="utf8", hash_key=None, categorize=True
):
"""
Return a data hash of the Index/Series/DataFrame.
Parameters
----------
index : bool, default True
include the index in the hash (if Series/DataFrame)
encoding : str, default 'utf8'
encoding for data & key when str
hash_key : str key to encode, default to _default_hash_key
categorize : bool, default True
Whether to first categorize object arrays before hashing. This is more
efficient when the array contains duplicate values.
.. versionadded:: 0.20.0
Returns
-------
Series of uint64, same length as the object
"""
from pandas import Series
if hash_key is None:
hash_key = _default_hash_key
if isinstance(obj, ABCMultiIndex):
return Series(hash_tuples(obj, encoding, hash_key), dtype="uint64", copy=False)
if isinstance(obj, ABCIndexClass):
h = hash_array(obj.values, encoding, hash_key, categorize).astype(
"uint64", copy=False
)
h = Series(h, index=obj, dtype="uint64", copy=False)
elif isinstance(obj, ABCSeries):
h = hash_array(obj.values, encoding, hash_key, categorize).astype(
"uint64", copy=False
)
if index:
index_iter = (
hash_pandas_object(
obj.index,
index=False,
encoding=encoding,
hash_key=hash_key,
categorize=categorize,
).values
for _ in [None]
)
arrays = itertools.chain([h], index_iter)
h = _combine_hash_arrays(arrays, 2)
h = Series(h, index=obj.index, dtype="uint64", copy=False)
elif isinstance(obj, ABCDataFrame):
hashes = (hash_array(series.values) for _, series in obj.items())
num_items = len(obj.columns)
if index:
index_hash_generator = (
hash_pandas_object(
obj.index,
index=False,
encoding=encoding,
hash_key=hash_key,
categorize=categorize,
).values # noqa
for _ in [None]
)
num_items += 1
hashes = itertools.chain(hashes, index_hash_generator)
h = _combine_hash_arrays(hashes, num_items)
h = Series(h, index=obj.index, dtype="uint64", copy=False)
else:
raise TypeError("Unexpected type for hashing %s" % type(obj))
return h
|
def hash_pandas_object(
obj, index=True, encoding="utf8", hash_key=None, categorize=True
):
"""
Return a data hash of the Index/Series/DataFrame.
Parameters
----------
index : bool, default True
include the index in the hash (if Series/DataFrame)
encoding : str, default 'utf8'
encoding for data & key when str
hash_key : str, default '_default_hash_key'
categorize : bool, default True
Whether to first categorize object arrays before hashing. This is more
efficient when the array contains duplicate values.
.. versionadded:: 0.20.0
Returns
-------
Series of uint64, same length as the object
"""
from pandas import Series
if hash_key is None:
hash_key = _default_hash_key
if isinstance(obj, ABCMultiIndex):
return Series(hash_tuples(obj, encoding, hash_key), dtype="uint64", copy=False)
if isinstance(obj, ABCIndexClass):
h = hash_array(obj.values, encoding, hash_key, categorize).astype(
"uint64", copy=False
)
h = Series(h, index=obj, dtype="uint64", copy=False)
elif isinstance(obj, ABCSeries):
h = hash_array(obj.values, encoding, hash_key, categorize).astype(
"uint64", copy=False
)
if index:
index_iter = (
hash_pandas_object(
obj.index,
index=False,
encoding=encoding,
hash_key=hash_key,
categorize=categorize,
).values
for _ in [None]
)
arrays = itertools.chain([h], index_iter)
h = _combine_hash_arrays(arrays, 2)
h = Series(h, index=obj.index, dtype="uint64", copy=False)
elif isinstance(obj, ABCDataFrame):
hashes = (hash_array(series.values) for _, series in obj.items())
num_items = len(obj.columns)
if index:
index_hash_generator = (
hash_pandas_object(
obj.index,
index=False,
encoding=encoding,
hash_key=hash_key,
categorize=categorize,
).values # noqa
for _ in [None]
)
num_items += 1
hashes = itertools.chain(hashes, index_hash_generator)
h = _combine_hash_arrays(hashes, num_items)
h = Series(h, index=obj.index, dtype="uint64", copy=False)
else:
raise TypeError("Unexpected type for hashing %s" % type(obj))
return h
|
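This function is exposed publicly as pd.util.hash_pandas_object; a short usage sketch showing the effect of the index flag:

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 2], "b": ["x", "y", "y"]})

# With the index included (the default), every row hashes differently.
print(pd.util.hash_pandas_object(df))
# Without the index, the two identical rows produce identical uint64 hashes.
print(pd.util.hash_pandas_object(df, index=False))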
10,479 |
def process_version_added(version_added):
if not isinstance(version_added, str):
return version_added
if ':' not in version_added:
return version_added
# Strip tag from version_added. It suffices to do this here since
# this is only used for ansible-base, and there the only valid tag
# is `ansible.builtin:`.
return version_added[version_added.index(':') + 1:]
|
def process_version_added(version_added):
if not isinstance(version_added, string_types):
return version_added
if ':' not in version_added:
return version_added
# Strip tag from version_added. It suffices to do this here since
# this is only used for ansible-base, and there the only valid tag
# is `ansible.builtin:`.
return version_added[version_added.index(':') + 1:]
|
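A standalone illustration of the tag-stripping behaviour (the helper name below is mine, not part of the source):

def strip_collection_tag(version_added):
    # Mirrors process_version_added: drop an optional "collection:" prefix, pass
    # non-strings and untagged values through unchanged.
    if not isinstance(version_added, str) or ':' not in version_added:
        return version_added
    return version_added[version_added.index(':') + 1:]

print(strip_collection_tag("ansible.builtin:2.10"))  # -> 2.10
print(strip_collection_tag("2.10"))                  # -> 2.10
print(strip_collection_tag(2.10))                    # -> 2.1 (non-string, unchanged)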
30,156 |
def fetch_consumption(zone_key="CA-QC", session=None, logger=None):
data = _fetch_quebec_consumption()
for elem in reversed(data["details"]):
if "demandeTotal" in elem["valeurs"]:
return {
"zoneKey": zone_key,
"datetime": elem["date"],
"consumption": elem["valeurs"]["demandeTotal"],
"source": "hydroquebec.com",
}
|
def fetch_consumption(zone_key="CA-QC", session=None, target_datetime=None, logger=None):
data = _fetch_quebec_consumption()
for elem in reversed(data["details"]):
if "demandeTotal" in elem["valeurs"]:
return {
"zoneKey": zone_key,
"datetime": elem["date"],
"consumption": elem["valeurs"]["demandeTotal"],
"source": "hydroquebec.com",
}
|
19,989 |
def find_color_card(img, threshold='adaptgauss', threshvalue=125, blurry=False, background='dark'):
"""Automatically detects a color card and output info to use in create_color_card_mask function
Inputs:
img = Input RGB image data containing a color card.
    threshold = Threshold method, either 'normal', 'otsu', or 'adaptgauss', optional (default 'adaptgauss')
thresh_value = Thresholding value, optional (default 125)
blurry = Bool (default False) if True then image sharpening applied
    background = Type of image background either 'dark' or 'light' (default 'dark'); if 'light' then histogram
expansion applied to better detect edges, but histogram expansion will be hindered if there
is a dark background
Returns:
df = Dataframe containing information about the filtered contours
start_coord = Two element tuple of starting coordinates, location of the top left pixel detected
spacing = Two element tuple of spacing between centers of chips
:param img: numpy.ndarray
:param threshold: str
:param threshvalue: int
:param blurry: bool
:param background: str
:return df: pandas.core.frame.DataFrame
:return start_coord: tuple
:return spacing: tuple
"""
# Imports
import skimage
import pandas as pd
from scipy.spatial.distance import squareform, pdist
# Get image attributes
height, width, channels = img.shape
totalpx = float(height * width)
# Minimum and maximum square size based upon 12 MP image
minarea = 1000. / 12000000. * totalpx
maxarea = 8000000. / 12000000. * totalpx
# Create gray image for further processing
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Laplacian Fourier Transform detection of blurriness
blurfactor = cv2.Laplacian(gray_img, cv2.CV_64F).var()
# If image is blurry then try to deblur using kernel
if blurry:
# from https://www.packtpub.com/mapt/book/Application+Development/9781785283932/2/ch02lvl1sec22/Sharpening
kernel = np.array([[-1, -1, -1, -1, -1],
[-1, 2, 2, 2, -1],
[-1, 2, 8, 2, -1],
[-1, 2, 2, 2, -1],
[-1, -1, -1, -1, -1]]) / 8.0
# Store result back out for further processing
gray_img = cv2.filter2D(gray_img, -1, kernel)
# In darker samples, the expansion of the histogram hinders finding the squares due to problems with the otsu
# thresholding. If your image has a bright background then apply
if background == 'light':
clahe = cv2.createCLAHE(clipLimit=3.25, tileGridSize=(4, 4))
# apply CLAHE histogram expansion to find squares better with canny edge detection
gray_img = clahe.apply(gray_img)
elif background != 'dark':
fatal_error('Background parameter ' + str(background) + ' is not "light" or "dark"!')
# Thresholding
if threshold == "otsu":
# Blur slightly so defects on card squares and background patterns are less likely to be picked up
gaussian = cv2.GaussianBlur(gray_img, (5, 5), 0)
ret, threshold = cv2.threshold(gaussian, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
elif threshold == "normal":
# Blur slightly so defects on card squares and background patterns are less likely to be picked up
gaussian = cv2.GaussianBlur(gray_img, (5, 5), 0)
ret, threshold = cv2.threshold(gaussian, threshvalue, 255, cv2.THRESH_BINARY)
elif threshold == "adaptgauss":
# Blur slightly so defects on card squares and background patterns are less likely to be picked up
gaussian = cv2.GaussianBlur(gray_img, (11, 11), 0)
threshold = cv2.adaptiveThreshold(gaussian, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY_INV, 51, 2)
else:
fatal_error('Threshold ' + str(threshold) + ' is not "otsu", "normal", or "adaptgauss"!')
# Apply automatic Canny edge detection using the computed median
edges = skimage.feature.canny(threshold)
edges.dtype = 'uint8'
# Compute contours to find the squares of the card
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Variable of which contour is which
mindex = []
# Variable to store moments
mu = []
# Variable to x,y coordinates in tuples
mc = []
# Variable to x coordinate as integer
mx = []
# Variable to y coordinate as integer
my = []
# Variable to store area
marea = []
# Variable to store whether something is a square (1) or not (0)
msquare = []
# Variable to store square approximation coordinates
msquarecoords = []
# Variable to store child hierarchy element
mchild = []
# Fitted rectangle height
mheight = []
# Fitted rectangle width
mwidth = []
# Ratio of height/width
mwhratio = []
# Extract moments from contour image
for x in range(0, len(contours)):
mu.append(cv2.moments(contours[x]))
marea.append(cv2.contourArea(contours[x]))
mchild.append(int(hierarchy[0][x][2]))
mindex.append(x)
# Cycle through moment data and compute location for each moment
for m in mu:
if m['m00'] != 0: # This is the area term for a moment
mc.append((int(m['m10'] / m['m00']), int(m['m01']) / m['m00']))
mx.append(int(m['m10'] / m['m00']))
my.append(int(m['m01'] / m['m00']))
else:
mc.append((0, 0))
mx.append((0))
my.append((0))
# Loop over our contours and extract data about them
for index, c in enumerate(contours):
# Area isn't 0, but greater than min-area and less than max-area
if marea[index] != 0 and minarea < marea[index] < maxarea:
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.15 * peri, True)
center, wh, angle = cv2.minAreaRect(c) # Rotated rectangle
mwidth.append(wh[0])
mheight.append(wh[1])
mwhratio.append(wh[0] / wh[1])
msquare.append(len(approx))
# If the approx contour has 4 points then we can assume we have 4-sided objects
if len(approx) == 4 or 5:
msquarecoords.append(approx)
else: # It's not square
msquare.append(0)
msquarecoords.append(0)
else: # Contour has area of 0, not interesting
msquare.append(0)
msquarecoords.append(0)
mwidth.append(0)
mheight.append(0)
mwhratio.append(0)
# Make a pandas df from data for filtering out junk
locarea = {'index': mindex, 'X': mx, 'Y': my, 'width': mwidth, 'height': mheight, 'WHratio': mwhratio,
'Area': marea, 'square': msquare, 'child': mchild}
df = pd.DataFrame(locarea)
# Add calculated blur factor to output
df['blurriness'] = blurfactor
# Filter df for attributes that would isolate squares of reasonable size
df = df[(df['Area'] > minarea) & (df['Area'] < maxarea) & (df['child'] != -1) &
(df['square'].isin([4, 5])) & (df['WHratio'] < 1.2) & (df['WHratio'] > 0.85)]
# Filter nested squares from dataframe, was having issues with median being towards smaller nested squares
df = df[~(df['index'].isin(df['index'] + 1))]
# Count up squares that are within a given radius, more squares = more likelihood of them being the card
# Median width of square time 2.5 gives proximity radius for searching for similar squares
median_sq_width_px = df["width"].median()
# Squares that are within 6 widths of the current square
pixeldist = median_sq_width_px * 6
# Computes euclidean distance matrix for the x and y contour centroids
distmatrix = pd.DataFrame(squareform(pdist(df[['X', 'Y']])))
    # For each square, count how many other squares are within pixeldist pixels
distmatrixflat = distmatrix.apply(lambda dist: dist[dist <= pixeldist].count() - 1, axis=1)
# Append distprox summary to dataframe
df = df.assign(distprox=distmatrixflat.values)
# Compute how similar in area the squares are. lots of similar values indicates card
# isolate area measurements
filtered_area = df['Area']
# Create empty matrix for storing comparisons
sizecomp = np.zeros((len(filtered_area), len(filtered_area)))
# Double loop through all areas to compare to each other
for p in range(0, len(filtered_area)):
for o in range(0, len(filtered_area)):
big = max(filtered_area.iloc[p], filtered_area.iloc[o])
small = min(filtered_area.iloc[p], filtered_area.iloc[o])
pct = 100. * (small / big)
sizecomp[p][o] = pct
# How many comparisons given 90% square similarity
sizematrix = pd.DataFrame(sizecomp).apply(lambda sim: sim[sim >= 90].count() - 1, axis=1)
# Append sizeprox summary to dataframe
df = df.assign(sizeprox=sizematrix.values)
# Reorder dataframe for better printing
df = df[['index', 'X', 'Y', 'width', 'height', 'WHratio', 'Area', 'square', 'child',
'blurriness', 'distprox', 'sizeprox']]
# Loosely filter for size and distance (relative size to median)
minsqwidth = median_sq_width_px * 0.80
maxsqwidth = median_sq_width_px * 1.2
df = df[(df['distprox'] >= 5) & (df['sizeprox'] >= 5) & (df['width'] > minsqwidth) &
(df['width'] < maxsqwidth)]
# Filter for proximity again to root out stragglers
# Find and count up squares that are within given radius,
# more squares = more likelihood of them being the card
    # Median square width times 5 gives the proximity radius for searching for similar squares
median_sq_width_px = df["width"].median()
    # Squares that are within 5 widths of the current square
pixeldist = median_sq_width_px * 5
# Computes euclidean distance matrix for the x and y contour centroids
distmatrix = pd.DataFrame(squareform(pdist(df[['X', 'Y']])))
    # For each square, count how many other squares are within pixeldist pixels
distmatrixflat = distmatrix.apply(lambda dist: dist[dist <= pixeldist].count() - 1, axis=1)
# Append distprox summary to dataframe
df = df.assign(distprox=distmatrixflat.values)
# Filter results for distance proximity to other squares
df = df[(df['distprox'] >= 4)]
# Extract the starting coordinate
start_coord = (int(df['X'].min()), int(df['Y'].min()))
    # Calculate the coordinate ranges; a 4x6 card spans 3 gaps on the short side and 5 on the long side
spacingx_short = (df['X'].max() - df['X'].min()) / 3
spacingy_short = (df['Y'].max() - df['Y'].min()) / 3
spacingx_long = (df['X'].max() - df['X'].min()) / 5
spacingy_long = (df['Y'].max() - df['Y'].min()) / 5
# Chip spacing since 4x6 card assumed
spacing_short = min(spacingx_short, spacingy_short)
spacing_long = max(spacingx_long, spacingy_long)
# Smaller spacing measurement might have a chip missing
spacing = int(max(spacing_short, spacing_long))
spacing = (spacing, spacing)
return df, start_coord, spacing
|
def find_color_card(rgb_img, threshold='adaptgauss', threshvalue=125, blurry=False, background='dark'):
"""Automatically detects a color card and output info to use in create_color_card_mask function
Inputs:
img = Input RGB image data containing a color card.
threshold = Threshold method, either 'normal', 'otsu', or 'adaptgauss', optional (default 'adaptgauss)
thresh_value = Thresholding value, optional (default 125)
blurry = Bool (default False) if True then image sharpening applied
background = Type of image background either 'dark' or 'light (default 'dark'); if 'light' then histogram
expansion applied to better detect edges, but histogram expansion will be hindered if there
is a dark background
Returns:
df = Dataframe containing information about the filtered contours
start_coord = Two element tuple of starting coordinates, location of the top left pixel detected
spacing = Two element tuple of spacing between centers of chips
    :param rgb_img: numpy.ndarray
:param threshold: str
:param threshvalue: int
:param blurry: bool
:param background: str
:return df: pandas.core.frame.DataFrame
:return start_coord: tuple
:return spacing: tuple
"""
# Imports
import skimage
import pandas as pd
from scipy.spatial.distance import squareform, pdist
# Get image attributes
    height, width, channels = rgb_img.shape
totalpx = float(height * width)
# Minimum and maximum square size based upon 12 MP image
minarea = 1000. / 12000000. * totalpx
maxarea = 8000000. / 12000000. * totalpx
# Create gray image for further processing
    gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian as a measure of blurriness
blurfactor = cv2.Laplacian(gray_img, cv2.CV_64F).var()
# If image is blurry then try to deblur using kernel
if blurry:
# from https://www.packtpub.com/mapt/book/Application+Development/9781785283932/2/ch02lvl1sec22/Sharpening
kernel = np.array([[-1, -1, -1, -1, -1],
[-1, 2, 2, 2, -1],
[-1, 2, 8, 2, -1],
[-1, 2, 2, 2, -1],
[-1, -1, -1, -1, -1]]) / 8.0
# Store result back out for further processing
gray_img = cv2.filter2D(gray_img, -1, kernel)
    # In images with a dark background, histogram expansion hinders finding the squares due to problems with the Otsu
    # thresholding. If the image has a light background, apply CLAHE histogram equalization to improve edge detection
if background == 'light':
clahe = cv2.createCLAHE(clipLimit=3.25, tileGridSize=(4, 4))
# apply CLAHE histogram expansion to find squares better with canny edge detection
gray_img = clahe.apply(gray_img)
elif background != 'dark':
fatal_error('Background parameter ' + str(background) + ' is not "light" or "dark"!')
# Thresholding
if threshold == "otsu":
# Blur slightly so defects on card squares and background patterns are less likely to be picked up
gaussian = cv2.GaussianBlur(gray_img, (5, 5), 0)
ret, threshold = cv2.threshold(gaussian, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
elif threshold == "normal":
# Blur slightly so defects on card squares and background patterns are less likely to be picked up
gaussian = cv2.GaussianBlur(gray_img, (5, 5), 0)
ret, threshold = cv2.threshold(gaussian, threshvalue, 255, cv2.THRESH_BINARY)
elif threshold == "adaptgauss":
# Blur slightly so defects on card squares and background patterns are less likely to be picked up
gaussian = cv2.GaussianBlur(gray_img, (11, 11), 0)
threshold = cv2.adaptiveThreshold(gaussian, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY_INV, 51, 2)
else:
fatal_error('Threshold ' + str(threshold) + ' is not "otsu", "normal", or "adaptgauss"!')
    # Apply Canny edge detection to the thresholded image
edges = skimage.feature.canny(threshold)
edges.dtype = 'uint8'
# Compute contours to find the squares of the card
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Variable of which contour is which
mindex = []
# Variable to store moments
mu = []
# Variable to x,y coordinates in tuples
mc = []
# Variable to x coordinate as integer
mx = []
# Variable to y coordinate as integer
my = []
# Variable to store area
marea = []
# Variable to store whether something is a square (1) or not (0)
msquare = []
# Variable to store square approximation coordinates
msquarecoords = []
# Variable to store child hierarchy element
mchild = []
# Fitted rectangle height
mheight = []
# Fitted rectangle width
mwidth = []
# Ratio of height/width
mwhratio = []
# Extract moments from contour image
for x in range(0, len(contours)):
mu.append(cv2.moments(contours[x]))
marea.append(cv2.contourArea(contours[x]))
mchild.append(int(hierarchy[0][x][2]))
mindex.append(x)
# Cycle through moment data and compute location for each moment
for m in mu:
if m['m00'] != 0: # This is the area term for a moment
            mc.append((int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])))
mx.append(int(m['m10'] / m['m00']))
my.append(int(m['m01'] / m['m00']))
else:
mc.append((0, 0))
mx.append((0))
my.append((0))
# Loop over our contours and extract data about them
for index, c in enumerate(contours):
# Area isn't 0, but greater than min-area and less than max-area
if marea[index] != 0 and minarea < marea[index] < maxarea:
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.15 * peri, True)
center, wh, angle = cv2.minAreaRect(c) # Rotated rectangle
mwidth.append(wh[0])
mheight.append(wh[1])
mwhratio.append(wh[0] / wh[1])
            msquare.append(len(approx))
            # If the approximated contour has 4 or 5 points, keep its corner coordinates
            if len(approx) in (4, 5):
                msquarecoords.append(approx)
            else:  # It's not square; append a placeholder to keep the lists aligned
                msquarecoords.append(0)
else: # Contour has area of 0, not interesting
msquare.append(0)
msquarecoords.append(0)
mwidth.append(0)
mheight.append(0)
mwhratio.append(0)
# Make a pandas df from data for filtering out junk
locarea = {'index': mindex, 'X': mx, 'Y': my, 'width': mwidth, 'height': mheight, 'WHratio': mwhratio,
'Area': marea, 'square': msquare, 'child': mchild}
df = pd.DataFrame(locarea)
# Add calculated blur factor to output
df['blurriness'] = blurfactor
# Filter df for attributes that would isolate squares of reasonable size
df = df[(df['Area'] > minarea) & (df['Area'] < maxarea) & (df['child'] != -1) &
(df['square'].isin([4, 5])) & (df['WHratio'] < 1.2) & (df['WHratio'] > 0.85)]
    # Filter nested squares from the dataframe; the median width was being skewed toward smaller nested squares
df = df[~(df['index'].isin(df['index'] + 1))]
# Count up squares that are within a given radius, more squares = more likelihood of them being the card
    # Median square width times 6 gives the proximity radius for searching for similar squares
median_sq_width_px = df["width"].median()
# Squares that are within 6 widths of the current square
pixeldist = median_sq_width_px * 6
# Computes euclidean distance matrix for the x and y contour centroids
distmatrix = pd.DataFrame(squareform(pdist(df[['X', 'Y']])))
    # For each square, count how many other squares are within pixeldist pixels
distmatrixflat = distmatrix.apply(lambda dist: dist[dist <= pixeldist].count() - 1, axis=1)
# Append distprox summary to dataframe
df = df.assign(distprox=distmatrixflat.values)
# Compute how similar in area the squares are. lots of similar values indicates card
# isolate area measurements
filtered_area = df['Area']
# Create empty matrix for storing comparisons
sizecomp = np.zeros((len(filtered_area), len(filtered_area)))
# Double loop through all areas to compare to each other
for p in range(0, len(filtered_area)):
for o in range(0, len(filtered_area)):
big = max(filtered_area.iloc[p], filtered_area.iloc[o])
small = min(filtered_area.iloc[p], filtered_area.iloc[o])
pct = 100. * (small / big)
sizecomp[p][o] = pct
# How many comparisons given 90% square similarity
sizematrix = pd.DataFrame(sizecomp).apply(lambda sim: sim[sim >= 90].count() - 1, axis=1)
# Append sizeprox summary to dataframe
df = df.assign(sizeprox=sizematrix.values)
# Reorder dataframe for better printing
df = df[['index', 'X', 'Y', 'width', 'height', 'WHratio', 'Area', 'square', 'child',
'blurriness', 'distprox', 'sizeprox']]
# Loosely filter for size and distance (relative size to median)
minsqwidth = median_sq_width_px * 0.80
maxsqwidth = median_sq_width_px * 1.2
df = df[(df['distprox'] >= 5) & (df['sizeprox'] >= 5) & (df['width'] > minsqwidth) &
(df['width'] < maxsqwidth)]
# Filter for proximity again to root out stragglers
# Find and count up squares that are within given radius,
# more squares = more likelihood of them being the card
    # Median square width times 5 gives the proximity radius for searching for similar squares
median_sq_width_px = df["width"].median()
    # Squares that are within 5 widths of the current square
pixeldist = median_sq_width_px * 5
# Computes euclidean distance matrix for the x and y contour centroids
distmatrix = pd.DataFrame(squareform(pdist(df[['X', 'Y']])))
    # For each square, count how many other squares are within pixeldist pixels
distmatrixflat = distmatrix.apply(lambda dist: dist[dist <= pixeldist].count() - 1, axis=1)
# Append distprox summary to dataframe
df = df.assign(distprox=distmatrixflat.values)
# Filter results for distance proximity to other squares
df = df[(df['distprox'] >= 4)]
# Extract the starting coordinate
start_coord = (int(df['X'].min()), int(df['Y'].min()))
    # Calculate the coordinate ranges; a 4x6 card spans 3 gaps on the short side and 5 on the long side
spacingx_short = (df['X'].max() - df['X'].min()) / 3
spacingy_short = (df['Y'].max() - df['Y'].min()) / 3
spacingx_long = (df['X'].max() - df['X'].min()) / 5
spacingy_long = (df['Y'].max() - df['Y'].min()) / 5
# Chip spacing since 4x6 card assumed
spacing_short = min(spacingx_short, spacingy_short)
spacing_long = max(spacingx_long, spacingy_long)
# Smaller spacing measurement might have a chip missing
spacing = int(max(spacing_short, spacing_long))
spacing = (spacing, spacing)
return df, start_coord, spacing
|
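The distance-proximity filter in the row above is easier to follow on toy data. Below is a minimal, self-contained sketch (the centroids and widths are invented for illustration and the snippet is not part of the dataset row) showing how the pdist/squareform distance matrix becomes a per-square neighbour count:
import pandas as pd
from scipy.spatial.distance import pdist, squareform

# Five clustered "chips" plus one far-away outlier (hypothetical centroids)
df = pd.DataFrame({'X': [10, 20, 30, 10, 20, 200],
                   'Y': [10, 10, 10, 20, 20, 200],
                   'width': [9, 10, 10, 10, 11, 10]})
pixeldist = df['width'].median() * 6
distmatrix = pd.DataFrame(squareform(pdist(df[['X', 'Y']])))
# For each square, count how many other squares lie within pixeldist pixels
df['distprox'] = distmatrix.apply(lambda d: d[d <= pixeldist].count() - 1, axis=1).values
print(df)  # the outlier ends up with distprox == 0 and would be filtered out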
33,616 |
def check_no_existing_node(node_ip_address, redis_client):
"""A helper method to check there is no node registered
to the cluster of the given ndoe address.
"""
clients = ray.state._parse_client_table(redis_client)
for client in clients:
assert "NodeID" in client
assert "NodeManagerAddress" in client
assert "Alive" in client
if client["Alive"] is False:
continue
if client["NodeManagerAddress"] == node_ip_address:
raise Exception("This Redis instance is already connected to "
"clients with this IP address.")
|
def check_no_existing_node(node_ip_address, redis_client):
"""A helper method to check there is no node registered
to the cluster of the given node address.
"""
clients = ray.state._parse_client_table(redis_client)
for client in clients:
assert "NodeID" in client
assert "NodeManagerAddress" in client
assert "Alive" in client
if client["Alive"] is False:
continue
if client["NodeManagerAddress"] == node_ip_address:
raise Exception("This Redis instance is already connected to "
"clients with this IP address.")
|
35,586 |
def densenet121(pretrained: bool = False, progress: bool = True, **kwargs) -> DenseNet:
r"""Densenet-121 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
memory_efficient (bool) - If True, uses checkpointing. Much more memory efficient,
but slower. Default: *False*. See `"paper" <https://arxiv.org/pdf/1707.06990.pdf>`_
"""
return _densenet('densenet121', 32, (6, 12, 24, 16), 64, pretrained, progress,
**kwargs)
|
def densenet121(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> DenseNet:
r"""Densenet-121 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
memory_efficient (bool) - If True, uses checkpointing. Much more memory efficient,
but slower. Default: *False*. See `"paper" <https://arxiv.org/pdf/1707.06990.pdf>`_
"""
return _densenet('densenet121', 32, (6, 12, 24, 16), 64, pretrained, progress,
**kwargs)
|
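Assuming torch and torchvision are installed, the constructor above can be exercised directly; with the default pretrained=False the weights are randomly initialized, so no download is needed:
import torch
from torchvision.models import densenet121

model = densenet121()  # default pretrained=False, random weights
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # standard ImageNet-sized input
print(logits.shape)  # torch.Size([1, 1000])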
35,082 |
def call_all_topi_funcs(mod, params, target, opt_level=3):
"""Call all TOPI compute to extract auto_scheduler tasks in a Relay program"""
# pylint: disable=import-outside-toplevel
from tvm import relay
# Turn off AutoTVM config not found warnings
old_autotvm_silent = autotvm.GLOBAL_SCOPE.silent
autotvm.GLOBAL_SCOPE.silent = True
with transform.PassContext(
opt_level=opt_level,
config={
"relay.backend.use_auto_scheduler": True,
},
disabled_pass={"AutoSchedulerLayoutRewrite"},
):
compiler = relay.vm.VMCompiler()
if params:
compiler.set_params(params)
mod = tvm.IRModule.from_expr(mod) if isinstance(mod, relay.Function) else mod
try:
compiler.lower(mod, target)
# pylint: disable=broad-except
except Exception:
logger.warning("Got exception in task extraction:\n %s", traceback.format_exc())
finally:
autotvm.GLOBAL_SCOPE.silent = old_autotvm_silent
|
def call_all_topi_funcs(mod, params, target, opt_level=3):
"""Call all TOPI compute to extract auto_scheduler tasks in a Relay program"""
# pylint: disable=import-outside-toplevel
from tvm import relay
# Turn off AutoTVM config not found warnings
old_autotvm_silent = autotvm.GLOBAL_SCOPE.silent
autotvm.GLOBAL_SCOPE.silent = True
with transform.PassContext(
opt_level=opt_level,
config={
"relay.backend.use_auto_scheduler": True,
},
disabled_pass={"AutoSchedulerLayoutRewrite"},
):
compiler = relay.vm.VMCompiler()
if params:
compiler.set_params(params)
mod = tvm.IRModule.from_expr(mod) if isinstance(mod, relay.Function) else mod
try:
compiler.lower(mod, target)
except Exception: # pylint: disable=broad-except
logger.warning("Got exception in task extraction:\n %s", traceback.format_exc())
finally:
autotvm.GLOBAL_SCOPE.silent = old_autotvm_silent
|
41,700 |
def generate_package_hash(full_path: Path) -> str:
sha256_hash = hashlib.sha256()
with open(full_path, "rb") as f:
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
return sha256_hash.hexdigest()
|
def _generate_package_hash(full_path: Path) -> str:
sha256_hash = hashlib.sha256()
with open(full_path, "rb") as f:
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
return sha256_hash.hexdigest()
|
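A quick self-contained check (standard library only, temporary file) that the block-wise reading above yields the same digest as hashing the file contents in one go:
import hashlib
import tempfile
from pathlib import Path

def chunked_sha256(full_path: Path) -> str:
    sha256_hash = hashlib.sha256()
    with open(full_path, "rb") as f:
        for byte_block in iter(lambda: f.read(4096), b""):
            sha256_hash.update(byte_block)
    return sha256_hash.hexdigest()

data = b"x" * 10000  # spans several 4096-byte blocks
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(data)
assert chunked_sha256(Path(tmp.name)) == hashlib.sha256(data).hexdigest()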
6,081 |
def getQueuesResolved(siteDict):
"""
Get the list of queue descriptions merging site/ce/queue parameters and adding some
derived parameters.
:param dict siteDict: dictionary with configuration data as returned by Resources.getQueues() method
:return: S_OK/S_ERROR, Value dictionary per queue with configuration data updated, e.g. for SiteDirector
"""
queueDict = {}
for site in siteDict:
for ce in siteDict[site]:
ceDict = siteDict[site][ce]
qDict = ceDict.pop('Queues')
for queue in qDict:
queueName = '%s_%s' % (ce, queue)
queueDict[queueName] = qDict[queue]
queueDict[queueName] = qDict[queue]
queueDict[queueName]['Queue'] = queue
queueDict[queueName]['Site'] = site
# Evaluate the CPU limit of the queue according to the Glue convention
# To Do: should be a utility
if "maxCPUTime" in queueDict[queueName] and \
"SI00" in queueDict[queueName]:
maxCPUTime = float(queueDict[queueName]['maxCPUTime'])
# For some sites there are crazy values in the CS
maxCPUTime = max(maxCPUTime, 0)
maxCPUTime = min(maxCPUTime, 86400 * 12.5)
si00 = float(queueDict[queueName]['SI00'])
queueCPUTime = 60. / 250. * maxCPUTime * si00
queueDict[queueName]['CPUTime'] = int(queueCPUTime)
# Tags & RequiredTags defined on the Queue level and on the CE level are concatenated
# This also converts them from a string to a list if required.
for tagFieldName in ('Tag', 'RequiredTag'):
ceTags = ceDict.get(tagFieldName, [])
if isinstance(ceTags, basestring):
ceTags = fromChar(ceTags)
queueTags = queueDict[queueName].get(tagFieldName)
if queueTags and isinstance(queueTags, basestring):
queueTags = fromChar(queueTags)
queueDict[queueName][tagFieldName] = queueTags
if ceTags:
if queueTags:
allTags = list(set(ceTags + queueTags))
queueDict[queueName][tagFieldName] = allTags
else:
queueDict[queueName][tagFieldName] = ceTags
# Some parameters can be defined on the CE level and are inherited by all Queues
for parameter in ['MaxRAM', 'NumberOfProcessors', 'WholeNode']:
queueParameter = queueDict[queueName].get(parameter)
ceParameter = ceDict.get(parameter)
if ceParameter or queueParameter:
queueDict[queueName][parameter] = ceParameter if not queueParameter \
else queueParameter
# If we have a multi-core queue add MultiProcessor tag
if queueDict[queueName].get('NumberOfProcessors', 1) > 1:
queueDict[queueName].setdefault('Tag', []).append('MultiProcessor')
queueDict[queueName]['CEName'] = ce
queueDict[queueName]['GridCE'] = ce
queueDict[queueName]['CEType'] = ceDict['CEType']
queueDict[queueName]['GridMiddleware'] = ceDict['CEType']
queueDict[queueName]['QueueName'] = queue
platform = ''
if "Platform" in queueDict[queueName]:
platform = queueDict[queueName]['Platform']
elif "Platform" in ceDict:
platform = ceDict['Platform']
elif "OS" in ceDict:
architecture = ceDict.get('architecture', 'x86_64')
platform = '_'.join([architecture, ceDict['OS']])
queueDict[queueName]['Platform'] = platform
if "Platform" not in queueDict[queueName] and platform:
result = getDIRACPlatform(platform)
if result['OK']:
queueDict[queueName]['Platform'] = result['Value'][0]
return S_OK(queueDict)
|
def getQueuesResolved(siteDict):
"""
Get the list of queue descriptions merging site/ce/queue parameters and adding some
derived parameters.
:param dict siteDict: dictionary with configuration data as returned by Resources.getQueues() method
:return: S_OK/S_ERROR, Value dictionary per queue with configuration data updated, e.g. for SiteDirector
"""
queueDict = {}
for site in siteDict:
for ce, ceDict in siteDict[site].items():
ceDict = siteDict[site][ce]
qDict = ceDict.pop('Queues')
for queue in qDict:
queueName = '%s_%s' % (ce, queue)
queueDict[queueName] = qDict[queue]
queueDict[queueName] = qDict[queue]
queueDict[queueName]['Queue'] = queue
queueDict[queueName]['Site'] = site
# Evaluate the CPU limit of the queue according to the Glue convention
# To Do: should be a utility
if "maxCPUTime" in queueDict[queueName] and \
"SI00" in queueDict[queueName]:
maxCPUTime = float(queueDict[queueName]['maxCPUTime'])
# For some sites there are crazy values in the CS
maxCPUTime = max(maxCPUTime, 0)
maxCPUTime = min(maxCPUTime, 86400 * 12.5)
si00 = float(queueDict[queueName]['SI00'])
queueCPUTime = 60. / 250. * maxCPUTime * si00
queueDict[queueName]['CPUTime'] = int(queueCPUTime)
# Tags & RequiredTags defined on the Queue level and on the CE level are concatenated
# This also converts them from a string to a list if required.
for tagFieldName in ('Tag', 'RequiredTag'):
ceTags = ceDict.get(tagFieldName, [])
if isinstance(ceTags, basestring):
ceTags = fromChar(ceTags)
queueTags = queueDict[queueName].get(tagFieldName)
if queueTags and isinstance(queueTags, basestring):
queueTags = fromChar(queueTags)
queueDict[queueName][tagFieldName] = queueTags
if ceTags:
if queueTags:
allTags = list(set(ceTags + queueTags))
queueDict[queueName][tagFieldName] = allTags
else:
queueDict[queueName][tagFieldName] = ceTags
# Some parameters can be defined on the CE level and are inherited by all Queues
for parameter in ['MaxRAM', 'NumberOfProcessors', 'WholeNode']:
queueParameter = queueDict[queueName].get(parameter)
ceParameter = ceDict.get(parameter)
if ceParameter or queueParameter:
queueDict[queueName][parameter] = ceParameter if not queueParameter \
else queueParameter
# If we have a multi-core queue add MultiProcessor tag
if queueDict[queueName].get('NumberOfProcessors', 1) > 1:
queueDict[queueName].setdefault('Tag', []).append('MultiProcessor')
queueDict[queueName]['CEName'] = ce
queueDict[queueName]['GridCE'] = ce
queueDict[queueName]['CEType'] = ceDict['CEType']
queueDict[queueName]['GridMiddleware'] = ceDict['CEType']
queueDict[queueName]['QueueName'] = queue
platform = ''
if "Platform" in queueDict[queueName]:
platform = queueDict[queueName]['Platform']
elif "Platform" in ceDict:
platform = ceDict['Platform']
elif "OS" in ceDict:
architecture = ceDict.get('architecture', 'x86_64')
platform = '_'.join([architecture, ceDict['OS']])
queueDict[queueName]['Platform'] = platform
if "Platform" not in queueDict[queueName] and platform:
result = getDIRACPlatform(platform)
if result['OK']:
queueDict[queueName]['Platform'] = result['Value'][0]
return S_OK(queueDict)
|
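The Glue CPU-time normalisation in the row above is plain arithmetic; a worked example with purely illustrative numbers (the maxCPUTime and SI00 values are made up):
maxCPUTime = 2880.  # queue CPU limit in minutes (hypothetical)
si00 = 2500.        # SpecInt2000 rating of the worker nodes (hypothetical)
queueCPUTime = 60. / 250. * maxCPUTime * si00
print(int(queueCPUTime))  # 1728000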
32,474 |
def get_domain_command():
results = []
execution_metrics = ExecutionMetrics()
domains_list = argToList(demisto.args()['domain'])
for domain in domains_list:
contents = []
context = {}
headers = [] # type: ignore
domain = extract_domain_name(domain)
try:
whois = get_whois_for_domain(domain)
admin = {
'Country': whois.get('administrativeContactCountry'),
'Email': whois.get('administrativeContactEmail'),
'Name': whois.get('administrativeContactName'),
'Phone': whois.get('administrativeContactTelephone')
}
registrant = {
'Country': whois.get('registrantCountry'),
'Email': whois.get('registrantEmail'),
'Name': whois.get('registrantName'),
'Phone': whois.get('registrantTelephone')
}
first_queried = whois.get('created')
name_servers = whois.get('nameServers')
emails = whois.get('emails')
registrar = {'Name': whois.get('registrarName')}
creation_date = first_queried
domain_status = whois.get('status')
updated_date = whois.get('updated')
expiration_date = whois.get('expires')
whois = {
'Name': whois.get('domainName'),
'Registrar Name': whois.get('registrarName'),
'Last Retrieved': whois.get('timeOfLatestRealtimeCheck'),
'Created': whois.get('created'),
'Updated': whois.get('updated'),
'Expires': whois.get('expires'),
'IANAID': whois.get('registrarIANAID'),
'Last Observed': whois.get('auditUpdatedDate')
}
domain_categorization = [] # type: ignore
domain_categorization = get_domain_categorization(domain)
content_categories = domain_categorization.get('content_categories') # type: ignore
malware_categories = domain_categorization.get('security_categories') # type: ignore
risk_score = domain_categorization.get('status') # type: ignore
domain_categorization_table = {
'Content Categories': content_categories,
'Malware Categories': malware_categories
}
domain_details = [] # type: ignore
domain_details = get_domain_details(domain)
popularity = domain_details.get('popularity') # type: ignore
secure_rank = domain_details.get('securerank2') # type: ignore
dbotscore = securerank_to_dbotscore(secure_rank)
context[outputPaths['domain']] = {
'Name': domain,
'Admin': admin,
'Registrant': registrant,
'Registrar': registrar,
'CreationDate': creation_date,
'DomainStatus': domain_status,
'UpdatedDate': updated_date,
'ExpirationDate': expiration_date,
'Umbrella': {
'RiskScore': risk_score,
'SecureRank': secure_rank,
'FirstQueriedTime': first_queried,
'ContentCategories': content_categories,
'MalwareCategories': malware_categories
}
}
# Add malicious if needed
if risk_score == -1 or (secure_rank and secure_rank < MALICIOUS_THRESHOLD):
context[outputPaths['domain']]['Malicious'] = {
'Vendor': 'Cisco Umbrella Investigate',
'Description': 'Malicious domain found with risk score -1'
}
dbotscore = 3
context[outputPaths['dbotscore']] = {
'Indicator': domain,
'Type': 'domain',
'Vendor': 'Cisco Umbrella Investigate',
'Score': dbotscore,
'Reliability': reliability
}
contents.append({
'Risk Score': risk_score,
'Secure Rank': secure_rank,
                'Popularity': popularity,
'Demisto Reputation': scoreToReputation(dbotscore),
'First Queried time': first_queried,
})
# Domain reputation + [whois -> whois nameservers -> whois emails] + domain categorization
readable_domain_reputation = tableToMarkdown('"Umbrella Investigate" Domain Reputation for: ' + domain,
contents, headers)
readable_whois = tableToMarkdown('"Umbrella Investigate" WHOIS Record Data for: ' + domain, whois, headers,
date_fields=["Last Retrieved"])
readable_name_servers = tableToMarkdown('Name Servers:', {'Name Servers': name_servers}, headers)
readable_emails = tableToMarkdown('Emails:', emails, ['Emails'])
readable_domain = tableToMarkdown('Domain Categorization:', domain_categorization_table, headers)
readable = readable_domain_reputation + readable_whois + readable_name_servers + readable_emails + readable_domain
results.append(CommandResults(
readable_output=readable,
entry_type=entryTypes['note'],
content_format=formats['json'],
outputs=context,
raw_response=[contents, whois, name_servers, emails, domain_categorization_table]
))
execution_metrics.success += 1
except RequestException as r:
if r.response.status_code == 429:
execution_metrics.quota_error += 1
results.append(
CommandResults(
readable_output=f"Quota exceeded.",
entry_type=entryTypes['note'],
content_format=formats['json'],
outputs=context,
raw_response=contents
))
continue
execution_metrics.general_error += 1
if r.response.status_code == 404:
human_readable = tableToMarkdown(name='Cisco Umbrella Investigate:',
t={'DOMAIN': domain, 'Result': 'Not found'},
headers=['DOMAIN', 'Result'])
context[outputPaths['domain']] = {'Name': domain}
context[outputPaths['dbotscore']] = {'Indicator': domain,
'Type': 'domain',
'Vendor': 'Cisco Umbrella Investigate',
'Score': 0,
'Message': 'No results found',
'Reliability': reliability}
results.append(
CommandResults(
entry_type=entryTypes['note'],
content_format=formats['json'],
readable_output=human_readable,
outputs=context,
raw_response=contents
))
else:
if execution_metrics.metrics is not None and execution_metrics.is_supported():
results.append(execution_metrics.metrics)
return_results(results)
return_error(r.response.text)
if execution_metrics.metrics is not None and execution_metrics.is_supported():
results.append(execution_metrics.metrics)
return results
|
def get_domain_command():
results = []
execution_metrics = ExecutionMetrics()
domains_list = argToList(demisto.args()['domain'])
for domain in domains_list:
contents = []
context = {}
headers = [] # type: ignore
domain = extract_domain_name(domain)
try:
whois = get_whois_for_domain(domain)
admin = {
'Country': whois.get('administrativeContactCountry'),
'Email': whois.get('administrativeContactEmail'),
'Name': whois.get('administrativeContactName'),
'Phone': whois.get('administrativeContactTelephone')
}
registrant = {
'Country': whois.get('registrantCountry'),
'Email': whois.get('registrantEmail'),
'Name': whois.get('registrantName'),
'Phone': whois.get('registrantTelephone')
}
first_queried = whois.get('created')
name_servers = whois.get('nameServers')
emails = whois.get('emails')
registrar = {'Name': whois.get('registrarName')}
creation_date = first_queried
domain_status = whois.get('status')
updated_date = whois.get('updated')
expiration_date = whois.get('expires')
whois = {
'Name': whois.get('domainName'),
'Registrar Name': whois.get('registrarName'),
'Last Retrieved': whois.get('timeOfLatestRealtimeCheck'),
'Created': whois.get('created'),
'Updated': whois.get('updated'),
'Expires': whois.get('expires'),
'IANAID': whois.get('registrarIANAID'),
'Last Observed': whois.get('auditUpdatedDate')
}
domain_categorization = [] # type: ignore
domain_categorization = get_domain_categorization(domain)
content_categories = domain_categorization.get('content_categories') # type: ignore
malware_categories = domain_categorization.get('security_categories') # type: ignore
risk_score = domain_categorization.get('status') # type: ignore
domain_categorization_table = {
'Content Categories': content_categories,
'Malware Categories': malware_categories
}
domain_details = [] # type: ignore
domain_details = get_domain_details(domain)
popularity = domain_details.get('popularity') # type: ignore
secure_rank = domain_details.get('securerank2') # type: ignore
dbotscore = securerank_to_dbotscore(secure_rank)
context[outputPaths['domain']] = {
'Name': domain,
'Admin': admin,
'Registrant': registrant,
'Registrar': registrar,
'CreationDate': creation_date,
'DomainStatus': domain_status,
'UpdatedDate': updated_date,
'ExpirationDate': expiration_date,
'Umbrella': {
'RiskScore': risk_score,
'SecureRank': secure_rank,
'FirstQueriedTime': first_queried,
'ContentCategories': content_categories,
'MalwareCategories': malware_categories
}
}
# Add malicious if needed
if risk_score == -1 or (secure_rank and secure_rank < MALICIOUS_THRESHOLD):
context[outputPaths['domain']]['Malicious'] = {
'Vendor': 'Cisco Umbrella Investigate',
'Description': 'Malicious domain found with risk score -1'
}
dbotscore = 3
context[outputPaths['dbotscore']] = {
'Indicator': domain,
'Type': 'domain',
'Vendor': 'Cisco Umbrella Investigate',
'Score': dbotscore,
'Reliability': reliability
}
contents.append({
'Risk Score': risk_score,
'Secure Rank': secure_rank,
                'Popularity': popularity,
'Demisto Reputation': scoreToReputation(dbotscore),
'First Queried time': first_queried,
})
# Domain reputation + [whois -> whois nameservers -> whois emails] + domain categorization
readable_domain_reputation = tableToMarkdown('"Umbrella Investigate" Domain Reputation for: ' + domain,
contents, headers)
readable_whois = tableToMarkdown('"Umbrella Investigate" WHOIS Record Data for: ' + domain, whois, headers,
date_fields=["Last Retrieved"])
readable_name_servers = tableToMarkdown('Name Servers:', {'Name Servers': name_servers}, headers)
readable_emails = tableToMarkdown('Emails:', emails, ['Emails'])
readable_domain = tableToMarkdown('Domain Categorization:', domain_categorization_table, headers)
readable = readable_domain_reputation + readable_whois + readable_name_servers + readable_emails + readable_domain
results.append(CommandResults(
readable_output=readable,
outputs=context,
raw_response=[contents, whois, name_servers, emails, domain_categorization_table]
))
execution_metrics.success += 1
except RequestException as r:
if r.response.status_code == 429:
execution_metrics.quota_error += 1
results.append(
CommandResults(
readable_output=f"Quota exceeded.",
entry_type=entryTypes['note'],
content_format=formats['json'],
outputs=context,
raw_response=contents
))
continue
execution_metrics.general_error += 1
if r.response.status_code == 404:
human_readable = tableToMarkdown(name='Cisco Umbrella Investigate:',
t={'DOMAIN': domain, 'Result': 'Not found'},
headers=['DOMAIN', 'Result'])
context[outputPaths['domain']] = {'Name': domain}
context[outputPaths['dbotscore']] = {'Indicator': domain,
'Type': 'domain',
'Vendor': 'Cisco Umbrella Investigate',
'Score': 0,
'Message': 'No results found',
'Reliability': reliability}
results.append(
CommandResults(
entry_type=entryTypes['note'],
content_format=formats['json'],
readable_output=human_readable,
outputs=context,
raw_response=contents
))
else:
if execution_metrics.metrics is not None and execution_metrics.is_supported():
results.append(execution_metrics.metrics)
return_results(results)
return_error(r.response.text)
if execution_metrics.metrics is not None and execution_metrics.is_supported():
results.append(execution_metrics.metrics)
return results
|
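A minimal sketch of the malicious-verdict rule used above; MALICIOUS_THRESHOLD is defined elsewhere in the integration, so the value below is only an assumption for illustration:
MALICIOUS_THRESHOLD = -50  # assumed value, not taken from the integration

def is_malicious(risk_score, secure_rank):
    # Mirrors the condition in get_domain_command above
    return risk_score == -1 or bool(secure_rank and secure_rank < MALICIOUS_THRESHOLD)

print(is_malicious(-1, 10))   # True: Umbrella flags the domain outright
print(is_malicious(0, -80))   # True: secure rank below the threshold
print(is_malicious(0, 40))    # False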
52,751 |
def _prepare_rec(spec, ignorenets, neverignore):
# First of all, let's see if we are supposed to ignore this spec,
# and if so, do so.
if 'addr' in spec and \
spec.get('source') not in neverignore.get(spec['recontype'], []):
for start, stop in ignorenets.get(spec['recontype'], ()):
if start <= utils.force_ip2int(spec['addr']) <= stop:
return None
# Then, let's clean up the records.
# Change Symantec's random user agents (matching SYMANTEC_UA) to
# the constant string 'SymantecRandomUserAgent'.
if spec['recontype'] == 'HTTP_CLIENT_HEADER' and \
spec.get('source') == 'USER-AGENT':
if SYMANTEC_UA.match(spec['value']):
spec['value'] = 'SymantecRandomUserAgent'
elif KASPERSKY_UA.match(spec['value']):
spec['value'] = 'KasperskyWeirdUserAgent'
else:
match = SYMANTEC_SEP_UA.match(spec['value'])
if match is not None:
spec['value'] = '%s%s' % match.groups()
# Change any Digest authorization header to remove non-constant
    # information. On one hand we lose the necessary information to
# try to recover the passwords, but on the other hand we store
# specs with different challenges but the same username, realm,
# host and sensor in the same records.
elif (
spec['recontype'] in {'HTTP_CLIENT_HEADER',
'HTTP_CLIENT_HEADER_SERVER'} and
spec.get('source') in {'AUTHORIZATION', 'PROXY-AUTHORIZATION'}
):
value = spec['value']
if value:
authtype = value.split(None, 1)[0]
if authtype.lower() == 'digest':
try:
# we only keep relevant info
spec['value'] = '%s %s' % (authtype, ','.join(
val for val in
_split_digest_auth(value[6:].strip())
if DIGEST_AUTH_INFOS.match(val)
))
except Exception:
utils.LOGGER.warning("Cannot parse digest error for %r",
spec, exc_info=True)
elif ntlm._is_ntlm_message(value):
# NTLM_NEGOTIATE and NTLM_AUTHENTICATE
auth = utils.decode_b64(value.split(' ', 1)[1].encode())
spec['value'] = "NTLM %s" % \
ntlm._ntlm_dict2string(ntlm.ntlm_extract_info(auth))
elif authtype.lower() in {'negotiate', 'kerberos', 'oauth'}:
spec['value'] = authtype
elif (
spec['recontype'] == 'HTTP_SERVER_HEADER' and
spec.get('source') in {'WWW-AUTHENTICATE', 'PROXY-AUTHENTICATE'}
):
value = spec['value']
if value:
authtype = value.split(None, 1)[0]
if authtype.lower() == 'digest':
try:
# we only keep relevant info
spec['value'] = '%s %s' % (authtype, ','.join(
val for val in
_split_digest_auth(value[6:].strip())
if DIGEST_AUTH_INFOS.match(val)
))
except Exception:
utils.LOGGER.warning("Cannot parse digest error for %r",
spec, exc_info=True)
elif ntlm._is_ntlm_message(value):
# NTLM_CHALLENGE
auth = utils.decode_b64(value.split(' ', 1)[1].encode())
spec['value'] = "NTLM %s" % \
ntlm._ntlm_dict2string(ntlm.ntlm_extract_info(auth))
elif authtype.lower() in {'negotiate', 'kerberos', 'oauth'}:
spec['value'] = authtype
# TCP server banners: try to normalize data
elif spec['recontype'] == 'TCP_SERVER_BANNER':
newvalue = value = utils.nmap_decode_data(spec['value'])
for pattern, replace in TCP_SERVER_PATTERNS:
if pattern.search(newvalue):
newvalue = pattern.sub(replace, newvalue)
if newvalue != value:
spec['value'] = utils.nmap_encode_data(newvalue)
# SSL_{CLIENT,SERVER} JA3
elif ((spec['recontype'] == 'SSL_CLIENT' and spec['source'] == 'ja3') or
(spec['recontype'] == 'SSL_SERVER' and
spec['source'].startswith('ja3-'))):
value = spec['value']
spec.setdefault('infos', {})['raw'] = value
spec['value'] = hashlib.new("md5", value.encode()).hexdigest()
if spec['recontype'] == 'SSL_SERVER':
clientvalue = spec['source'][4:]
spec['infos'].setdefault('client', {})['raw'] = clientvalue
spec['source'] = 'ja3-%s' % hashlib.new(
"md5",
clientvalue.encode(),
).hexdigest()
# SSH_{CLIENT,SERVER}_HASSH
elif spec['recontype'] in ['SSH_CLIENT_HASSH', 'SSH_SERVER_HASSH']:
value = spec['value']
spec.setdefault('infos', {})['raw'] = value
spec['value'] = hashlib.new("md5", value.encode()).hexdigest()
# Check DNS Blacklist answer
elif spec['recontype'] == 'DNS_ANSWER':
if any((spec.get('value') or "").endswith(dnsbl)
for dnsbl in config.DNS_BLACKLIST_DOMAINS):
dnsbl_val = spec['value']
match = DNSBL_START.search(dnsbl_val)
if match is not None:
spec['recontype'] = 'DNS_BLACKLIST'
spec['value'] = spec.get('addr')
spec.update({'source': "%s-%s" %
(dnsbl_val[match.end():], spec['source'])})
addr = match.group()
# IPv4
if addr.count('.') == 4:
spec['addr'] = '.'.join(addr.split('.')[3::-1])
# IPv6
else:
spec['addr'] = utils.int2ip6(int(addr
.replace('.', '')[::-1],
16))
return spec
|
def _prepare_rec(spec, ignorenets, neverignore):
# First of all, let's see if we are supposed to ignore this spec,
# and if so, do so.
if 'addr' in spec and \
spec.get('source') not in neverignore.get(spec['recontype'], []):
for start, stop in ignorenets.get(spec['recontype'], ()):
if start <= utils.force_ip2int(spec['addr']) <= stop:
return None
# Then, let's clean up the records.
# Change Symantec's random user agents (matching SYMANTEC_UA) to
# the constant string 'SymantecRandomUserAgent'.
if spec['recontype'] == 'HTTP_CLIENT_HEADER' and \
spec.get('source') == 'USER-AGENT':
if SYMANTEC_UA.match(spec['value']):
spec['value'] = 'SymantecRandomUserAgent'
elif KASPERSKY_UA.match(spec['value']):
spec['value'] = 'KasperskyWeirdUserAgent'
else:
match = SYMANTEC_SEP_UA.match(spec['value'])
if match is not None:
spec['value'] = '%s%s' % match.groups()
# Change any Digest authorization header to remove non-constant
    # information. On one hand we lose the necessary information to
# try to recover the passwords, but on the other hand we store
# specs with different challenges but the same username, realm,
# host and sensor in the same records.
elif (
spec['recontype'] in {'HTTP_CLIENT_HEADER',
'HTTP_CLIENT_HEADER_SERVER'} and
spec.get('source') in {'AUTHORIZATION', 'PROXY-AUTHORIZATION'}
):
value = spec['value']
if value:
authtype = value.split(None, 1)[0]
if authtype.lower() == 'digest':
try:
# we only keep relevant info
spec['value'] = '%s %s' % (authtype, ','.join(
val for val in
_split_digest_auth(value[6:].strip())
if DIGEST_AUTH_INFOS.match(val)
))
except Exception:
utils.LOGGER.warning("Cannot parse digest error for %r",
spec, exc_info=True)
elif ntlm._is_ntlm_message(value):
# NTLM_NEGOTIATE and NTLM_AUTHENTICATE
auth = utils.decode_b64(value.split(' ', 1)[1].encode())
spec['value'] = "NTLM %s" % \
ntlm._ntlm_dict2string(ntlm.ntlm_extract_info(auth))
elif authtype.lower() in {'negotiate', 'kerberos', 'oauth'}:
spec['value'] = authtype
elif (
spec['recontype'] == 'HTTP_SERVER_HEADER' and
spec.get('source') in {'WWW-AUTHENTICATE', 'PROXY-AUTHENTICATE'}
):
value = spec['value']
if value:
authtype = value.split(None, 1)[0]
if authtype.lower() == 'digest':
try:
# we only keep relevant info
spec['value'] = '%s %s' % (authtype, ','.join(
val for val in
_split_digest_auth(value[6:].strip())
if DIGEST_AUTH_INFOS.match(val)
))
except Exception:
utils.LOGGER.warning("Cannot parse digest error for %r",
spec, exc_info=True)
elif ntlm._is_ntlm_message(value):
# NTLM_CHALLENGE
try:
auth = utils.decode_b64(value.split(None, 1)[1].encode())
except (UnicodeDecodeError, TypeError, ValueError):
pass
else:
spec['value'] = "NTLM %s" % \
ntlm._ntlm_dict2string(ntlm.ntlm_extract_info(auth))
elif authtype.lower() in {'negotiate', 'kerberos', 'oauth'}:
spec['value'] = authtype
# TCP server banners: try to normalize data
elif spec['recontype'] == 'TCP_SERVER_BANNER':
newvalue = value = utils.nmap_decode_data(spec['value'])
for pattern, replace in TCP_SERVER_PATTERNS:
if pattern.search(newvalue):
newvalue = pattern.sub(replace, newvalue)
if newvalue != value:
spec['value'] = utils.nmap_encode_data(newvalue)
# SSL_{CLIENT,SERVER} JA3
elif ((spec['recontype'] == 'SSL_CLIENT' and spec['source'] == 'ja3') or
(spec['recontype'] == 'SSL_SERVER' and
spec['source'].startswith('ja3-'))):
value = spec['value']
spec.setdefault('infos', {})['raw'] = value
spec['value'] = hashlib.new("md5", value.encode()).hexdigest()
if spec['recontype'] == 'SSL_SERVER':
clientvalue = spec['source'][4:]
spec['infos'].setdefault('client', {})['raw'] = clientvalue
spec['source'] = 'ja3-%s' % hashlib.new(
"md5",
clientvalue.encode(),
).hexdigest()
# SSH_{CLIENT,SERVER}_HASSH
elif spec['recontype'] in ['SSH_CLIENT_HASSH', 'SSH_SERVER_HASSH']:
value = spec['value']
spec.setdefault('infos', {})['raw'] = value
spec['value'] = hashlib.new("md5", value.encode()).hexdigest()
# Check DNS Blacklist answer
elif spec['recontype'] == 'DNS_ANSWER':
if any((spec.get('value') or "").endswith(dnsbl)
for dnsbl in config.DNS_BLACKLIST_DOMAINS):
dnsbl_val = spec['value']
match = DNSBL_START.search(dnsbl_val)
if match is not None:
spec['recontype'] = 'DNS_BLACKLIST'
spec['value'] = spec.get('addr')
spec.update({'source': "%s-%s" %
(dnsbl_val[match.end():], spec['source'])})
addr = match.group()
# IPv4
if addr.count('.') == 4:
spec['addr'] = '.'.join(addr.split('.')[3::-1])
# IPv6
else:
spec['addr'] = utils.int2ip6(int(addr
.replace('.', '')[::-1],
16))
return spec
|
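JA3 and HASSH values are stored above as MD5 digests of the raw fingerprint string; a tiny standalone illustration (the fingerprint string itself is made up):
import hashlib

raw_ja3 = "771,4865-4866-4867,0-23-65281-10-11,29-23-24,0"  # invented JA3-style string
digest = hashlib.new("md5", raw_ja3.encode()).hexdigest()
print(digest)  # 32-character hex digest, as stored in spec['value']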
48,445 |
def main():
ORIGINAL_FILE = 'requirements.txt'
VENDORED_COPY = 'test/lib/ansible_test/_data/requirements/sanity.import-plugins.txt'
requirements_1 = read_file(ORIGINAL_FILE)
requirements_2 = read_file(VENDORED_COPY)
if requirements_1 is not None and requirements_2 is not None:
if requirements_1 != requirements_2:
print('%s:%d:%d: Not identical to %s' % (VENDORED_COPY, 0, 0, ORIGINAL_FILE))
sys.exit()
|
def main():
ORIGINAL_FILE = 'requirements.txt'
VENDORED_COPY = 'test/lib/ansible_test/_data/requirements/sanity.import-plugins.txt'
original_requirements = read_file(ORIGINAL_FILE)
vendored_requirements = read_file(VENDORED_COPY)
if original_requirements is not None and vendored_requirements is not None:
if original_requirements != vendored_requirements:
print('%s:%d:%d: Not identical to %s' % (VENDORED_COPY, 0, 0, ORIGINAL_FILE))
sys.exit()
|
7,610 |
def _old_version_workarounds(version):
"""
Function to keep track of workarounds for older ECSV versions.
Parameters
----------
version : tuple
tuple of 1-3 strings, representing the ECSV major,
minor, or bugfix version if not None
Returns
-------
string: str
Text to match before implementing workaround.
"""
# Workaround for ECSV <= V0.9 backwards incompatibility of
# non-standard dtypes.
if int(version[0]) == 0 and int(version[1]) <= 9:
return "Disable strict dtype checking for V0.9"
elif int(version[0]) > 0: # Example to show other implementations
return "No known workarounds"
else:
return "No known workarounds"
|
def _old_version_workarounds(version):
"""
Function to keep track of workarounds for older ECSV versions.
Parameters
----------
version : tuple
tuple of 1-3 strings, representing the ECSV major,
minor, or bugfix version if not None
Returns
-------
string : str
Text to match before implementing workaround.
"""
# Workaround for ECSV <= V0.9 backwards incompatibility of
# non-standard dtypes.
if int(version[0]) == 0 and int(version[1]) <= 9:
return "Disable strict dtype checking for V0.9"
elif int(version[0]) > 0: # Example to show other implementations
return "No known workarounds"
else:
return "No known workarounds"
|
56,386 |
def _backprop_to_all(outputs, retain_grad, loss_scale):
"""Backprop to all input variables
Args:
outputs (list of tuple): each tuple is (y_node, y_grad_var).
y_grad_var should not be None.
retain_grad (bool): see docstring of Variable.backward
loss_scale (float): see docstring of Variable.backward
"""
OrderedDict = chainer.utils._collections.OrderedDict # fix py2 memory leak
cand_funcs = []
seen_set = set()
def add_cand(cand):
if cand not in seen_set:
# Negate since heapq is min-heap
heapq.heappush(cand_funcs, (-cand.rank, len(seen_set), cand))
seen_set.add(cand)
grads = _backprop_utils.GradTable(accumulate_grad_inputs=True)
leaf_nodes = set()
for y, gy in outputs:
grads.accumulate(y, gy)
func = y.creator_node
if func is None: # leaf
leaf_nodes.add(y)
else:
add_cand(func)
# Fix F812 (Python 2)
y = None
del y
is_debug = chainer.is_debug()
base_hooks = chainer.get_function_hooks().values()
while cand_funcs:
_, _, func = heapq.heappop(cand_funcs)
inputs = func.inputs
target_input_indexes = tuple([
i for i, x in enumerate(inputs) if x.requires_grad
])
outputs = [y() for y in func.outputs] # access via weak ref
out_grad = tuple([grads.pop(y) if y.creator_node else None
for y in outputs])
if not target_input_indexes:
continue
in_data = [x.data for x in inputs]
out_grad_array = [None if g is None else g.array for g in out_grad]
if func._n_local_function_hooks != 0:
local_hooks = collections.OrderedDict(chainer.get_function_hooks())
local_hooks.update(func.local_function_hooks)
hooks = local_hooks.values() # avoid six for performance
else:
hooks = base_hooks
with cuda.get_device_from_array(*(in_data + out_grad_array)):
for hook in hooks:
hook.backward_preprocess(
func, tuple(in_data), tuple(out_grad_array))
# Collect the current input gradients.
target_inputs = [inputs[i] for i in target_input_indexes]
# Keep the order for the portability, rather than
# in_grad = {x: grads.get_as_list(x)
# for x in set(target_inputs)}
in_grad = OrderedDict()
for x in target_inputs:
if x not in in_grad:
in_grad[x] = grads.get_as_list(x)
_backprop_utils.backprop_step(
func, target_input_indexes, out_grad, in_grad, is_debug)
for hook in hooks:
hook.backward_postprocess(
func, tuple(in_data), tuple(out_grad_array))
if retain_grad:
# The gradients of the outputs of `func` are final. Store them if
# retain_grad=True.
for y, gy in six.moves.zip(outputs, out_grad):
if y is not None:
y._set_grad_var_if_available(gy)
del gy # to reduce memory usage
del out_grad # to reduce memory usage
for x, gx in in_grad.items():
if not gx: # gradient == None
continue
for gx_elem in gx:
if gx_elem is not None:
_check_grad_type(func, x, True, gx_elem.array)
del gx_elem # to reduce memory usage
if x.creator_node is None: # leaf
leaf_nodes.add(x)
else:
add_cand(x.creator_node)
del gx, in_grad # to reduce memory usage
for x in leaf_nodes:
x_var = x.get_variable_or_none()
gx = grads.pop(x)
if x_var is not None:
x_var._set_grad_var_without_check(gx)
x_var._loss_scale = loss_scale
grads.assert_no_grads()
|
def _backprop_to_all(outputs, retain_grad, loss_scale):
"""Backprop to all input variables
Args:
outputs (list of tuple): each tuple is (y_node, y_grad_var).
y_grad_var should not be None.
retain_grad (bool): see docstring of Variable.backward
loss_scale (float): see docstring of Variable.backward
"""
OrderedDict = chainer.utils._collections.OrderedDict # fix py2 memory leak
cand_funcs = []
seen_set = set()
def add_cand(cand):
if cand not in seen_set:
# Negate since heapq is min-heap
heapq.heappush(cand_funcs, (-cand.rank, len(seen_set), cand))
seen_set.add(cand)
grads = _backprop_utils.GradTable(accumulate_grad_inputs=True)
leaf_nodes = set()
for y, gy in outputs:
grads.accumulate(y, gy)
func = y.creator_node
if func is None: # leaf
leaf_nodes.add(y)
else:
add_cand(func)
# Fix F812 (Python 2)
y = None
del y
is_debug = chainer.is_debug()
base_hooks = chainer.get_function_hooks().values()
while cand_funcs:
_, _, func = heapq.heappop(cand_funcs)
inputs = func.inputs
target_input_indexes = tuple([
i for i, x in enumerate(inputs) if x.requires_grad
])
outputs = [y() for y in func.outputs] # access via weak ref
out_grad = tuple([grads.pop(y)
if y is not None and y.creator_node is not None
else None
for y in outputs])
if not target_input_indexes:
continue
in_data = [x.data for x in inputs]
out_grad_array = [None if g is None else g.array for g in out_grad]
if func._n_local_function_hooks != 0:
local_hooks = collections.OrderedDict(chainer.get_function_hooks())
local_hooks.update(func.local_function_hooks)
hooks = local_hooks.values() # avoid six for performance
else:
hooks = base_hooks
with cuda.get_device_from_array(*(in_data + out_grad_array)):
for hook in hooks:
hook.backward_preprocess(
func, tuple(in_data), tuple(out_grad_array))
# Collect the current input gradients.
target_inputs = [inputs[i] for i in target_input_indexes]
# Keep the order for the portability, rather than
# in_grad = {x: grads.get_as_list(x)
# for x in set(target_inputs)}
in_grad = OrderedDict()
for x in target_inputs:
if x not in in_grad:
in_grad[x] = grads.get_as_list(x)
_backprop_utils.backprop_step(
func, target_input_indexes, out_grad, in_grad, is_debug)
for hook in hooks:
hook.backward_postprocess(
func, tuple(in_data), tuple(out_grad_array))
if retain_grad:
# The gradients of the outputs of `func` are final. Store them if
# retain_grad=True.
for y, gy in six.moves.zip(outputs, out_grad):
if y is not None:
y._set_grad_var_if_available(gy)
del gy # to reduce memory usage
del out_grad # to reduce memory usage
for x, gx in in_grad.items():
if not gx: # gradient == None
continue
for gx_elem in gx:
if gx_elem is not None:
_check_grad_type(func, x, True, gx_elem.array)
del gx_elem # to reduce memory usage
if x.creator_node is None: # leaf
leaf_nodes.add(x)
else:
add_cand(x.creator_node)
del gx, in_grad # to reduce memory usage
for x in leaf_nodes:
x_var = x.get_variable_or_none()
gx = grads.pop(x)
if x_var is not None:
x_var._set_grad_var_without_check(gx)
x_var._loss_scale = loss_scale
grads.assert_no_grads()
|
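The negated-rank trick above turns Python's min-heap into a max-heap over function ranks, with an insertion counter as a stable tie-break; a tiny standalone demo with toy ranks (no chainer objects involved):
import heapq

cand_funcs = []
for counter, (name, rank) in enumerate([("relu", 2), ("linear", 1), ("loss", 3)]):
    # Negate the rank so the highest-rank node pops first; the counter breaks ties
    heapq.heappush(cand_funcs, (-rank, counter, name))
while cand_funcs:
    _, _, name = heapq.heappop(cand_funcs)
    print(name)  # prints: loss, relu, linear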
43,705 |
def observable(me_tables, init_term=0, mapping="jordan_wigner", wires=None):
r"""Builds the many-body observable whose expectation value can be
measured in PennyLane.
This function can be used to build second-quantized operators in the basis
of single-particle states (e.g., HF states) and to transform them into
PennyLane observables. In general, the many-body observable :math:`\hat{O}` can combine
one-particle and two-particle operators as it is the case for electronic Hamiltonians
.. math::
\hat{O} = \sum_{\alpha, \beta} \langle \alpha \vert \hat{t}^{(1)} + \hat{t}^{(2)}
\cdots + \hat{t}^{(n)} \vert \beta \rangle ~ \hat{c}_\alpha^\dagger \hat{c}_\beta
+ \frac{1}{2} \sum_{\alpha, \beta, \gamma, \delta}
\langle \alpha, \beta \vert \hat{v}^{(1)} + \hat{v}^{(2)} \cdots + \hat{v}^{(n)}
\vert \gamma, \delta \rangle ~ \hat{c}_\alpha^\dagger \hat{c}_\beta^\dagger
\hat{c}_\gamma \hat{c}_\delta.
In the latter equations the indices :math:`\alpha, \beta, \gamma, \delta` run over the
basis of single-particle states. The operators :math:`\hat{c}^\dagger` and :math:`\hat{c}`
are the particle creation and annihilation operators, respectively.
:math:`\langle \alpha \vert \hat{t} \vert \beta \rangle` denotes the matrix element of
the single-particle operator :math:`\hat{t}` entering the observable. For example,
in electronic structure calculations this is the case for the kinetic energy operator,
the nuclei Coulomb potential or any other external fields included in the model Hamiltonian.
On the other hand, :math:`\langle \alpha, \beta \vert \hat{v} \vert \gamma, \delta \rangle`
denotes the matrix element of the two-particle operator :math:`\hat{v}`, for example, the
Coulomb interaction between the electrons.
If an `active space <https://en.wikipedia.org/wiki/Complete_active_space>`_ is defined the
observable is expanded over the truncated basis of active orbitals. The contribution of
core orbitals, if any, can be passed to the function using the keyword argument ``init_term``.
The function utilizes tools of `OpenFermion <https://github.com/quantumlib/OpenFermion>`_
    to build the second-quantized operator and map it to the basis of Pauli matrices via the
    Jordan-Wigner or Bravyi-Kitaev transformation. Finally, the qubit operator is
    converted to a PennyLane observable by the function :func:`~.convert_observable`.
Args:
me_tables (list(array[float])): list containing the tables of matrix elements
of the operators :math:`\hat{t}` and :math:`\hat{v}`.
For single-particle operators the :math:`ith` array in the list will have shape
``(me_tables[i].shape[0], 3)`` with each row containing the indices
:math:`\alpha`, :math:`\beta` and the matrix element
:math:`\langle \alpha \vert \hat{t}^{(i)}\vert \beta \rangle`.
For two-particle operators the :math:`jth` array in the list
will have shape ``(me_tables[j].shape[0], 5)`` with each row containing
the indices :math:`\alpha`, :math:`\beta`, :math:`\gamma`, :math:`\delta` and
the matrix element
:math:`\langle \alpha, \beta \vert \hat{v}^{(j)}\vert \gamma, \delta \rangle`.
init_term: the contribution of core orbitals, if any, or other quantity
required to initialize the many-body observable.
mapping (str): specifies the fermion-to-qubit mapping. Input values can
be ``'jordan_wigner'`` or ``'bravyi_kitaev'``.
wires (Wires, list, tuple, dict): Custom wire mapping used to convert the qubit operator
to an observable measurable in a PennyLane ansatz.
For types Wires/list/tuple, each item in the iterable represents a wire label
corresponding to the qubit number equal to its index.
For type dict, only int-keyed dict (for qubit-to-wire conversion) is accepted.
If None, will use identity map (e.g. 0->0, 1->1, ...).
Returns:
pennylane.Hamiltonian: the fermionic-to-qubit transformed observable
**Example**
>>> t = np.array([[0., 0., 0.5], [1.0, 1.0, -0.5], [1.0, 0., 0.]])
>>> v = np.array([[ 0., 0., 0., 0., 0.25], [ 0., 1., 1., 0., -0.25], [ 1., 0., 0., 1., -0.5]])
>>> me_tables = []
>>> me_tables.append(t)
>>> me_tables.append(v)
>>> print(observable(me_tables, init_term=1/4, mapping="bravyi_kitaev"))
(0.0625) [I0]
+ (-0.0625) [Z0]
+ (0.4375) [Z0 Z1]
+ (-0.1875) [Z1]
>>> print(observable(me_tables, init_term=1/4, mapping="bravyi_kitaev", wires=['w0','w1']))
(0.0625) [Iw0]
+ (-0.0625) [Zw0]
+ (0.4375) [Zw0 Zw1]
+ (-0.1875) [Zw1]
"""
if mapping.strip().lower() not in ("jordan_wigner", "bravyi_kitaev"):
raise TypeError(
"The '{}' transformation is not available. \n "
"Please set 'mapping' to 'jordan_wigner' or 'bravyi_kitaev'.".format(mapping)
)
sp_op_shape = (3,)
tp_op_shape = (5,)
for table in me_tables:
for row in table:
if np.array(row).shape not in (sp_op_shape, tp_op_shape):
raise ValueError(
"Expected entries of matrix element tables to be of shape (3,) or (5,); got {}".format(
np.array(row).shape
)
)
# Initialize the FermionOperator
mb_obs = FermionOperator() + FermionOperator("") * init_term
for table in me_tables:
for i in table:
if i.shape == (5,):
# two-particle operator
mb_obs += FermionOperator(
((int(i[0]), 1), (int(i[1]), 1), (int(i[2]), 0), (int(i[3]), 0)), i[4]
)
elif i.shape == (3,):
# single-particle operator
mb_obs += FermionOperator(((int(i[0]), 1), (int(i[1]), 0)), i[2])
# Map the fermionic operator to a qubit operator
if mapping.strip().lower() == "bravyi_kitaev":
return structure.convert_observable(bravyi_kitaev(mb_obs), wires=wires)
return structure.convert_observable(jordan_wigner(mb_obs), wires=wires)
|
def observable(me_tables, init_term=0, mapping="jordan_wigner", wires=None):
r"""Builds the many-body observable whose expectation value can be
measured in PennyLane.
This function can be used to build second-quantized operators in the basis
of single-particle states (e.g., HF states) and to transform them into
PennyLane observables. In general, the many-body observable :math:`\hat{O}` can combine
one-particle and two-particle operators as it is the case for electronic Hamiltonians
.. math::
\hat{O} = \sum_{\alpha, \beta} \langle \alpha \vert \hat{t}^{(1)} + \hat{t}^{(2)}
\cdots + \hat{t}^{(n)} \vert \beta \rangle ~ \hat{c}_\alpha^\dagger \hat{c}_\beta
+ \frac{1}{2} \sum_{\alpha, \beta, \gamma, \delta}
\langle \alpha, \beta \vert \hat{v}^{(1)} + \hat{v}^{(2)} \cdots + \hat{v}^{(n)}
\vert \gamma, \delta \rangle ~ \hat{c}_\alpha^\dagger \hat{c}_\beta^\dagger
\hat{c}_\gamma \hat{c}_\delta.
In the latter equations the indices :math:`\alpha, \beta, \gamma, \delta` run over the
basis of single-particle states. The operators :math:`\hat{c}^\dagger` and :math:`\hat{c}`
are the particle creation and annihilation operators, respectively.
:math:`\langle \alpha \vert \hat{t} \vert \beta \rangle` denotes the matrix element of
the single-particle operator :math:`\hat{t}` entering the observable. For example,
in electronic structure calculations this is the case for the kinetic energy operator,
the nuclei Coulomb potential or any other external fields included in the model Hamiltonian.
On the other hand, :math:`\langle \alpha, \beta \vert \hat{v} \vert \gamma, \delta \rangle`
denotes the matrix element of the two-particle operator :math:`\hat{v}`, for example, the
Coulomb interaction between the electrons.
If an `active space <https://en.wikipedia.org/wiki/Complete_active_space>`_ is defined the
observable is expanded over the truncated basis of active orbitals. The contribution of
core orbitals, if any, can be passed to the function using the keyword argument ``init_term``.
The function utilizes tools of `OpenFermion <https://github.com/quantumlib/OpenFermion>`_
to build the second-quantized operator and map it to the basis of Pauli matrices via the
Jordan-Wigner or Bravyi-Kitaev transformation. Finally, the qubit operator is
converted to a PennyLane observable by the function :func:`~.convert_observable`.
Args:
me_tables (list(array[float])): list containing the tables of matrix elements
of the operators :math:`\hat{t}` and :math:`\hat{v}`.
For single-particle operators the :math:`i`-th array in the list will have shape
``(me_tables[i].shape[0], 3)`` with each row containing the indices
:math:`\alpha`, :math:`\beta` and the matrix element
:math:`\langle \alpha \vert \hat{t}^{(i)}\vert \beta \rangle`.
For two-particle operators the :math:`j`-th array in the list
will have shape ``(me_tables[j].shape[0], 5)`` with each row containing
the indices :math:`\alpha`, :math:`\beta`, :math:`\gamma`, :math:`\delta` and
the matrix element
:math:`\langle \alpha, \beta \vert \hat{v}^{(j)}\vert \gamma, \delta \rangle`.
init_term (float): the contribution of core orbitals, if any, or other quantity
required to initialize the many-body observable.
mapping (str): specifies the fermion-to-qubit mapping. Input values can
be ``'jordan_wigner'`` or ``'bravyi_kitaev'``.
wires (Wires, list, tuple, dict): Custom wire mapping used to convert the qubit operator
to an observable measurable in a PennyLane ansatz.
For types Wires/list/tuple, each item in the iterable represents a wire label
corresponding to the qubit number equal to its index.
For type dict, only int-keyed dict (for qubit-to-wire conversion) is accepted.
If None, will use identity map (e.g. 0->0, 1->1, ...).
Returns:
pennylane.Hamiltonian: the fermionic-to-qubit transformed observable
**Example**
>>> t = np.array([[0., 0., 0.5], [1.0, 1.0, -0.5], [1.0, 0., 0.]])
>>> v = np.array([[ 0., 0., 0., 0., 0.25], [ 0., 1., 1., 0., -0.25], [ 1., 0., 0., 1., -0.5]])
>>> me_tables = []
>>> me_tables.append(t)
>>> me_tables.append(v)
>>> print(observable(me_tables, init_term=1/4, mapping="bravyi_kitaev"))
(0.0625) [I0]
+ (-0.0625) [Z0]
+ (0.4375) [Z0 Z1]
+ (-0.1875) [Z1]
>>> print(observable(me_tables, init_term=1/4, mapping="bravyi_kitaev", wires=['w0','w1']))
(0.0625) [Iw0]
+ (-0.0625) [Zw0]
+ (0.4375) [Zw0 Zw1]
+ (-0.1875) [Zw1]
"""
if mapping.strip().lower() not in ("jordan_wigner", "bravyi_kitaev"):
raise TypeError(
"The '{}' transformation is not available. \n "
"Please set 'mapping' to 'jordan_wigner' or 'bravyi_kitaev'.".format(mapping)
)
sp_op_shape = (3,)
tp_op_shape = (5,)
for table in me_tables:
for row in table:
if np.array(row).shape not in (sp_op_shape, tp_op_shape):
raise ValueError(
"Expected entries of matrix element tables to be of shape (3,) or (5,); got {}".format(
np.array(row).shape
)
)
# Initialize the FermionOperator
mb_obs = FermionOperator() + FermionOperator("") * init_term
for table in me_tables:
for i in table:
if i.shape == (5,):
# two-particle operator
mb_obs += FermionOperator(
((int(i[0]), 1), (int(i[1]), 1), (int(i[2]), 0), (int(i[3]), 0)), i[4]
)
elif i.shape == (3,):
# single-particle operator
mb_obs += FermionOperator(((int(i[0]), 1), (int(i[1]), 0)), i[2])
# Map the fermionic operator to a qubit operator
if mapping.strip().lower() == "bravyi_kitaev":
return structure.convert_observable(bravyi_kitaev(mb_obs), wires=wires)
return structure.convert_observable(jordan_wigner(mb_obs), wires=wires)
|
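To make the row-to-term mapping in the loops above concrete, here is a minimal, dependency-free sketch: it does not use OpenFermion, and only mimics the ((index, 1) = creation, (index, 0) = annihilation) tuple layout that FermionOperator expects, applied to the docstring's example tables.

# Minimal sketch: convert matrix-element rows into (term, coefficient) pairs,
# mirroring the loops in `observable` above. No OpenFermion required.
import numpy as np

t = np.array([[0., 0., 0.5], [1., 1., -0.5], [1., 0., 0.]])
v = np.array([[0., 0., 0., 0., 0.25], [0., 1., 1., 0., -0.25], [1., 0., 0., 1., -0.5]])

terms = [((), 0.25)]  # identity term carrying init_term = 1/4
for table in (t, v):
    for row in table:
        if row.shape == (3,):    # one-body term: c^dagger_alpha c_beta
            terms.append((((int(row[0]), 1), (int(row[1]), 0)), row[2]))
        elif row.shape == (5,):  # two-body term: c^dagger_a c^dagger_b c_g c_d
            terms.append((((int(row[0]), 1), (int(row[1]), 1),
                           (int(row[2]), 0), (int(row[3]), 0)), row[4]))

for term, coeff in terms:
    print(term, coeff)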
31,567 |
def get_logpoints_command(client):
result = client.get_logpoints()
if not result.get('success'):
raise DemistoException(result['message'])
table_header = []
display_title = "LogPoints"
allowed_loginspects = result.get('allowed_loginspects')
if allowed_loginspects and len(allowed_loginspects) > 0:
table_header = list(allowed_loginspects[0].keys())
markdown = tableToMarkdown(display_title, allowed_loginspects, headers=table_header)
return CommandResults(
readable_output=markdown,
outputs_prefix='LogPoint.LogPoints',
outputs_key_field='ip',
outputs=allowed_loginspects
)
|
def get_logpoints_command(client):
result = client.get_logpoints()
if not result.get('success'):
raise DemistoException(result.get('message'))
table_header = []
display_title = "LogPoints"
allowed_loginspects = result.get('allowed_loginspects')
if allowed_loginspects and len(allowed_loginspects) > 0:
table_header = list(allowed_loginspects[0].keys())
markdown = tableToMarkdown(display_title, allowed_loginspects, headers=table_header)
return CommandResults(
readable_output=markdown,
outputs_prefix='LogPoint.LogPoints',
outputs_key_field='ip',
outputs=allowed_loginspects
)
|
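The command above derives its table header from the keys of the first returned record and hands the list to tableToMarkdown. A rough, self-contained sketch of that rendering step (the helper below is illustrative and only assumes tableToMarkdown behaves roughly like this; it is not the Demisto implementation):

# Build a markdown table whose headers come from the first record's keys.
def to_markdown(title, records):
    if not records:
        return f"## {title}\n\nNo entries."
    headers = list(records[0].keys())
    lines = [f"## {title}",
             "|" + "|".join(headers) + "|",
             "|" + "|".join("---" for _ in headers) + "|"]
    for rec in records:
        lines.append("|" + "|".join(str(rec.get(h, "")) for h in headers) + "|")
    return "\n".join(lines)

print(to_markdown("LogPoints", [{"name": "lp1", "ip": "10.0.0.1"},
                                {"name": "lp2", "ip": "10.0.0.2"}]))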
49,665 |
def set_run_validators(run):
"""
Set whether or not validators are run. By default, they are run.
.. deprecated:: 21.3.0 will not be moved to new ``attrs`` namespace.
Use :func:`attr.validators.set_disabled()` instead.
"""
if not isinstance(run, bool):
raise TypeError("'run' must be bool.")
global _run_validators
_run_validators = run
|
def set_run_validators(run):
"""
Set whether or not validators are run. By default, they are run.
.. deprecated:: 21.3.0 It will not be removed, but it also will not be
moved to new ``attrs`` namespace. Use `attr.validators.set_disabled()`
instead.
"""
if not isinstance(run, bool):
raise TypeError("'run' must be bool.")
global _run_validators
_run_validators = run
|
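For context, a minimal standalone sketch of the module-level toggle pattern this function implements (an illustration only, not the attrs source): validators consult a global flag before doing any work.

_run_validators = True

def set_run_validators(run):
    if not isinstance(run, bool):
        raise TypeError("'run' must be bool.")
    global _run_validators
    _run_validators = run

def check_positive(value):
    # A toy validator that honours the global switch.
    if _run_validators and value <= 0:
        raise ValueError("value must be positive")

set_run_validators(False)
check_positive(-3)   # no error: validation is disabled
set_run_validators(True)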
31,680 |
def copy_notes_to_target_incident(args: Dict[str, Any]) -> CommandResults:
target_incident = args.get('target_incident', None)
if not target_incident:
raise ValueError('Target Incident ID not specified')
tags = argToList(args.get('tags'))
entries = demisto.executeCommand('getEntries', {'filter': {'tags': tags}})
note_entries: List = []
md: str = ''
if isinstance(entries, list) and len(entries) > 0:
for entry in entries:
if entry.get('Note') is True:
note_entries.append(entry)
if len(note_entries) > 0:
result = []
retries = 3
sleep_time = 1
for i in range(retries):
result = demisto.executeCommand("addEntries", {"id": target_incident, "entries": note_entries})
if result:
break
sleep(sleep_time)
sleep_time += 1
if result:
md = f'## {len(note_entries)} notes copied'
else:
raise DemistoException('Something went wrong with addEntries command, please try again.')
else:
md = '## No notes found'
else:
md = '## No notes found'
return CommandResults(readable_output=md)
|
def copy_notes_to_target_incident(args: Dict[str, Any]) -> CommandResults:
target_incident = args.get('target_incident', None)
if not target_incident:
raise ValueError('Target Incident ID not specified')
tags = argToList(args.get('tags'))
entries = demisto.executeCommand('getEntries', {'filter': {'tags': tags}})
note_entries: List = []
md: str = ''
if isinstance(entries, list) and len(entries) > 0:
for entry in entries:
if entry.get('Note') is True:
note_entries.append(entry)
if len(note_entries) > 0:
result = []
retries = 3
sleep_time = 1
for i in range(retries):
result = demisto.executeCommand("addEntries", {"id": target_incident, "entries": note_entries})
if result:
break
sleep(sleep_time)
sleep_time += 1
if result and not isError(result):
md = f'## {len(note_entries)} notes copied'
else:
raise DemistoException('Something went wrong with addEntries command, please try again.')
else:
md = '## No notes found'
else:
md = '## No notes found'
return CommandResults(readable_output=md)
|
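The addEntries call above is retried up to three times with a linearly growing sleep. The same pattern as a generic standalone helper (flaky_call is a stand-in for the XSOAR command):

from time import sleep

def retry(func, retries=3, first_sleep=1):
    # Call func up to `retries` times, sleeping a little longer after each failure.
    sleep_time = first_sleep
    result = None
    for _ in range(retries):
        result = func()
        if result:
            break
        sleep(sleep_time)
        sleep_time += 1
    return result

attempts = []
def flaky_call():
    attempts.append(1)
    return "ok" if len(attempts) >= 2 else None  # succeeds on the second try

print(retry(flaky_call, first_sleep=0))  # -> ok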
22,743 |
def execute_command(cmd_name, shell_cmd, env=None):
# type: (str, str, dict) -> Tuple[str, str]
"""
Run a command:
- on Linux command will be run by the standard shell selected with Popen(shell=True)
- on Windows command will be run in a Powershell shell
:param str cmd_name: the user facing name of the hook being run
:param str shell_cmd: shell command to execute
:param dict env: environ to pass into Popen on Linux
:returns: `tuple` (`str` stderr, `str` stdout)
"""
logger.info("Running %s command: %s", cmd_name, shell_cmd)
if POSIX_MODE:
cmd = subprocess.Popen(shell_cmd, shell=True, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, universal_newlines=True,
env=env)
else:
line = ['powershell.exe', '-Command', shell_cmd]
cmd = subprocess.Popen(line, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
universal_newlines=True)
# universal_newlines causes Popen.communicate()
# to return str objects instead of bytes in Python 3
out, err = cmd.communicate()
base_cmd = os.path.basename(shell_cmd.split(None, 1)[0])
if out:
logger.info('Output from %s command %s:\n%s', cmd_name, base_cmd, out)
if cmd.returncode != 0:
logger.error('%s command "%s" returned error code %d',
cmd_name, shell_cmd, cmd.returncode)
if err:
logger.error('Error output from %s command %s:\n%s', cmd_name, base_cmd, err)
return err, out
|
def execute_command(cmd_name, shell_cmd, env=None):
# type: (str, str, Optional[dict]) -> Tuple[str, str]
"""
Run a command:
- on Linux command will be run by the standard shell selected with Popen(shell=True)
- on Windows command will be run in a Powershell shell
:param str cmd_name: the user facing name of the hook being run
:param str shell_cmd: shell command to execute
:param dict env: environ to pass into Popen on Linux
:returns: `tuple` (`str` stderr, `str` stdout)
"""
logger.info("Running %s command: %s", cmd_name, shell_cmd)
if POSIX_MODE:
cmd = subprocess.Popen(shell_cmd, shell=True, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, universal_newlines=True,
env=env)
else:
line = ['powershell.exe', '-Command', shell_cmd]
cmd = subprocess.Popen(line, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
universal_newlines=True)
# universal_newlines causes Popen.communicate()
# to return str objects instead of bytes in Python 3
out, err = cmd.communicate()
base_cmd = os.path.basename(shell_cmd.split(None, 1)[0])
if out:
logger.info('Output from %s command %s:\n%s', cmd_name, base_cmd, out)
if cmd.returncode != 0:
logger.error('%s command "%s" returned error code %d',
cmd_name, shell_cmd, cmd.returncode)
if err:
logger.error('Error output from %s command %s:\n%s', cmd_name, base_cmd, err)
return err, out
|
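A stripped-down, standard-library-only sketch of the platform switch used above (os.name == "posix" stands in for the module's POSIX_MODE flag):

import os
import subprocess

def run(shell_cmd, env=None):
    if os.name == "posix":
        proc = subprocess.Popen(shell_cmd, shell=True, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, universal_newlines=True, env=env)
    else:
        proc = subprocess.Popen(["powershell.exe", "-Command", shell_cmd],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                universal_newlines=True)
    out, err = proc.communicate()   # str, not bytes, because universal_newlines=True
    return err, out, proc.returncode

err, out, code = run("echo hello")
print(code, out.strip())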
7,162 |
def tvl1(I0, I1, dt=0.2, lambda_=15, tau=0.3, nwarp=5, niter=10,
tol=1e-4, prefilter=False):
"""Coarse to fine TV-L1 optical flow estimator.
TV-L1 ia popular algorithm for optical flow estimation intrudced
by Zack et al. [1]_, improved in [2]_ and detailed in [3]_.
Parameters
----------
I0 : ~numpy.ndarray
The first gray scale image of the sequence.
I1 : ~numpy.ndarray
The second gray scale image of the sequence.
dt : float
Time step of the numerical scheme. Convergence is proved for
values dt < 0.125, but it can be larger for faster
convergence.
lambda_ : float
Attachment parameter. The smaller this parameter is,
the smoother the solution is.
tau : float
Tightness parameter. It should have a small value in order to
maintain attachement and regularization parts in
correspondence.
nwarp : int
Number of times I1 is warped.
niter : int
Number of fixed point iterations.
tol : float
Tolerance used as stopping criterion based on the L² distance
between two consecutive values of (u, v).
prefilter : bool
Whether to prefilter the estimated optical flow before each
image warp.
Returns
-------
flow : tuple[~numpy.ndarray]
The estimated optical flow.
References
----------
.. [1] Zach, C., Pock, T., & Bischof, H. (2007, September). A
duality based approach for realtime TV-L 1 optical flow. In Joint
pattern recognition symposium (pp. 214-223). Springer, Berlin,
Heidelberg.
.. [2] Wedel, A., Pock, T., Zach, C., Bischof, H., & Cremers,
D. (2009). An improved algorithm for TV-L 1 optical flow. In
Statistical and geometrical approaches to visual motion analysis
(pp. 23-45). Springer, Berlin, Heidelberg.
.. [3] Pérez, J. S., Meinhardt-Llopis, E., & Facciolo,
G. (2013). TV-L1 optical flow estimation. Image Processing On
Line, 2013, 137-150.
Examples
--------
>>> from skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from skimage.registration import tvl1
>>> I0, I1, disp = stereo_motorcycle()
>>> # --- Convert the images to gray level: color is not supported.
>>> I0 = rgb2gray(I0)
>>> I1 = rgb2gray(I1)
>>> flow = tvl1(I1, I0)
"""
solver = partial(_tvl1, dt=dt, lambda_=lambda_, tau=tau,
nwarp=nwarp, niter=niter, tol=tol,
prefilter=prefilter)
return coarse_to_fine(I0, I1, solver)
|
def tvl1(I0, I1, dt=0.2, lambda_=15, tau=0.3, nwarp=5, niter=10,
tol=1e-4, prefilter=False):
"""Coarse to fine TV-L1 optical flow estimator.
TV-L1 is a popular algorithm for optical flow estimation introduced
by Zack et al. [1]_, improved in [2]_ and detailed in [3]_.
Parameters
----------
I0 : ~numpy.ndarray
The first gray scale image of the sequence.
I1 : ~numpy.ndarray
The second gray scale image of the sequence.
dt : float
Time step of the numerical scheme. Convergence is proved for
values dt < 0.125, but it can be larger for faster
convergence.
lambda_ : float
Attachment parameter. The smaller this parameter is,
the smoother the solution is.
tau : float
Tightness parameter. It should have a small value in order to
maintain attachement and regularization parts in
correspondence.
nwarp : int
Number of times I1 is warped.
niter : int
Number of fixed point iterations.
tol : float
Tolerance used as stopping criterion based on the L² distance
between two consecutive values of (u, v).
prefilter : bool
Whether to prefilter the estimated optical flow before each
image warp.
Returns
-------
flow : tuple[~numpy.ndarray]
The estimated optical flow.
References
----------
.. [1] Zach, C., Pock, T., & Bischof, H. (2007, September). A
duality based approach for realtime TV-L 1 optical flow. In Joint
pattern recognition symposium (pp. 214-223). Springer, Berlin,
Heidelberg.
.. [2] Wedel, A., Pock, T., Zach, C., Bischof, H., & Cremers,
D. (2009). An improved algorithm for TV-L 1 optical flow. In
Statistical and geometrical approaches to visual motion analysis
(pp. 23-45). Springer, Berlin, Heidelberg.
.. [3] Pérez, J. S., Meinhardt-Llopis, E., & Facciolo,
G. (2013). TV-L1 optical flow estimation. Image Processing On
Line, 2013, 137-150.
Examples
--------
>>> from skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from skimage.registration import tvl1
>>> I0, I1, disp = stereo_motorcycle()
>>> # --- Convert the images to gray level: color is not supported.
>>> I0 = rgb2gray(I0)
>>> I1 = rgb2gray(I1)
>>> flow = tvl1(I1, I0)
"""
solver = partial(_tvl1, dt=dt, lambda_=lambda_, tau=tau,
nwarp=nwarp, niter=niter, tol=tol,
prefilter=prefilter)
return coarse_to_fine(I0, I1, solver)
|
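The body of tvl1 is just functools.partial freezing the solver's keyword arguments before handing it to the coarse-to-fine driver; here is the same pattern on a toy solver:

from functools import partial

def toy_solver(I0, I1, dt=0.2, lambda_=15, tau=0.3):
    # Stand-in for _tvl1: just report which settings it was called with.
    return {"dt": dt, "lambda_": lambda_, "tau": tau, "sizes": (len(I0), len(I1))}

solver = partial(toy_solver, dt=0.1, lambda_=10)
print(solver([0, 1, 2], [1, 2, 3]))   # dt and lambda_ are already bound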
624 |
def corpus_bleu(
list_of_references,
hypotheses,
weights=(0.25, 0.25, 0.25, 0.25),
smoothing_function=None,
auto_reweigh=False,
):
"""
Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all
the hypotheses and their respective references.
Instead of averaging the sentence level BLEU scores (i.e. macro-average
precision), the original BLEU metric (Papineni et al. 2002) accounts for
the micro-average precision (i.e. summing the numerators and denominators
for each hypothesis-reference(s) pair before the division).
>>> hyp1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which',
... 'ensures', 'that', 'the', 'military', 'always',
... 'obeys', 'the', 'commands', 'of', 'the', 'party']
>>> ref1a = ['It', 'is', 'a', 'guide', 'to', 'action', 'that',
... 'ensures', 'that', 'the', 'military', 'will', 'forever',
... 'heed', 'Party', 'commands']
>>> ref1b = ['It', 'is', 'the', 'guiding', 'principle', 'which',
... 'guarantees', 'the', 'military', 'forces', 'always',
... 'being', 'under', 'the', 'command', 'of', 'the', 'Party']
>>> ref1c = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the',
... 'army', 'always', 'to', 'heed', 'the', 'directions',
... 'of', 'the', 'party']
>>> hyp2 = ['he', 'read', 'the', 'book', 'because', 'he', 'was',
... 'interested', 'in', 'world', 'history']
>>> ref2a = ['he', 'was', 'interested', 'in', 'world', 'history',
... 'because', 'he', 'read', 'the', 'book']
>>> list_of_references = [[ref1a, ref1b, ref1c], [ref2a]]
>>> hypotheses = [hyp1, hyp2]
>>> corpus_bleu(list_of_references, hypotheses) # doctest: +ELLIPSIS
0.5920...
The example below shows that corpus_bleu() is different from averaging
sentence_bleu() for hypotheses
>>> score1 = sentence_bleu([ref1a, ref1b, ref1c], hyp1)
>>> score2 = sentence_bleu([ref2a], hyp2)
>>> (score1 + score2) / 2 # doctest: +ELLIPSIS
0.6223...
:param list_of_references: a corpus of lists of reference sentences, w.r.t. hypotheses
:type list_of_references: list(list(list(str)))
:param hypotheses: a list of hypothesis sentences
:type hypotheses: list(list(str))
:param weights: weights for unigrams, bigrams, trigrams and so on, (one or list of weights)
:type weights: tuple(float) or list(tuple(float))
:param smoothing_function:
:type smoothing_function: SmoothingFunction
:param auto_reweigh: Option to re-normalize the weights uniformly.
:type auto_reweigh: bool
:return: The corpus-level BLEU score.
:rtype: float
"""
# Before proceeding to compute BLEU, perform sanity checks.
p_numerators = Counter() # Key = ngram order, and value = no. of ngram matches.
p_denominators = Counter() # Key = ngram order, and value = no. of ngram in ref.
hyp_lengths, ref_lengths = 0, 0
assert len(list_of_references) == len(hypotheses), (
"The number of hypotheses and their reference(s) should be the " "same "
)
if isinstance(weights, tuple):
weights = [weights]
max_weight_length = max(len(weight) for weight in weights)
# Iterate through each hypothesis and their corresponding references.
for references, hypothesis in zip(list_of_references, hypotheses):
# For each order of ngram, calculate the numerator and
# denominator for the corpus-level modified precision.
for i in range(1, max_weight_length + 1):
p_i = modified_precision(references, hypothesis, i)
p_numerators[i] += p_i.numerator
p_denominators[i] += p_i.denominator
# Calculate the hypothesis length and the closest reference length.
# Adds them to the corpus-level hypothesis and reference counts.
hyp_len = len(hypothesis)
hyp_lengths += hyp_len
ref_lengths += closest_ref_length(references, hyp_len)
# Calculate corpus-level brevity penalty.
bp = brevity_penalty(ref_lengths, hyp_lengths)
# # Uniformly re-weighting based on maximum hypothesis lengths if largest
# # order of n-grams < 4 and weights is set at default.
# if auto_reweigh:
# if hyp_lengths < 4 and weights == (0.25, 0.25, 0.25, 0.25):
# weights = (1 / hyp_lengths,) * hyp_lengths
# Collects the various precision values for the different ngram orders.
p_n = [
Fraction(p_numerators[i], p_denominators[i], _normalize=False)
for i in range(1, max_weight_length + 1)
]
# Returns 0 if there's no matching n-grams
# We only need to check for p_numerators[1] == 0, since if there's
# no unigrams, there won't be any higher order ngrams.
if p_numerators[1] == 0:
return 0 if len(weights) == 1 else [0] * len(weights)
# If there's no smoothing, use method0 from the SmoothingFunction class.
if not smoothing_function:
smoothing_function = SmoothingFunction().method0
# Smoothen the modified precision.
# Note: smoothing_function() may convert values into floats;
# it tries to retain the Fraction object as much as the
# smoothing method allows.
p_n = smoothing_function(
p_n, references=references, hypothesis=hypothesis, hyp_len=hyp_lengths
)
bleu_scores = []
for weight in weights:
# Uniformly re-weighting based on maximum hypothesis lengths if largest
# order of n-grams < 4 and weights is set at default.
if auto_reweigh:
if hyp_lengths < 4 and weight == (0.25, 0.25, 0.25, 0.25):
weight = (1 / hyp_lengths,) * hyp_lengths
s = (w_i * math.log(p_i) for w_i, p_i in zip(weight, p_n))
s = bp * math.exp(math.fsum(s))
bleu_scores.append(s)
return bleu_scores[0] if len(weights) == 1 else bleu_scores
|
def corpus_bleu(
list_of_references,
hypotheses,
weights=(0.25, 0.25, 0.25, 0.25),
smoothing_function=None,
auto_reweigh=False,
):
"""
Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all
the hypotheses and their respective references.
Instead of averaging the sentence level BLEU scores (i.e. macro-average
precision), the original BLEU metric (Papineni et al. 2002) accounts for
the micro-average precision (i.e. summing the numerators and denominators
for each hypothesis-reference(s) pair before the division).
>>> hyp1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which',
... 'ensures', 'that', 'the', 'military', 'always',
... 'obeys', 'the', 'commands', 'of', 'the', 'party']
>>> ref1a = ['It', 'is', 'a', 'guide', 'to', 'action', 'that',
... 'ensures', 'that', 'the', 'military', 'will', 'forever',
... 'heed', 'Party', 'commands']
>>> ref1b = ['It', 'is', 'the', 'guiding', 'principle', 'which',
... 'guarantees', 'the', 'military', 'forces', 'always',
... 'being', 'under', 'the', 'command', 'of', 'the', 'Party']
>>> ref1c = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the',
... 'army', 'always', 'to', 'heed', 'the', 'directions',
... 'of', 'the', 'party']
>>> hyp2 = ['he', 'read', 'the', 'book', 'because', 'he', 'was',
... 'interested', 'in', 'world', 'history']
>>> ref2a = ['he', 'was', 'interested', 'in', 'world', 'history',
... 'because', 'he', 'read', 'the', 'book']
>>> list_of_references = [[ref1a, ref1b, ref1c], [ref2a]]
>>> hypotheses = [hyp1, hyp2]
>>> corpus_bleu(list_of_references, hypotheses) # doctest: +ELLIPSIS
0.5920...
The example below shows that corpus_bleu() is different from averaging
sentence_bleu() for hypotheses
>>> score1 = sentence_bleu([ref1a, ref1b, ref1c], hyp1)
>>> score2 = sentence_bleu([ref2a], hyp2)
>>> (score1 + score2) / 2 # doctest: +ELLIPSIS
0.6223...
:param list_of_references: a corpus of lists of reference sentences, w.r.t. hypotheses
:type list_of_references: list(list(list(str)))
:param hypotheses: a list of hypothesis sentences
:type hypotheses: list(list(str))
:param weights: weights for unigrams, bigrams, trigrams and so on, (one or list of weights)
:type weights: tuple(float) / list(tuple(float))
:param smoothing_function:
:type smoothing_function: SmoothingFunction
:param auto_reweigh: Option to re-normalize the weights uniformly.
:type auto_reweigh: bool
:return: The corpus-level BLEU score.
:rtype: float
"""
# Before proceeding to compute BLEU, perform sanity checks.
p_numerators = Counter() # Key = ngram order, and value = no. of ngram matches.
p_denominators = Counter() # Key = ngram order, and value = no. of ngram in ref.
hyp_lengths, ref_lengths = 0, 0
assert len(list_of_references) == len(hypotheses), (
"The number of hypotheses and their reference(s) should be the " "same "
)
if isinstance(weights, tuple):
weights = [weights]
max_weight_length = max(len(weight) for weight in weights)
# Iterate through each hypothesis and their corresponding references.
for references, hypothesis in zip(list_of_references, hypotheses):
# For each order of ngram, calculate the numerator and
# denominator for the corpus-level modified precision.
for i in range(1, max_weight_length + 1):
p_i = modified_precision(references, hypothesis, i)
p_numerators[i] += p_i.numerator
p_denominators[i] += p_i.denominator
# Calculate the hypothesis length and the closest reference length.
# Adds them to the corpus-level hypothesis and reference counts.
hyp_len = len(hypothesis)
hyp_lengths += hyp_len
ref_lengths += closest_ref_length(references, hyp_len)
# Calculate corpus-level brevity penalty.
bp = brevity_penalty(ref_lengths, hyp_lengths)
# # Uniformly re-weighting based on maximum hypothesis lengths if largest
# # order of n-grams < 4 and weights is set at default.
# if auto_reweigh:
# if hyp_lengths < 4 and weights == (0.25, 0.25, 0.25, 0.25):
# weights = (1 / hyp_lengths,) * hyp_lengths
# Collects the various precision values for the different ngram orders.
p_n = [
Fraction(p_numerators[i], p_denominators[i], _normalize=False)
for i in range(1, max_weight_length + 1)
]
# Returns 0 if there's no matching n-grams
# We only need to check for p_numerators[1] == 0, since if there's
# no unigrams, there won't be any higher order ngrams.
if p_numerators[1] == 0:
return 0 if len(weights) == 1 else [0] * len(weights)
# If there's no smoothing, use method0 from the SmoothingFunction class.
if not smoothing_function:
smoothing_function = SmoothingFunction().method0
# Smoothen the modified precision.
# Note: smoothing_function() may convert values into floats;
# it tries to retain the Fraction object as much as the
# smoothing method allows.
p_n = smoothing_function(
p_n, references=references, hypothesis=hypothesis, hyp_len=hyp_lengths
)
bleu_scores = []
for weight in weights:
# Uniformly re-weighting based on maximum hypothesis lengths if largest
# order of n-grams < 4 and weights is set at default.
if auto_reweigh:
if hyp_lengths < 4 and weight == (0.25, 0.25, 0.25, 0.25):
weight = (1 / hyp_lengths,) * hyp_lengths
s = (w_i * math.log(p_i) for w_i, p_i in zip(weight, p_n))
s = bp * math.exp(math.fsum(s))
bleu_scores.append(s)
return bleu_scores[0] if len(weights) == 1 else bleu_scores
|
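A tiny worked example of the micro- versus macro-average distinction drawn in the docstring above, using two hypothetical (matches, candidate n-grams) counts:

from fractions import Fraction

# (matches, candidate n-grams) for two hypotheses
pairs = [(3, 4), (1, 10)]

micro = Fraction(sum(m for m, _ in pairs), sum(d for _, d in pairs))
macro = sum(Fraction(m, d) for m, d in pairs) / len(pairs)

print(micro)  # 2/7: numerators and denominators are summed before dividing
print(macro)  # 17/40: per-hypothesis precisions are averaged instead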
2,193 |
def test_lasso_dual_gap():
"""
Check that Lasso.dual_gap_ matches its objective formulation, with the
datafit normalized by n_samples
"""
X, y, _, _ = build_dataset(n_samples=10, n_features=30)
n_samples = len(y)
alpha = 0.01 * np.max(np.abs(X.T @ y)) / n_samples
clf = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
w = clf.coef_
R = y - X @ w
primal = 0.5 * np.mean(R ** 2) + clf.alpha * np.sum(np.abs(w))
# dual pt: R / n_samples, dual constraint: norm(X.T @ theta, inf) <= alpha
R /= np.max(np.abs(X.T @ R) / (n_samples * alpha))
dual = 0.5 * (np.mean(y ** 2) - np.mean((y - R) ** 2))
np.testing.assert_allclose(clf.dual_gap_, primal - dual)
|
def test_lasso_dual_gap():
"""
Check that Lasso.dual_gap_ matches its objective formulation, with the
datafit normalized by n_samples
"""
X, y, _, _ = build_dataset(n_samples=10, n_features=30)
n_samples = len(y)
alpha = 0.01 * np.max(np.abs(X.T @ y)) / n_samples
clf = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
w = clf.coef_
R = y - X @ w
primal = 0.5 * np.mean(R ** 2) + clf.alpha * np.sum(np.abs(w))
# dual pt: R / n_samples, dual constraint: norm(X.T @ theta, inf) <= alpha
R /= np.max(np.abs(X.T @ R) / (n_samples * alpha))
dual = 0.5 * (np.mean(y ** 2) - np.mean((y - R) ** 2))
assert_allclose(clf.dual_gap_, primal - dual)
|
35,551 |
def create_lkas11(packer, frame, car_fingerprint, apply_steer, steer_req,
lkas11, sys_warning, sys_state, enabled,
left_lane, right_lane,
left_lane_depart, right_lane_depart):
values = lkas11
values["CF_Lkas_LdwsSysState"] = sys_state
values["CF_Lkas_SysWarning"] = 3 if sys_warning else 0
values["CF_Lkas_LdwsLHWarning"] = left_lane_depart
values["CF_Lkas_LdwsRHWarning"] = right_lane_depart
values["CR_Lkas_StrToqReq"] = apply_steer
values["CF_Lkas_ActToi"] = steer_req
values["CF_Lkas_MsgCount"] = frame % 0x10
if car_fingerprint in (CAR.SONATA, CAR.PALISADE, CAR.KIA_NIRO_EV, CAR.KIA_NIRO_HEV_2021, CAR.SANTA_FE,
CAR.IONIQ_EV_2020, CAR.IONIQ_PHEV, CAR.KIA_SELTOS, CAR.ELANTRA_2021, CAR.GENESIS_G70_2020,
CAR.ELANTRA_HEV_2021, CAR.SONATA_HYBRID, CAR.KONA_EV, CAR.KONA_HEV, CAR.KONA_EV_2022,
CAR.SANTA_FE_2022, CAR.KIA_K5_2021, CAR.IONIQ_HEV_2022, CAR.SANTA_FE_HEV_2022, CAR.SANTA_FE_PHEV_2022):
values["CF_Lkas_LdwsActivemode"] = int(left_lane) + (int(right_lane) << 1)
values["CF_Lkas_LdwsOpt_USM"] = 2
# FcwOpt_USM 5 = Orange blinking car + lanes
# FcwOpt_USM 4 = Orange car + lanes
# FcwOpt_USM 3 = Green blinking car + lanes
# FcwOpt_USM 2 = Green car + lanes
# FcwOpt_USM 1 = White car + lanes
# FcwOpt_USM 0 = No car + lanes
values["CF_Lkas_FcwOpt_USM"] = 2 if enabled else 1
# SysWarning 4 = keep hands on wheel
# SysWarning 5 = keep hands on wheel (red)
# SysWarning 6 = keep hands on wheel (red) + beep
# Note: the warning is hidden while the blinkers are on
values["CF_Lkas_SysWarning"] = 4 if sys_warning else 0
# Likely cars without the ability to show individual lane lines in the dash
elif car_fingerprint in (CAR.KIA_OPTIMA,):
# SysWarning 4 = keep hands on wheel + beep
values["CF_Lkas_SysWarning"] = 4 if sys_warning else 0
# SysState 0 = no icons
# SysState 1-2 = white car + lanes
# SysState 3 = green car + lanes, green steering wheel
# SysState 4 = green car + lanes
values["CF_Lkas_LdwsSysState"] = 3 if enabled else 1
values["CF_Lkas_LdwsOpt_USM"] = 2 # non-2 changes above SysState definition
# these have no effect
values["CF_Lkas_LdwsActivemode"] = 0
values["CF_Lkas_FcwOpt_USM"] = 0
elif car_fingerprint == CAR.HYUNDAI_GENESIS:
# This field is actually LdwsActivemode
# Genesis and Optima fault when forwarding while engaged
values["CF_Lkas_LdwsActivemode"] = 2
dat = packer.make_can_msg("LKAS11", 0, values)[2]
if car_fingerprint in CHECKSUM["crc8"]:
# CRC Checksum as seen on 2019 Hyundai Santa Fe
dat = dat[:6] + dat[7:8]
checksum = hyundai_checksum(dat)
elif car_fingerprint in CHECKSUM["6B"]:
# Checksum of first 6 Bytes, as seen on 2018 Kia Sorento
checksum = sum(dat[:6]) % 256
else:
# Checksum of first 6 Bytes and last Byte as seen on 2018 Kia Stinger
checksum = (sum(dat[:6]) + dat[7]) % 256
values["CF_Lkas_Chksum"] = checksum
return packer.make_can_msg("LKAS11", 0, values)
|
def create_lkas11(packer, frame, car_fingerprint, apply_steer, steer_req,
lkas11, sys_warning, sys_state, enabled,
left_lane, right_lane,
left_lane_depart, right_lane_depart):
values = lkas11
values["CF_Lkas_LdwsSysState"] = sys_state
values["CF_Lkas_SysWarning"] = 3 if sys_warning else 0
values["CF_Lkas_LdwsLHWarning"] = left_lane_depart
values["CF_Lkas_LdwsRHWarning"] = right_lane_depart
values["CR_Lkas_StrToqReq"] = apply_steer
values["CF_Lkas_ActToi"] = steer_req
values["CF_Lkas_MsgCount"] = frame % 0x10
if car_fingerprint in (CAR.SONATA, CAR.PALISADE, CAR.KIA_NIRO_EV, CAR.KIA_NIRO_HEV_2021, CAR.SANTA_FE,
CAR.IONIQ_EV_2020, CAR.IONIQ_PHEV, CAR.KIA_SELTOS, CAR.ELANTRA_2021, CAR.GENESIS_G70_2020,
CAR.ELANTRA_HEV_2021, CAR.SONATA_HYBRID, CAR.KONA_EV, CAR.KONA_HEV, CAR.KONA_EV_2022,
CAR.SANTA_FE_2022, CAR.KIA_K5_2021, CAR.IONIQ_HEV_2022, CAR.SANTA_FE_HEV_2022, CAR.SANTA_FE_PHEV_2022):
values["CF_Lkas_LdwsActivemode"] = int(left_lane) + (int(right_lane) << 1)
values["CF_Lkas_LdwsOpt_USM"] = 2
# FcwOpt_USM 5 = Orange blinking car + lanes
# FcwOpt_USM 4 = Orange car + lanes
# FcwOpt_USM 3 = Green blinking car + lanes
# FcwOpt_USM 2 = Green car + lanes
# FcwOpt_USM 1 = White car + lanes
# FcwOpt_USM 0 = No car + lanes
values["CF_Lkas_FcwOpt_USM"] = 2 if enabled else 1
# SysWarning 4 = keep hands on wheel
# SysWarning 5 = keep hands on wheel (red)
# SysWarning 6 = keep hands on wheel (red) + beep
# Note: the warning is hidden while the blinkers are on
values["CF_Lkas_SysWarning"] = 4 if sys_warning else 0
# Likely cars lacking the ability to show individual lane lines in the dash
elif car_fingerprint in (CAR.KIA_OPTIMA,):
# SysWarning 4 = keep hands on wheel + beep
values["CF_Lkas_SysWarning"] = 4 if sys_warning else 0
# SysState 0 = no icons
# SysState 1-2 = white car + lanes
# SysState 3 = green car + lanes, green steering wheel
# SysState 4 = green car + lanes
values["CF_Lkas_LdwsSysState"] = 3 if enabled else 1
values["CF_Lkas_LdwsOpt_USM"] = 2 # non-2 changes above SysState definition
# these have no effect
values["CF_Lkas_LdwsActivemode"] = 0
values["CF_Lkas_FcwOpt_USM"] = 0
elif car_fingerprint == CAR.HYUNDAI_GENESIS:
# This field is actually LdwsActivemode
# Genesis and Optima fault when forwarding while engaged
values["CF_Lkas_LdwsActivemode"] = 2
dat = packer.make_can_msg("LKAS11", 0, values)[2]
if car_fingerprint in CHECKSUM["crc8"]:
# CRC Checksum as seen on 2019 Hyundai Santa Fe
dat = dat[:6] + dat[7:8]
checksum = hyundai_checksum(dat)
elif car_fingerprint in CHECKSUM["6B"]:
# Checksum of first 6 Bytes, as seen on 2018 Kia Sorento
checksum = sum(dat[:6]) % 256
else:
# Checksum of first 6 Bytes and last Byte as seen on 2018 Kia Stinger
checksum = (sum(dat[:6]) + dat[7]) % 256
values["CF_Lkas_Chksum"] = checksum
return packer.make_can_msg("LKAS11", 0, values)
|
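A standalone sketch of the three checksum variants selected at the end of create_lkas11, applied to a sample 8-byte payload (the CRC-8 branch is only shown by its input slice, since hyundai_checksum itself is not reproduced here):

dat = bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0x00, 0xDE])

sum_6b = sum(dat[:6]) % 256                 # "6B": first six bytes only
sum_7b = (sum(dat[:6]) + dat[7]) % 256      # default: first six bytes + last byte
crc_input = dat[:6] + dat[7:8]              # "crc8": byte 6 (the checksum slot) is skipped

print(hex(sum_6b), hex(sum_7b), crc_input.hex())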
28,054 |
def get_blame_info(repo: Repo, file_path: str):
""" Get blame info for the given file in the given git repo. """
tracking_branch = None
try:
# If a commit is checked out, accessing the active_branch member will
# throw a type error. In this case we will use the current commit hash.
tracking_branch = str(repo.active_branch.tracking_branch())
except Exception:
tracking_branch = repo.head.commit.hexsha
try:
blame = repo.blame_incremental(repo.head.commit.hexsha, file_path)
res = {
'version': 'v1',
'tracking_branch': tracking_branch,
'remote_url': next(repo.remote().urls, None),
'commits': {},
'blame': []}
for b in blame:
commit = b.commit
if commit.hexsha not in res['commits']:
res['commits'][commit.hexsha] = {
'author': {
'name': commit.author.name,
'email': commit.author.email,
},
'summary': commit.summary,
'message': commit.message,
'committed_datetime': str(commit.committed_datetime)}
res['blame'].append({
'from': b.linenos[0],
'to': b.linenos[-1],
'commit': commit.hexsha})
return res
except Exception as ex:
LOG.warning("Failed to get blame information for %s: %s",
file_path, ex)
|
def get_blame_info(repo: Repo, file_path: str):
""" Get blame info for the given file in the given git repo. """
tracking_branch = None
try:
# If a commit is checked out, accessing the active_branch member will
# throw a type error. In this case we will use the current commit hash.
tracking_branch = str(repo.active_branch.tracking_branch())
except AttributeError:
tracking_branch = repo.head.commit.hexsha
try:
blame = repo.blame_incremental(repo.head.commit.hexsha, file_path)
res = {
'version': 'v1',
'tracking_branch': tracking_branch,
'remote_url': next(repo.remote().urls, None),
'commits': {},
'blame': []}
for b in blame:
commit = b.commit
if commit.hexsha not in res['commits']:
res['commits'][commit.hexsha] = {
'author': {
'name': commit.author.name,
'email': commit.author.email,
},
'summary': commit.summary,
'message': commit.message,
'committed_datetime': str(commit.committed_datetime)}
res['blame'].append({
'from': b.linenos[0],
'to': b.linenos[-1],
'commit': commit.hexsha})
return res
except Exception as ex:
LOG.warning("Failed to get blame information for %s: %s",
file_path, ex)
|
9,079 |
def _install(tarball, install_args=()):
# extracting the tarball
tmpdir = tempfile.mkdtemp()
log.warn("Extracting in %s", tmpdir)
old_wd = os.getcwd()
try:
os.chdir(tmpdir)
tar = tarfile.open(tarball)
_extractall(tar)
tar.close()
# going in the directory
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
os.chdir(subdir)
log.warn("Now working in %s", subdir)
# installing
log.warn("Installing Setuptools")
if not _python_cmd("setup.py", "install", *install_args):
log.warn("Something went wrong during the installation.")
log.warn("See the error message above.")
# exitcode will be 2
return 2
finally:
os.chdir(old_wd)
shutil.rmtree(tmpdir)
|
def _install(tarball, install_args=()):
# extracting the tarball
tmpdir = tempfile.mkdtemp()
log.warn("Extracting in %s", tmpdir)
old_wd = os.getcwd()
try:
os.chdir(tmpdir)
tar = tarfile.open(tarball)
_extractall(tar)
tar.close()
# going in the directory
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
os.chdir(subdir)
log.warn(f"Now working in {subdir}")
# installing
log.warn("Installing Setuptools")
if not _python_cmd("setup.py", "install", *install_args):
log.warn("Something went wrong during the installation.")
log.warn("See the error message above.")
# exitcode will be 2
return 2
finally:
os.chdir(old_wd)
shutil.rmtree(tmpdir)
|
14,082 |
def _get_number_of_unique_geometry_types(gdf):
return sum(gdf.geom_type.isin(geom_type).any() for geom_type in _GEOMETRY_TYPES)
|
def _get_number_of_unique_geometry_types(gdf):
return gdf.geom_type.nunique()
|
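A quick sanity check of the simplification above, assuming geopandas and shapely are installed: geom_type yields one type string per row, so nunique() counts the distinct geometry types directly.

import geopandas as gpd
from shapely.geometry import LineString, Point

gdf = gpd.GeoDataFrame(geometry=[Point(0, 0), Point(1, 1), LineString([(0, 0), (1, 1)])])
print(gdf.geom_type.tolist())    # ['Point', 'Point', 'LineString']
print(gdf.geom_type.nunique())   # 2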
25,767 |
def run_and_read_cplex(n, problem_fn, solution_fn, solver_logfile,
solver_options, warmstart=None, store_basis=True):
"""
Solving function. Reads the linear problem file and passes it to the cplex
solver. If the solution is successful it returns variable solutions and
constraint dual values. Cplex must be installed to use this function
"""
import cplex
m = cplex.Cplex()
out = m.set_log_stream(solver_logfile)
if solver_options is not None:
for key, value in solver_options.items():
getattr(m.parameters, key).set(value)
m.read(problem_fn)
if warmstart:
m.start.read_basis(warmstart)
m.solve()
is_lp = m.problem_type[m.get_problem_type()] == 'LP'
termination_condition = m.solution.get_status_string()
if 'optimal' in termination_condition:
status = 'ok'
termination_condition = 'optimal'
else:
status = 'warning'
if (status == 'ok') and store_basis and is_lp:
n.basis_fn = solution_fn.replace('.sol', '.bas')
m.solution.basis.write(n.basis_fn)
objective = m.solution.get_objective_value()
sol = pd.Series(m.solution.get_values(), m.variables.get_names())
if is_lp:
dual = pd.Series(m.solution.get_dual_values(),
m.linear_constraints.get_names())
else:
logger.warning("Shadow prices of MILP couldn't be parsed")
dual = pd.Series(index=m.linear_constraints.get_names())
return (status, termination_condition, sol, dual, objective)
|
def run_and_read_cplex(n, problem_fn, solution_fn, solver_logfile,
solver_options, warmstart=None, store_basis=True):
"""
Solving function. Reads the linear problem file and passes it to the cplex
solver. If the solution is successful it returns variable solutions and
constraint dual values. Cplex must be installed to use this function
"""
import cplex
m = cplex.Cplex()
out = m.set_log_stream(solver_logfile)
if solver_options is not None:
for key, value in solver_options.items():
getattr(m.parameters, key).set(value)
m.read(problem_fn)
if warmstart:
m.start.read_basis(warmstart)
m.solve()
is_lp = m.problem_type[m.get_problem_type()] == 'LP'
termination_condition = m.solution.get_status_string()
if 'optimal' in termination_condition:
status = 'ok'
termination_condition = 'optimal'
else:
status = 'warning'
if (status == 'ok') and store_basis and is_lp:
n.basis_fn = solution_fn.replace('.sol', '.bas')
m.solution.basis.write(n.basis_fn)
objective = m.solution.get_objective_value()
sol = pd.Series(m.solution.get_values(), m.variables.get_names())
if is_lp:
dual = pd.Series(m.solution.get_dual_values(),
m.linear_constraints.get_names()).pipe(set_int_index)
else:
logger.warning("Shadow prices of MILP couldn't be parsed")
dual = pd.Series(index=m.linear_constraints.get_names())
return (status, termination_condition, sol, dual, objective)
|
36,257 |
def highly_variable_genes_seurat_v3(
adata: AnnData,
n_top_genes: int = 2000,
batch_key: Optional[str] = None,
lowess_frac: Optional[float] = 0.15,
):
"""\
Annotate highly variable genes [Stuart19]_.
Expects raw count data.
The major difference in this implementation is the use of lowess instead of loess.
For further details of the sparse arithmetic see https://www.overleaf.com/read/ckptrbgzzzpg
Parameters
----------
adata
The annotated data matrix of shape `n_obs` × `n_vars`. Rows correspond
to cells and columns to genes.
n_top_genes
Number of highly-variable genes to keep.
batch_key
If specified, highly-variable genes are selected within each batch separately and merged.
This simple process avoids the selection of batch-specific genes and acts as a
lightweight batch correction method.
lowess_frac
The fraction of the data (cells) used when estimating the variance in the lowess model fit.
"""
import statsmodels
lowess = statsmodels.nonparametric.lowess
if batch_key is None:
batch_info = pd.Categorical(np.zeros((adata.X.shape[0])).astype(int))
else:
batch_info = adata.obs[batch_key]
norm_gene_vars = []
for b in np.unique(batch_info):
mean, var = materialize_as_ndarray(_get_mean_var(adata[batch_info == b].X))
not_const = var > 0
estimat_var = np.zeros((adata.X.shape[1]))
y = np.log10(var[not_const])
x = np.log10(mean[not_const])
# output is sorted by x
v = lowess(y, x, frac=lowess_frac)
estimat_var[not_const][np.argsort(x)] = v[:, 1]
# get normalized variance
reg_std = np.sqrt(10 ** estimat_var)
batch_counts = adata[batch_info == b].X.copy()
# clip large values as in Seurat
N = np.sum(batch_info == b)
vmax = np.sqrt(N)
clip_val = reg_std * vmax + mean
# could be something faster here
for g in range(batch_counts.shape[1]):
batch_counts[:, g][batch_counts[:, g] > vmax] = clip_val[g]
if sp_sparse.issparse(batch_counts):
squared_batch_counts_sum = np.array(batch_counts.power(2).sum(axis=0))
batch_counts_sum = np.array(batch_counts.sum(axis=0))
else:
squared_batch_counts_sum = np.square(batch_counts).sum(axis=0)
batch_counts_sum = batch_counts.sum(axis=0)
norm_gene_var = (1 / ((N - 1) * np.square(reg_std))) * (
(N * np.square(mean))
+ squared_batch_counts_sum
- 2 * batch_counts_sum * mean
)
norm_gene_vars.append(norm_gene_var.reshape(1, -1))
norm_gene_vars = np.concatenate(norm_gene_vars, axis=0)
# argsort twice gives ranks
ranked_norm_gene_vars = np.argsort(np.argsort(norm_gene_vars, axis=1), axis=1)
median_ranked = np.median(ranked_norm_gene_vars, axis=0)
num_batches_high_var = np.sum(
ranked_norm_gene_vars >= (adata.X.shape[1] - n_top_genes), axis=0
)
df = pd.DataFrame(index=np.array(adata.var_names))
df["highly_variable_nbatches"] = num_batches_high_var
df["highly_variable_median_rank"] = median_ranked
df.sort_values(
["highly_variable_nbatches", "highly_variable_median_rank"],
ascending=False,
na_position="last",
inplace=True,
)
df["highly_variable"] = False
df.loc[:n_top_genes, "highly_variable"] = True
df = df.loc[adata.var_names]
adata.var["highly_variable"] = df["highly_variable"].values
if batch_key is not None:
batches = adata.obs[batch_key].cat.categories
adata.var["highly_variable_nbatches"] = df["highly_variable_nbatches"].values
adata.var["highly_variable_intersection"] = df[
"highly_variable_nbatches"
] == len(batches)
adata.var["highly_variable_median_rank"] = df["highly_variable_median_rank"].values
|
def highly_variable_genes_seurat_v3(
adata: AnnData,
n_top_genes: int = 2000,
batch_key: Optional[str] = None,
lowess_frac: Optional[float] = 0.15,
):
"""\
Annotate highly variable genes [Stuart19]_.
Expects raw count data.
The major difference in this implementation is the use of lowess instead of loess.
For further details of the sparse arithmetic see https://www.overleaf.com/read/ckptrbgzzzpg
Parameters
----------
adata
The annotated data matrix of shape `n_obs` × `n_vars`. Rows correspond
to cells and columns to genes.
n_top_genes
Number of highly-variable genes to keep.
batch_key
If specified, highly-variable genes are selected within each batch separately and merged.
This simple process avoids the selection of batch-specific genes and acts as a
lightweight batch correction method.
lowess_frac
The fraction of the data (cells) used when estimating the variance in the lowess model fit.
"""
import statsmodels
lowess = statsmodels.nonparametric.lowess
if batch_key is None:
batch_info = pd.Categorical(np.zeros((adata.X.shape[0])).astype(int))
else:
batch_info = adata.obs[batch_key]
norm_gene_vars = []
for b in np.unique(batch_info):
mean, var = _get_mean_var(adata[batch_info == b].X)
not_const = var > 0
estimat_var = np.zeros((adata.X.shape[1]))
y = np.log10(var[not_const])
x = np.log10(mean[not_const])
# output is sorted by x
v = lowess(y, x, frac=lowess_frac)
estimat_var[not_const][np.argsort(x)] = v[:, 1]
# get normalized variance
reg_std = np.sqrt(10 ** estimat_var)
batch_counts = adata[batch_info == b].X.copy()
# clip large values as in Seurat
N = np.sum(batch_info == b)
vmax = np.sqrt(N)
clip_val = reg_std * vmax + mean
# could be something faster here
for g in range(batch_counts.shape[1]):
batch_counts[:, g][batch_counts[:, g] > vmax] = clip_val[g]
if sp_sparse.issparse(batch_counts):
squared_batch_counts_sum = np.array(batch_counts.power(2).sum(axis=0))
batch_counts_sum = np.array(batch_counts.sum(axis=0))
else:
squared_batch_counts_sum = np.square(batch_counts).sum(axis=0)
batch_counts_sum = batch_counts.sum(axis=0)
norm_gene_var = (1 / ((N - 1) * np.square(reg_std))) * (
(N * np.square(mean))
+ squared_batch_counts_sum
- 2 * batch_counts_sum * mean
)
norm_gene_vars.append(norm_gene_var.reshape(1, -1))
norm_gene_vars = np.concatenate(norm_gene_vars, axis=0)
# argsort twice gives ranks
ranked_norm_gene_vars = np.argsort(np.argsort(norm_gene_vars, axis=1), axis=1)
median_ranked = np.median(ranked_norm_gene_vars, axis=0)
num_batches_high_var = np.sum(
ranked_norm_gene_vars >= (adata.X.shape[1] - n_top_genes), axis=0
)
df = pd.DataFrame(index=np.array(adata.var_names))
df["highly_variable_nbatches"] = num_batches_high_var
df["highly_variable_median_rank"] = median_ranked
df.sort_values(
["highly_variable_nbatches", "highly_variable_median_rank"],
ascending=False,
na_position="last",
inplace=True,
)
df["highly_variable"] = False
df.loc[:n_top_genes, "highly_variable"] = True
df = df.loc[adata.var_names]
adata.var["highly_variable"] = df["highly_variable"].values
if batch_key is not None:
batches = adata.obs[batch_key].cat.categories
adata.var["highly_variable_nbatches"] = df["highly_variable_nbatches"].values
adata.var["highly_variable_intersection"] = df[
"highly_variable_nbatches"
] == len(batches)
adata.var["highly_variable_median_rank"] = df["highly_variable_median_rank"].values
|
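The "argsort twice gives ranks" trick used above, shown on a small array: the second argsort turns sorting positions into 0-based ranks.

import numpy as np

x = np.array([0.3, 1.2, 0.1, 0.7])
ranks = np.argsort(np.argsort(x))
print(ranks)  # [1 3 0 2]: 0.1 gets rank 0, 1.2 gets rank 3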
11,807 |
def logical_and(image1, image2):
"""Logical AND between two images. At least one of the images must be "1"
mode.
.. code-block:: python
out = ((image1 and image2) % MAX)
:rtype: :py:class:`~PIL.Image.Image`
"""
image1.load()
image2.load()
return image1._new(image1.im.chop_and(image2.im))
|
def logical_and(image1, image2):
"""Logical AND between two images. At least one of the images must have
mode.
.. code-block:: python
out = ((image1 and image2) % MAX)
:rtype: :py:class:`~PIL.Image.Image`
"""
image1.load()
image2.load()
return image1._new(image1.im.chop_and(image2.im))
|
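A minimal usage sketch, assuming Pillow is installed, going through the public ImageChops wrapper around the function above:

from PIL import Image, ImageChops

a = Image.new("1", (4, 4), 1)   # all white
b = Image.new("1", (4, 4), 0)   # all black
out = ImageChops.logical_and(a, b)
print(out.getextrema())          # (0, 0): white AND black is black everywhere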
53,801 |
def initialize_kolibri_plugin(plugin_name, initialize_hooks=True):
"""
Try to load kolibri_plugin from given plugin module identifier
In so doing, it will instantiate the KolibriPlugin object if it
exists, and also register any hooks found in the module.
Use the initialize_hooks argument to just retrieve the kolibri plugin without registering
its hooks.
:returns: the KolibriPlugin object for the module
"""
was_configured = django_settings.configured
# First import the bare plugin name to see if it exists
# This will raise an exception if not
_import_python_module(plugin_name)
try:
# Exceptions are expected to be thrown from here.
plugin_module = importlib.import_module(plugin_name + ".kolibri_plugin")
if not was_configured and django_settings.configured:
raise PluginLoadsApp(
"Importing plugin module {} caused Django settings to be configured".format(
plugin_name
)
)
logger.debug("Loaded kolibri plugin: {}".format(plugin_name))
# If no exception is thrown, use this to find the plugin class.
# Load a list of all class types in module
# Filter the list to only match the ones that belong to the module
# and not the ones that have been imported
plugin_package = (
plugin_module.__package__
if plugin_module.__package__
else plugin_module.__name__.rpartition(".")[0]
)
def is_plugin_module(x):
return (
hasattr(x, "__module__")
and plugin_package + ".kolibri_plugin" == x.__module__
)
all_classes = [
cls
for cls in plugin_module.__dict__.values()
if is_plugin_module(cls) and isinstance(cls, type)
]
return initialize_plugins_and_hooks(
all_classes, plugin_name, initialize_hooks=initialize_hooks
)
except ImportError as e:
# Python 2: message, Python 3: msg
exc_message = getattr(e, "message", getattr(e, "msg", None))
if exc_message.startswith("No module named"):
msg = (
"Plugin '{}' exists but does not have an importable kolibri_plugin module"
).format(plugin_name)
raise PluginDoesNotExist(msg)
else:
raise
except AppRegistryNotReady:
msg = (
"Plugin '{}' loads the Django app registry, which it isn't "
"allowed to do while enabling or disabling itself."
).format(plugin_name)
raise PluginLoadsApp(msg)
|
def initialize_kolibri_plugin(plugin_name, initialize_hooks=True):
"""
Try to load kolibri_plugin from given plugin module identifier
In so doing, it will instantiate the KolibriPlugin object if it
exists, and also register any hooks found in the module.
Set the initialize_hooks argument to False to just retrieve the kolibri plugin without registering
its hooks.
:returns: the KolibriPlugin object for the module
"""
was_configured = django_settings.configured
# First import the bare plugin name to see if it exists
# This will raise an exception if not
_import_python_module(plugin_name)
try:
# Exceptions are expected to be thrown from here.
plugin_module = importlib.import_module(plugin_name + ".kolibri_plugin")
if not was_configured and django_settings.configured:
raise PluginLoadsApp(
"Importing plugin module {} caused Django settings to be configured".format(
plugin_name
)
)
logger.debug("Loaded kolibri plugin: {}".format(plugin_name))
# If no exception is thrown, use this to find the plugin class.
# Load a list of all class types in module
# Filter the list to only match the ones that belong to the module
# and not the ones that have been imported
plugin_package = (
plugin_module.__package__
if plugin_module.__package__
else plugin_module.__name__.rpartition(".")[0]
)
def is_plugin_module(x):
return (
hasattr(x, "__module__")
and plugin_package + ".kolibri_plugin" == x.__module__
)
all_classes = [
cls
for cls in plugin_module.__dict__.values()
if is_plugin_module(cls) and isinstance(cls, type)
]
return initialize_plugins_and_hooks(
all_classes, plugin_name, initialize_hooks=initialize_hooks
)
except ImportError as e:
# Python 2: message, Python 3: msg
exc_message = getattr(e, "message", getattr(e, "msg", None))
if exc_message.startswith("No module named"):
msg = (
"Plugin '{}' exists but does not have an importable kolibri_plugin module"
).format(plugin_name)
raise PluginDoesNotExist(msg)
else:
raise
except AppRegistryNotReady:
msg = (
"Plugin '{}' loads the Django app registry, which it isn't "
"allowed to do while enabling or disabling itself."
).format(plugin_name)
raise PluginLoadsApp(msg)
|
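A generic sketch of the discovery step above: import "<plugin>.kolibri_plugin" dynamically and keep only the classes defined in that module rather than re-exported ones ("my_plugin" is a hypothetical package name used purely for illustration):

import importlib

def load_plugin_classes(plugin_name):
    # Import the plugin's kolibri_plugin submodule and filter out imported classes.
    module = importlib.import_module(plugin_name + ".kolibri_plugin")
    package = module.__package__ or module.__name__.rpartition(".")[0]
    return [
        obj for obj in vars(module).values()
        if isinstance(obj, type)
        and getattr(obj, "__module__", None) == package + ".kolibri_plugin"
    ]

# classes = load_plugin_classes("my_plugin")  # raises ImportError if the module is absent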
40,338 |
def homophily(edge_index: Adj, y: Tensor, batch: OptTensor = None,
method: str = 'edge') -> Union[float, Tensor]:
r"""The homophily of a graph characterizes how likely nodes with the same
label are near each other in a graph.
There are many measures of homophily that fit this definition.
In particular:
- In the `"Beyond Homophily in Graph Neural Networks: Current Limitations
and Effective Designs" <https://arxiv.org/abs/2006.11468>`_ paper, the
homophily is the fraction of edges in a graph which connect nodes
that have the same class label:
.. math::
\text{homophily} = \frac{| \{ (v,w) : (v,w) \in \mathcal{E} \wedge
y_v = y_w \} | } {|\mathcal{E}|}
That measure is called the *edge homophily ratio*.
- In the `"Geom-GCN: Geometric Graph Convolutional Networks"
<https://arxiv.org/abs/2002.05287>`_ paper, edge homophily is normalized
across neighborhoods:
.. math::
\text{homophily} = \frac{1}{|\mathcal{V}|} \sum_{v \in \mathcal{V}}
\frac{ | \{ (w,v) : w \in \mathcal{N}(v) \wedge y_v = y_w \} | }
{ |\mathcal{N}(v)| }
That measure is called the *node homophily ratio*.
- In the "Large-scale learning on non-homophilous graphs: \
New benchmarks and strong simple methods" paper, the class insensitive
homophily metric better captures the presence or absence of homophily:
.. math::
\text{homophily} = \frac{1}{C-1}\sum_{k=0}^{C-1}\begin{bmatrix}
{h_k - \frac{\lvert C_k \rvert}{n}}\end{bmatrix}_+,
.. math::
h_k = \frac{\sum_{u \in C_k}d_u^{(k_u)}}{\sum_{u \in C_k}d_u}
That measure is called the *class insensitive edge homophily ratio*
Args:
edge_index (Tensor or SparseTensor): The graph connectivity.
y (Tensor): The labels.
batch (LongTensor, optional): Batch vector
:math:`\mathbf{b} \in {\{ 0, \ldots,B-1\}}^N`, which assigns
each node to a specific example. (default: :obj:`None`)
method (str, optional): The method used to calculate the homophily,
either :obj:`"edge"` (first formula), :obj:`"node"`
(second formula) or `"edge-insensitive"`. (default: :obj:`"edge"`)
"""
assert method in ['edge', 'node', 'edge-insensitive']
y = y.squeeze(-1) if y.dim() > 1 else y
if isinstance(edge_index, SparseTensor):
col, row, _ = edge_index.coo()
else:
row, col = edge_index
if method == 'edge':
out = torch.zeros(row.size(0), device=row.device)
out[y[row] == y[col]] = 1.
if batch is None:
return float(out.mean())
else:
return scatter_mean(out, batch[col], dim=0)
elif method == 'node':
out = torch.zeros(row.size(0), device=row.device)
out[y[row] == y[col]] = 1.
out = scatter_mean(out, col, 0, dim_size=y.size(0))
if batch is None:
return float(out.mean())
else:
return scatter_mean(out, batch, dim=0)
else:
c = y.squeeze().max() + 1
nonzero_labels = y[y >= 0]
counts = nonzero_labels.unique(return_counts=True)[1]
proportions = counts.float() / nonzero_labels.shape[0]
h = homophily(edge_index, y, batch=y, method='edge')
out = 0
for k in range(c):
class_add = torch.clamp(h[k] - proportions[k], min=0)
if not torch.isnan(class_add):
out += class_add
out /= c - 1
return out
|
def homophily(edge_index: Adj, y: Tensor, batch: OptTensor = None,
method: str = 'edge') -> Union[float, Tensor]:
r"""The homophily of a graph characterizes how likely nodes with the same
label are near each other in a graph.
There are many measures of homophily that fit this definition.
In particular:
- In the `"Beyond Homophily in Graph Neural Networks: Current Limitations
and Effective Designs" <https://arxiv.org/abs/2006.11468>`_ paper, the
homophily is the fraction of edges in a graph which connect nodes
that have the same class label:
.. math::
\text{homophily} = \frac{| \{ (v,w) : (v,w) \in \mathcal{E} \wedge
y_v = y_w \} | } {|\mathcal{E}|}
That measure is called the *edge homophily ratio*.
- In the `"Geom-GCN: Geometric Graph Convolutional Networks"
<https://arxiv.org/abs/2002.05287>`_ paper, edge homophily is normalized
across neighborhoods:
.. math::
\text{homophily} = \frac{1}{|\mathcal{V}|} \sum_{v \in \mathcal{V}}
\frac{ | \{ (w,v) : w \in \mathcal{N}(v) \wedge y_v = y_w \} | }
{ |\mathcal{N}(v)| }
That measure is called the *node homophily ratio*.
- In the "Large-scale learning on non-homophilous graphs: \
New benchmarks and strong simple methods" paper, the class insensitive
homophily metric better captures the presence or absence of homophily:
.. math::
\text{homophily} = \frac{1}{C-1}\sum_{k=0}^{C-1}\begin{bmatrix}
{h_k - \frac{\lvert C_k \rvert}{n}}\end{bmatrix}_+,
.. math::
h_k = \frac{\sum_{u \in C_k}d_u^{(k_u)}}{\sum_{u \in C_k}d_u}
That measure is called the *class insensitive edge homophily ratio*
Args:
edge_index (Tensor or SparseTensor): The graph connectivity.
y (Tensor): The labels.
batch (LongTensor, optional): Batch vector
:math:`\mathbf{b} \in {\{ 0, \ldots,B-1\}}^N`, which assigns
each node to a specific example. (default: :obj:`None`)
method (str, optional): The method used to calculate the homophily,
either :obj:`"edge"` (first formula), :obj:`"node"`
(second formula) or :obj:`"edge-insensitive"` (third formula). (default: :obj:`"edge"`)
"""
assert method in ['edge', 'node', 'edge-insensitive']
y = y.squeeze(-1) if y.dim() > 1 else y
if isinstance(edge_index, SparseTensor):
col, row, _ = edge_index.coo()
else:
row, col = edge_index
if method == 'edge':
out = torch.zeros(row.size(0), device=row.device)
out[y[row] == y[col]] = 1.
if batch is None:
return float(out.mean())
else:
return scatter_mean(out, batch[col], dim=0)
elif method == 'node':
out = torch.zeros(row.size(0), device=row.device)
out[y[row] == y[col]] = 1.
out = scatter_mean(out, col, 0, dim_size=y.size(0))
if batch is None:
return float(out.mean())
else:
return scatter_mean(out, batch, dim=0)
else:
c = y.squeeze().max() + 1
nonzero_labels = y[y >= 0]
counts = nonzero_labels.unique(return_counts=True)[1]
proportions = counts.float() / nonzero_labels.shape[0]
h = homophily(edge_index, y, batch=y, method='edge')
out = 0
for k in range(c):
class_add = torch.clamp(h[k] - proportions[k], min=0)
if not torch.isnan(class_add):
out += class_add
out /= c - 1
return out
|
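A minimal usage sketch for the homophily measures above. It assumes a PyTorch Geometric installation where this function is importable from torch_geometric.utils (the import path is an assumption, not part of the row above); only the 'edge' and 'node' methods are shown.
# Illustrative only: tiny 4-node graph with labels [0, 0, 1, 1].
import torch
from torch_geometric.utils import homophily  # assumed import path

edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 3]])       # edges 0->1, 1->2, 2->3
y = torch.tensor([0, 0, 1, 1])

print(homophily(edge_index, y, method='edge'))  # 2 of 3 edges join same-label nodes -> ~0.667
print(homophily(edge_index, y, method='node'))  # per-node average; 0.5 here (node 0 has no incoming edge)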
25,570 |
def channel_deposit_with_the_same_node_and_token_network(deposit_queue: JoinableQueue) -> None:
"""Because of how the ERC20 standard is defined, two concurrent approve
calls overwrite each other.
Additionally, to prevent a node from trying to deposit more tokens than it
has, and by consequence sending an unnecessary transaction, a lock is used.
(e.g.: When two transactions that are individually valid, but together use
more than the account's balance). This has the side effect of forbiding
concurrent deposits on the same token network. (Issue #5447)
"""
while True:
deposit = deposit_queue.get()
channel = channel_details(deposit.endpoint, deposit.token_address, deposit.partner)
if channel is None:
raise RuntimeError(f"Channel does not exist! {deposit}")
channel_deposit_if_necessary(channel, deposit)
|
def channel_deposit_with_the_same_node_and_token_network(deposit_queue: JoinableQueue) -> None:
"""Because of how the ERC20 standard is defined, two concurrent approve
calls overwrite each other.
Additionally, to prevent a node from trying to deposit more tokens than it
has, and by consequence sending an unnecessary transaction, a lock is used.
(e.g.: When two transactions that are individually valid, but together use
more than the account's balance). This has the side effect of forbidding
concurrent deposits on the same token network. (Issue #5447)
"""
while True:
deposit = deposit_queue.get()
channel = channel_details(deposit.endpoint, deposit.token_address, deposit.partner)
if channel is None:
raise RuntimeError(f"Channel does not exist! {deposit}")
channel_deposit_if_necessary(channel, deposit)
|
41,046 |
def _fetch_cambridge_functional(n_subjects, data_dir, url, resume,
verbose):
"""Helper function to fetch_cambridge.
This function helps in downloading multi-echo functional MRI data
for each subject in the Cambridge dataset.
Files are downloaded from Open Science Framework (OSF).
For more information on the data and its preprocessing, see:
https://osf.io/9wcb8/
Parameters
----------
n_subjects : int
The number of subjects to load. If None, all the subjects are
loaded. Total 88 subjects.
data_dir : str
Path of the data directory. Used to force data storage in a specified
location. If None is given, data are stored in home directory.
url : str
Override download URL. Used for test only (or if you setup a mirror of
the data).
resume : bool
Whether to resume download of a partly-downloaded file.
verbose : int
Defines the level of verbosity of the output.
Returns
-------
func : list of str (Nifti files)
Paths to functional MRI data (4D) for each subject.
"""
dataset_name = 'cambridge'
data_dir = _get_dataset_dir(dataset_name, data_dir=data_dir,
verbose=verbose)
if url is None:
# Download from the relevant OSF project, using hashes generated
# from the OSF API. Note the trailing slash. For more info, see:
# https://gist.github.com/emdupre/3cb4d564511d495ea6bf89c6a577da74
url = 'https://osf.io/download/{}/'
func = '{0}_task-rest_{1}_space-scanner_desc-partialPreproc_bold.nii.gz'
# The gzip contains unique download keys per Nifti file and confound
# pre-extracted from OSF. Required for downloading files.
package_directory = os.path.dirname(os.path.abspath(__file__))
dtype = [('participant_id', 'U12'), ('echo_id', 'U12'),
('key_bold', 'U24')]
names = ['participant_id', 'echo_id', 'key_b']
# csv file contains download information related to OpenScience(osf)
osf_data = csv_to_array(os.path.join(package_directory, "data",
"cambridge_echos.csv"),
skip_header=True, dtype=dtype, names=names)
funcs = []
participant_id, echo_id, uuid = zip(*osf_data)
participants = np.unique(participant_id)[:n_subjects]
for participant_id in participants:
this_osf_id = osf_data[osf_data['participant_id'] == participant_id]
participant_funcs = []
for entry in this_osf_id:
echo_id = entry['echo_id']
# Download bold images for each echo
func_url = url.format(entry['key_b'])
func_file = [(func.format(participant_id, echo_id),
func_url,
{'move': func.format(participant_id, echo_id)})]
path_to_func = _fetch_files(data_dir, func_file, resume=resume,
verbose=verbose)[0]
participant_funcs.append(path_to_func)
funcs.append[tuple(participant_funcs)]
return funcs
|
def _fetch_cambridge_functional(n_subjects, data_dir, url, resume,
verbose):
"""Helper function to fetch_cambridge.
This function helps in downloading multi-echo functional MRI data
for each subject in the Cambridge dataset.
Files are downloaded from Open Science Framework (OSF).
For more information on the data and its preprocessing, see:
https://osf.io/9wcb8/
Parameters
----------
n_subjects : int
The number of subjects to load. If None, all the subjects are
loaded. Total 88 subjects.
data_dir : str
Path of the data directory. Used to force data storage in a specified
location. If None is given, data are stored in home directory.
url : str
Override download URL. Used for test only (or if you setup a mirror of
the data).
resume : bool
Whether to resume download of a partly-downloaded file.
verbose : int
Defines the level of verbosity of the output.
Returns
-------
func : list of str (Nifti files)
Paths to functional MRI data (4D) for each subject.
"""
dataset_name = 'cambridge'
data_dir = _get_dataset_dir(dataset_name, data_dir=data_dir,
verbose=verbose)
if url is None:
# Download from the relevant OSF project, using hashes generated
# from the OSF API. Note the trailing slash. For more info, see:
# https://gist.github.com/emdupre/3cb4d564511d495ea6bf89c6a577da74
url = 'https://osf.io/download/{}/'
func = '{0}_task-rest_{1}_space-scanner_desc-partialPreproc_bold.nii.gz'
# The gzip contains unique download keys per Nifti file and confound
# pre-extracted from OSF. Required for downloading files.
package_directory = os.path.dirname(os.path.abspath(__file__))
dtype = [('participant_id', 'U12'), ('echo_id', 'U12'),
('key_bold', 'U24')]
names = ['participant_id', 'echo_id', 'key_b']
# csv file contains download information related to OpenScience(osf)
osf_data = csv_to_array(os.path.join(package_directory, "data",
"cambridge_echos.csv"),
skip_header=True, dtype=dtype, names=names)
funcs = []
participant_id, echo_id, uuid = zip(*osf_data)
participants = np.unique(participant_id)[:n_subjects]
for participant_id in participants:
this_osf_id = osf_data[osf_data['participant_id'] == participant_id]
participant_funcs = []
for entry in this_osf_id:
echo_id = entry['echo_id']
# Download bold images for each echo
func_url = url.format(entry['key_b'])
func_file = [(func.format(participant_id, echo_id),
func_url,
{'move': func.format(participant_id, echo_id)})]
path_to_func = _fetch_files(data_dir, func_file, resume=resume,
verbose=verbose)[0]
participant_funcs.append(path_to_func)
funcs.append(tuple(participant_funcs))
return funcs
|
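A standalone sketch of how the URL and filename templates above are combined for a single CSV entry; the participant, echo and key values below are hypothetical and do not correspond to real OSF download keys.
# Illustration of the template logic only; no download is performed.
url_template = 'https://osf.io/download/{}/'
func_template = '{0}_task-rest_{1}_space-scanner_desc-partialPreproc_bold.nii.gz'

entry = {'participant_id': 'sub-01', 'echo_id': 'echo-1', 'key_b': 'abc123'}  # hypothetical row

func_url = url_template.format(entry['key_b'])
func_name = func_template.format(entry['participant_id'], entry['echo_id'])

print(func_url)   # https://osf.io/download/abc123/
print(func_name)  # sub-01_task-rest_echo-1_space-scanner_desc-partialPreproc_bold.nii.gz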
14,333 |
def test(glyphsets, glyphs=None, names=None):
if names is None:
names = glyphsets
if glyphs is None:
glyphs = glyphsets[0].keys()
hist = []
for glyph_name in glyphs:
#print()
#print(glyph_name)
try:
allVectors = []
allNodeTypes = []
for glyphset,name in zip(glyphsets, names):
#print('.', end='')
glyph = glyphset[glyph_name]
perContourPen = PerContourOrComponentPen(RecordingPen, glyphset=glyphset)
glyph.draw(perContourPen)
contourPens = perContourPen.value
del perContourPen
contourVectors = []
nodeTypes = []
allNodeTypes.append(nodeTypes)
allVectors.append(contourVectors)
for contour in contourPens:
nodeTypes.append(tuple([ instruction[0] for instruction in contour.value ]))
stats = StatisticsPen(glyphset=glyphset)
contour.replay(stats)
size = abs(stats.area) ** .5 * .5
vector = (
int(size),
int(stats.meanX),
int(stats.meanY),
int(stats.stddevX * 2),
int(stats.stddevY * 2),
int(stats.correlation * size),
)
contourVectors.append(vector)
#print(vector)
# Check each master against the next one in the list.
for i,(m0,m1) in enumerate(zip(allNodeTypes[:-1],allNodeTypes[1:])):
if len(m0) != len(m1):
print('%s: %s+%s: Glyphs not compatible (wrong number of paths %i+%i)!!!!!' % (glyph_name, names[i], names[i+1], len(m0), len(m1)))
if m0 == m1:
continue
for pathIx, (nodes1, nodes2) in enumerate(zip(m0,m1)):
if nodes1 == nodes2:
continue
print('%s: %s+%s: Glyphs not compatible at path %i!!!!!' % (glyph_name, names[i], names[i+1], pathIx))
if len(nodes1) != len(nodes2):
print("%s has %i nodes, %s has %i nodes" % (names[i], len(nodes1), names[i+1], len(nodes2)))
continue
for nodeIx, (n1, n2) in enumerate(zip(nodes1, nodes2)):
if n1 != n2:
print("At node %i, %s has %s, %s has %s" % (nodeIx, names[i], n1, names[i+1], n2))
continue
for i,(m0,m1) in enumerate(zip(allVectors[:-1],allVectors[1:])):
if len(m0) != len(m1):
print('%s: %s+%s: Glyphs not compatible!!!!!' % (glyph_name, names[i], names[i+1]))
continue
if not m0:
continue
costs = [[_vlen(_vdiff(v0,v1)) for v1 in m1] for v0 in m0]
matching, matching_cost = min_cost_perfect_bipartite_matching(costs)
if matching != list(range(len(m0))):
print('%s: %s+%s: Glyph has wrong contour/component order: %s' % (glyph_name, names[i], names[i+1], matching)) #, m0, m1)
break
upem = 2048
item_cost = round((matching_cost / len(m0) / len(m0[0])) ** .5 / upem * 100)
hist.append(item_cost)
threshold = 7
if item_cost >= threshold:
print('%s: %s+%s: Glyph has very high cost: %d%%' % (glyph_name, names[i], names[i+1], item_cost))
except ValueError as e:
print('%s: %s: math error %s; skipping glyph.' % (glyph_name, name, e))
print(contour.value)
#raise
|
def test(glyphsets, glyphs=None, names=None):
if names is None:
names = glyphsets
if glyphs is None:
glyphs = glyphsets[0].keys()
hist = []
for glyph_name in glyphs:
#print()
#print(glyph_name)
try:
allVectors = []
allNodeTypes = []
for glyphset,name in zip(glyphsets, names):
#print('.', end='')
glyph = glyphset[glyph_name]
perContourPen = PerContourOrComponentPen(RecordingPen, glyphset=glyphset)
glyph.draw(perContourPen)
contourPens = perContourPen.value
del perContourPen
contourVectors = []
nodeTypes = []
allNodeTypes.append(nodeTypes)
allVectors.append(contourVectors)
for contour in contourPens:
nodeTypes.append(tuple(instruction[0] for instruction in contour.value))
stats = StatisticsPen(glyphset=glyphset)
contour.replay(stats)
size = abs(stats.area) ** .5 * .5
vector = (
int(size),
int(stats.meanX),
int(stats.meanY),
int(stats.stddevX * 2),
int(stats.stddevY * 2),
int(stats.correlation * size),
)
contourVectors.append(vector)
#print(vector)
# Check each master against the next one in the list.
for i,(m0,m1) in enumerate(zip(allNodeTypes[:-1],allNodeTypes[1:])):
if len(m0) != len(m1):
print('%s: %s+%s: Glyphs not compatible (wrong number of paths %i+%i)!!!!!' % (glyph_name, names[i], names[i+1], len(m0), len(m1)))
if m0 == m1:
continue
for pathIx, (nodes1, nodes2) in enumerate(zip(m0,m1)):
if nodes1 == nodes2:
continue
print('%s: %s+%s: Glyphs not compatible at path %i!!!!!' % (glyph_name, names[i], names[i+1], pathIx))
if len(nodes1) != len(nodes2):
print("%s has %i nodes, %s has %i nodes" % (names[i], len(nodes1), names[i+1], len(nodes2)))
continue
for nodeIx, (n1, n2) in enumerate(zip(nodes1, nodes2)):
if n1 != n2:
print("At node %i, %s has %s, %s has %s" % (nodeIx, names[i], n1, names[i+1], n2))
continue
for i,(m0,m1) in enumerate(zip(allVectors[:-1],allVectors[1:])):
if len(m0) != len(m1):
print('%s: %s+%s: Glyphs not compatible!!!!!' % (glyph_name, names[i], names[i+1]))
continue
if not m0:
continue
costs = [[_vlen(_vdiff(v0,v1)) for v1 in m1] for v0 in m0]
matching, matching_cost = min_cost_perfect_bipartite_matching(costs)
if matching != list(range(len(m0))):
print('%s: %s+%s: Glyph has wrong contour/component order: %s' % (glyph_name, names[i], names[i+1], matching)) #, m0, m1)
break
upem = 2048
item_cost = round((matching_cost / len(m0) / len(m0[0])) ** .5 / upem * 100)
hist.append(item_cost)
threshold = 7
if item_cost >= threshold:
print('%s: %s+%s: Glyph has very high cost: %d%%' % (glyph_name, names[i], names[i+1], item_cost))
except ValueError as e:
print('%s: %s: math error %s; skipping glyph.' % (glyph_name, name, e))
print(contour.value)
#raise
|
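The contour-matching step above relies on a min_cost_perfect_bipartite_matching helper that is not shown in this row. A rough stand-in, assuming SciPy's Hungarian-algorithm solver behaves equivalently (an assumption about that helper, not its actual implementation), could look like this:
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_cost_perfect_bipartite_matching_sketch(costs):
    costs = np.asarray(costs)
    rows, cols = linear_sum_assignment(costs)       # optimal row -> column assignment
    matching = list(cols)                           # matching[i] = column matched to row i
    matching_cost = float(costs[rows, cols].sum())  # total cost of that assignment
    return matching, matching_cost

# Two contours per master: the identity matching (0 + 1) beats the swap (5 + 4).
costs = [[0.0, 5.0],
         [4.0, 1.0]]
print(min_cost_perfect_bipartite_matching_sketch(costs))  # ([0, 1], 1.0)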
53,833 |
def _resolve_api_identifier_to_app_id(graph_client, api_identifier):
"""Resolves an API identifier to an app ID
The identifier can be an object ID or service principal name for an existing service principal
object. If no matching service principal object is found but the identifier is a Guid, it is
assumed to be the app ID of an API which is not instantiated in the tenant.
"""
try:
resource_sp = show_service_principal(graph_client.service_principals, api_identifier)
return resource_sp.app_id
except CLIError:
if _is_guid(api_identifier):
return api_identifier
raise CLIError("No API was found for identifier '{}'.".format(api_identifier))
|
def _resolve_api_identifier_to_app_id(graph_client, api_identifier):
"""Resolves an API identifier to an app ID
The identifier can be an object ID or service principal name, like `https://graph.microsoft.com`
for an existing service principal object.
If no matching service principal object is found but the identifier is a Guid, it is
assumed to be the app ID of an API which is not instantiated in the tenant.
"""
try:
resource_sp = show_service_principal(graph_client.service_principals, api_identifier)
return resource_sp.app_id
except CLIError:
if _is_guid(api_identifier):
return api_identifier
raise CLIError("No API was found for identifier '{}'.".format(api_identifier))
|
31,914 |
def list_user_policies(args, aws_client):
client = aws_client.aws_session(
service=SERVICE,
role_arn=args.get('roleArn'),
role_session_name=args.get('roleSessionName'),
role_session_duration=args.get('roleSessionDuration'),
)
user_name = args.get('userName', "")
marker = args.get('marker', None)
limit, is_manual, page_size = get_limit(args)
kwargs = {
'UserName': user_name,
'MaxItems': limit
}
if marker:
kwargs.update({'Marker': marker})
response = client.list_user_policies(**kwargs)
data = response.get('PolicyNames', [])
marker = response.get('Marker', None)
if is_manual and page_size and len(data) > page_size:
data = data[-1 * args.get('page_size'):]
policy_data = []
for policy in data:
policy_data.append({
'UserName': user_name,
'PolicyName': policy,
})
ec = {'AWS.IAM.UserPolicies(val.PolicyName && val.UserName && val.PolicyName === obj.PolicyName && '
'val.UserName === obj.UserName)': policy_data,
'AWS.IAM.Users(val.UserName === \'{}\').InlinePoliciesMarker'.format(user_name): marker}
human_readable = tableToMarkdown('AWS IAM Policies for user {}'.format(user_name),
headers=["PolicyNames"],
headerTransform=pascalToSpace,
t=data)
return_outputs(human_readable, ec)
|
def list_user_policies(args, aws_client):
client = aws_client.aws_session(
service=SERVICE,
role_arn=args.get('roleArn'),
role_session_name=args.get('roleSessionName'),
role_session_duration=args.get('roleSessionDuration'),
)
user_name = args.get('userName', "")
marker = args.get('marker', None)
limit, is_manual, page_size = get_limit(args)
kwargs = {
'UserName': user_name,
'MaxItems': limit
}
if marker:
kwargs.update({'Marker': marker})
response = client.list_user_policies(**kwargs)
data = response.get('PolicyNames', [])
marker = response.get('Marker', None)
if is_manual and page_size and len(data) > page_size:
data = data[-1 * args.get('page_size'):]
policy_data = []
for policy in data:
policy_data.append({
'UserName': user_name,
'PolicyName': policy,
})
ec = {'AWS.IAM.UserPolicies(val.PolicyName && val.UserName && val.PolicyName === obj.PolicyName && '
'val.UserName === obj.UserName)': policy_data,
'AWS.IAM.Users(val.UserName === \'{}\').InlinePoliciesMarker'.format(user_name): marker}
human_readable = tableToMarkdown('AWS IAM Policies for user {}'.format(user_name),
headers=["PolicyNames"],
headerTransform=pascalToSpace,
t=data)
return_outputs(human_readable, ec, response)
|
23,567 |
def initialize_scheduler():
"""
Start the scheduled background tasks. Re-schedule if interval settings changed.
"""
with SCHED_LOCK:
# Check if scheduler should be started
start_jobs = not len(SCHED.get_jobs())
# Update check
github_minutes = CONFIG.CHECK_GITHUB_INTERVAL if CONFIG.CHECK_GITHUB_INTERVAL and CONFIG.CHECK_GITHUB else 0
pms_update_notify_hours = CONFIG.PMS_UPDATE_NOTIFY_INTERVAL if 1 <= CONFIG.PMS_UPDATE_NOTIFY_INTERVAL <= 999 else 24
schedule_job(versioncheck.check_update, 'Check GitHub for updates',
hours=0, minutes=github_minutes, seconds=0, args=(bool(CONFIG.PLEXPY_AUTO_UPDATE), True))
backup_hours = CONFIG.BACKUP_INTERVAL if 1 <= CONFIG.BACKUP_INTERVAL <= 24 else 6
schedule_job(database.make_backup, 'Backup Tautulli database',
hours=backup_hours, minutes=0, seconds=0, args=(True, True))
schedule_job(config.make_backup, 'Backup Tautulli config',
hours=backup_hours, minutes=0, seconds=0, args=(True, True))
if WS_CONNECTED and CONFIG.PMS_IP and CONFIG.PMS_TOKEN:
schedule_job(plextv.get_server_resources, 'Refresh Plex server URLs',
hours=12 * (not bool(CONFIG.PMS_URL_MANUAL)), minutes=0, seconds=0)
schedule_job(activity_pinger.check_server_access, 'Check for Plex remote access',
hours=0, minutes=0, seconds=60 * bool(CONFIG.MONITOR_REMOTE_ACCESS))
schedule_job(activity_pinger.check_server_updates, 'Check for Plex updates',
hours=pms_update_notify_hours * bool(CONFIG.MONITOR_PMS_UPDATES), minutes=0, seconds=0)
# Refresh the users list and libraries list
user_hours = CONFIG.REFRESH_USERS_INTERVAL if 1 <= CONFIG.REFRESH_USERS_INTERVAL <= 24 else 12
library_hours = CONFIG.REFRESH_LIBRARIES_INTERVAL if 1 <= CONFIG.REFRESH_LIBRARIES_INTERVAL <= 24 else 12
schedule_job(users.refresh_users, 'Refresh users list',
hours=user_hours, minutes=0, seconds=0)
schedule_job(libraries.refresh_libraries, 'Refresh libraries list',
hours=library_hours, minutes=0, seconds=0)
schedule_job(activity_pinger.connect_server, 'Check for server response',
hours=0, minutes=0, seconds=0)
schedule_job(web_socket.send_ping, 'Websocket ping',
hours=0, minutes=0, seconds=10 * bool(CONFIG.WEBSOCKET_MONITOR_PING_PONG))
else:
# Cancel all jobs
schedule_job(plextv.get_server_resources, 'Refresh Plex server URLs',
hours=0, minutes=0, seconds=0)
schedule_job(activity_pinger.check_server_access, 'Check for Plex remote access',
hours=0, minutes=0, seconds=0)
schedule_job(activity_pinger.check_server_updates, 'Check for Plex updates',
hours=0, minutes=0, seconds=0)
schedule_job(users.refresh_users, 'Refresh users list',
hours=0, minutes=0, seconds=0)
schedule_job(libraries.refresh_libraries, 'Refresh libraries list',
hours=0, minutes=0, seconds=0)
# Schedule job to reconnect server
schedule_job(activity_pinger.connect_server, 'Check for server response',
hours=0, minutes=0, seconds=60, args=(False,))
schedule_job(web_socket.send_ping, 'Websocket ping',
hours=0, minutes=0, seconds=0)
# Start scheduler
if start_jobs and len(SCHED.get_jobs()):
try:
SCHED.start()
except Exception as e:
logger.error(e)
|
def initialize_scheduler():
"""
Start the scheduled background tasks. Re-schedule if interval settings changed.
"""
with SCHED_LOCK:
# Check if scheduler should be started
start_jobs = not len(SCHED.get_jobs())
# Update check
github_minutes = CONFIG.CHECK_GITHUB_INTERVAL if CONFIG.CHECK_GITHUB_INTERVAL and CONFIG.CHECK_GITHUB else 0
pms_update_check_hours = CONFIG.PMS_UPDATE_CHECK_INTERVAL if 1 <= CONFIG.PMS_UPDATE_CHECK_INTERVAL else 24
schedule_job(versioncheck.check_update, 'Check GitHub for updates',
hours=0, minutes=github_minutes, seconds=0, args=(bool(CONFIG.PLEXPY_AUTO_UPDATE), True))
backup_hours = CONFIG.BACKUP_INTERVAL if 1 <= CONFIG.BACKUP_INTERVAL <= 24 else 6
schedule_job(database.make_backup, 'Backup Tautulli database',
hours=backup_hours, minutes=0, seconds=0, args=(True, True))
schedule_job(config.make_backup, 'Backup Tautulli config',
hours=backup_hours, minutes=0, seconds=0, args=(True, True))
if WS_CONNECTED and CONFIG.PMS_IP and CONFIG.PMS_TOKEN:
schedule_job(plextv.get_server_resources, 'Refresh Plex server URLs',
hours=12 * (not bool(CONFIG.PMS_URL_MANUAL)), minutes=0, seconds=0)
schedule_job(activity_pinger.check_server_access, 'Check for Plex remote access',
hours=0, minutes=0, seconds=60 * bool(CONFIG.MONITOR_REMOTE_ACCESS))
schedule_job(activity_pinger.check_server_updates, 'Check for Plex updates',
                     hours=pms_update_check_hours * bool(CONFIG.MONITOR_PMS_UPDATES), minutes=0, seconds=0)
# Refresh the users list and libraries list
user_hours = CONFIG.REFRESH_USERS_INTERVAL if 1 <= CONFIG.REFRESH_USERS_INTERVAL <= 24 else 12
library_hours = CONFIG.REFRESH_LIBRARIES_INTERVAL if 1 <= CONFIG.REFRESH_LIBRARIES_INTERVAL <= 24 else 12
schedule_job(users.refresh_users, 'Refresh users list',
hours=user_hours, minutes=0, seconds=0)
schedule_job(libraries.refresh_libraries, 'Refresh libraries list',
hours=library_hours, minutes=0, seconds=0)
schedule_job(activity_pinger.connect_server, 'Check for server response',
hours=0, minutes=0, seconds=0)
schedule_job(web_socket.send_ping, 'Websocket ping',
hours=0, minutes=0, seconds=10 * bool(CONFIG.WEBSOCKET_MONITOR_PING_PONG))
else:
# Cancel all jobs
schedule_job(plextv.get_server_resources, 'Refresh Plex server URLs',
hours=0, minutes=0, seconds=0)
schedule_job(activity_pinger.check_server_access, 'Check for Plex remote access',
hours=0, minutes=0, seconds=0)
schedule_job(activity_pinger.check_server_updates, 'Check for Plex updates',
hours=0, minutes=0, seconds=0)
schedule_job(users.refresh_users, 'Refresh users list',
hours=0, minutes=0, seconds=0)
schedule_job(libraries.refresh_libraries, 'Refresh libraries list',
hours=0, minutes=0, seconds=0)
# Schedule job to reconnect server
schedule_job(activity_pinger.connect_server, 'Check for server response',
hours=0, minutes=0, seconds=60, args=(False,))
schedule_job(web_socket.send_ping, 'Websocket ping',
hours=0, minutes=0, seconds=0)
# Start scheduler
if start_jobs and len(SCHED.get_jobs()):
try:
SCHED.start()
except Exception as e:
logger.error(e)
|
45,631 |
def create_local_files(data, file_name):
"""Create local .JSON file and .CSV files."""
data.to_csv("{}.csv".format(file_name), encoding="utf-8", index=False)
data = data.to_dict(orient='records')
json_file = open("{}.json".format(file_name), 'w')
out = json.dumps(data)
json_file.write(out)
|
def create_local_files(data, file_name):
"""Create local JSON and CSV files."""
data.to_csv("{}.csv".format(file_name), encoding="utf-8", index=False)
data = data.to_dict(orient='records')
json_file = open("{}.json".format(file_name), 'w')
out = json.dumps(data)
json_file.write(out)
|
57,882 |
def invite_member(args):
try:
client = aws_session(
region=args.get('region'),
roleArn=args.get('roleArn'),
roleSessionName=args.get('roleSessionName'),
roleSessionDuration=args.get('roleSessionDuration'),
)
accountIds = []
accountIds.append(args.get('accountId'))
response = client.invite_members(
DetectorId=args.get('detectorId'),
AccountIds=accountIds
)
unprocessed_accounts = response.get('UnprocessedAccounts', [])
ec = {"AWS.GuardDuty.InviteMember.UnprocessedAccounts": unprocessed_accounts} \
if unprocessed_accounts else None
return create_entry('AWS GuardDuty Invite Member', unprocessed_accounts, ec)
except Exception as e:
return raise_error(e)
|
def invite_member(args):
try:
client = aws_session(
region=args.get('region'),
roleArn=args.get('roleArn'),
roleSessionName=args.get('roleSessionName'),
roleSessionDuration=args.get('roleSessionDuration'),
)
account_ids = argToList(args.get('accountId'))
response = client.invite_members(
DetectorId=args.get('detectorId'),
AccountIds=account_ids
)
unprocessed_accounts = response.get('UnprocessedAccounts', [])
ec = {"AWS.GuardDuty.InviteMember.UnprocessedAccounts": unprocessed_accounts} \
if unprocessed_accounts else None
return create_entry('AWS GuardDuty Invite Member', unprocessed_accounts, ec)
except Exception as e:
return raise_error(e)
|
41,483 |
def dedupe_parameters(parameters):
duplicates = {}
for p in parameters:
duplicates.setdefault(p['name'], []).append(p)
for pname in duplicates.keys():
parameter_list = duplicates[pname]
if len(parameter_list) == 1:
continue
elif any(p != parameter_list[0] for p in parameter_list[1:]):
for p in parameter_list:
log.warning(p)
raise RuntimeError(
'cannot import workspace due to incompatible parameter configurations for {0:s}.'.format(
pname
)
)
# no errors raised, de-dupe and return
return list({v['name']: v for v in parameters}.values())
|
def dedupe_parameters(parameters):
duplicates = {}
for p in parameters:
duplicates.setdefault(p['name'], []).append(p)
for pname in duplicates.keys():
parameter_list = duplicates[pname]
if len(parameter_list) == 1:
continue
elif any(p != parameter_list[0] for p in parameter_list[1:]):
for p in parameter_list:
log.warning(p)
raise RuntimeError(
'cannot import workspace due to incompatible parameter configurations for {0:s}.'.format(
                    pname
)
)
# no errors raised, de-dupe and return
return list({v['name']: v for v in parameters}.values())
|
23,842 |
def environment_wrap_command(env_filenames, env_folder, cmd, subsystem=None, accept=None):
if not env_filenames:
return cmd
filenames = [env_filenames] if not isinstance(env_filenames, list) else env_filenames
bats, shs, ps1s = [], [], []
accept = accept or ("ps1", "bat", "sh")
# TODO: This implemantation is dirty, improve it
for f in filenames:
f = f if os.path.isabs(f) else os.path.join(env_folder, f)
if f.lower().endswith(".sh"):
if os.path.isfile(f) and "sh" in accept:
f = subsystem_path(subsystem, f)
shs.append(f)
elif f.lower().endswith(".bat"):
if os.path.isfile(f) and "bat" in accept:
bats.append(f)
elif f.lower().endswith(".ps1") and "ps1" in accept:
if os.path.isfile(f):
ps1s.append(f)
else: # Simple name like "conanrunenv"
path_bat = "{}.bat".format(f)
path_sh = "{}.sh".format(f)
path_ps1 = "{}.ps1".format(f)
if os.path.isfile(path_bat) and "bat" in accept:
bats.append(path_bat)
if os.path.isfile(path_ps1) and "ps1" in accept:
ps1s.append(path_ps1)
if os.path.isfile(path_sh) and "sh" in accept:
path_sh = subsystem_path(subsystem, path_sh)
shs.append(path_sh)
if bool(bats) + bool(shs) + bool(ps1s) > 1:
raise ConanException("Cannot wrap command with different envs,"
" {} - {} - {}".format(bats, shs, ps1s))
if bats:
launchers = " && ".join('"{}"'.format(b) for b in bats)
return '{} && {}'.format(launchers, cmd)
elif shs:
launchers = " && ".join('. "{}"'.format(f) for f in shs)
return '{} && {}'.format(launchers, cmd)
elif ps1s:
# TODO: at the moment it only works with path without spaces
launchers = " ; ".join('{}'.format(f) for f in ps1s)
return 'powershell.exe {} ; cmd /c {}'.format(launchers, cmd)
else:
return cmd
|
def environment_wrap_command(env_filenames, env_folder, cmd, subsystem=None, accepted_extensions=None):
if not env_filenames:
return cmd
filenames = [env_filenames] if not isinstance(env_filenames, list) else env_filenames
bats, shs, ps1s = [], [], []
    accept = accepted_extensions or ("ps1", "bat", "sh")
# TODO: This implemantation is dirty, improve it
for f in filenames:
f = f if os.path.isabs(f) else os.path.join(env_folder, f)
if f.lower().endswith(".sh"):
if os.path.isfile(f) and "sh" in accept:
f = subsystem_path(subsystem, f)
shs.append(f)
elif f.lower().endswith(".bat"):
if os.path.isfile(f) and "bat" in accept:
bats.append(f)
elif f.lower().endswith(".ps1") and "ps1" in accept:
if os.path.isfile(f):
ps1s.append(f)
else: # Simple name like "conanrunenv"
path_bat = "{}.bat".format(f)
path_sh = "{}.sh".format(f)
path_ps1 = "{}.ps1".format(f)
if os.path.isfile(path_bat) and "bat" in accept:
bats.append(path_bat)
if os.path.isfile(path_ps1) and "ps1" in accept:
ps1s.append(path_ps1)
if os.path.isfile(path_sh) and "sh" in accept:
path_sh = subsystem_path(subsystem, path_sh)
shs.append(path_sh)
if bool(bats) + bool(shs) + bool(ps1s) > 1:
raise ConanException("Cannot wrap command with different envs,"
" {} - {} - {}".format(bats, shs, ps1s))
if bats:
launchers = " && ".join('"{}"'.format(b) for b in bats)
return '{} && {}'.format(launchers, cmd)
elif shs:
launchers = " && ".join('. "{}"'.format(f) for f in shs)
return '{} && {}'.format(launchers, cmd)
elif ps1s:
# TODO: at the moment it only works with path without spaces
launchers = " ; ".join('{}'.format(f) for f in ps1s)
return 'powershell.exe {} ; cmd /c {}'.format(launchers, cmd)
else:
return cmd
|
32,471 |
def handle_incoming_closing_incident(incident_data):
closing_entry = {} # type: Dict
if incident_data.get('status') in XDR_RESOLVED_STATUS_TO_XSOAR:
demisto.debug(f"Closing XDR issue {incident_data.get('incident_id')}")
closing_entry = {
'Type': EntryType.NOTE,
'Contents': {
'dbotIncidentClose': True,
'closeReason': XDR_RESOLVED_STATUS_TO_XSOAR.get(incident_data.get("status")),
'closeNotes': incident_data.get('resolve_comment')
},
'ContentsFormat': EntryFormat.JSON
}
incident_data['closeReason'] = XDR_RESOLVED_STATUS_TO_XSOAR.get(incident_data.get("status"))
incident_data['closeNotes'] = MIRROR_IN_CLOSE_REASON + f'\n{incident_data.get("resolve_comment")}'
if incident_data.get('status') == 'resolved_known_issue':
close_notes = 'Known Issue.\n' + incident_data.get('closeNotes', '')
closing_entry['Contents']['closeNotes'] = close_notes
incident_data['closeNotes'] = close_notes
return closing_entry
|
def handle_incoming_closing_incident(incident_data):
closing_entry = {} # type: Dict
if incident_data.get('status') in XDR_RESOLVED_STATUS_TO_XSOAR:
demisto.debug(f"Closing XDR issue {incident_data.get('incident_id')}")
closing_entry = {
'Type': EntryType.NOTE,
'Contents': {
'dbotIncidentClose': True,
'closeReason': XDR_RESOLVED_STATUS_TO_XSOAR.get(incident_data.get("status")),
'closeNotes': incident_data.get('resolve_comment')
},
'ContentsFormat': EntryFormat.JSON
}
incident_data['closeReason'] = XDR_RESOLVED_STATUS_TO_XSOAR.get(incident_data.get("status"))
        incident_data['closeNotes'] = f'{MIRROR_IN_CLOSE_REASON}\n{incident_data.get("resolve_comment")}'
if incident_data.get('status') == 'resolved_known_issue':
close_notes = 'Known Issue.\n' + incident_data.get('closeNotes', '')
closing_entry['Contents']['closeNotes'] = close_notes
incident_data['closeNotes'] = close_notes
return closing_entry
|
40,150 |
def _get_test_config_tuple(defaults: Dict = None) -> Tuple[Config, ConfigParser]:
"""Returns a tuple containing a `config.Config` instance and a `ConfigParser` instance.
Both instances are equivalent and the latter is legacy only.
The "docker-mount-base-dir" and "firmware-file-storage-directory" in the section "data-storage"
are created and must be cleaned up manually.
:arg defaults: Sections to overwrite
"""
config.load_config()
docker_mount_base_dir = create_docker_mount_base_dir()
firmware_file_storage_directory = Path(tempfile.mkdtemp())
# This dict must exactly match the one that a ConfigParser instance would
# read from the config file
sections = {
'data-storage': {
'postgres-server': 'localhost',
'postgres-port': '5432',
'postgres-database': 'fact_test',
'postgres-test-database': 'fact_test',
'postgres-ro-user': config.cfg.data_storage.postgres_ro_user,
'postgres-ro-pw': config.cfg.data_storage.postgres_ro_pw,
'postgres-rw-user': config.cfg.data_storage.postgres_rw_user,
'postgres-rw-pw': config.cfg.data_storage.postgres_rw_pw,
'postgres-del-user': config.cfg.data_storage.postgres_del_user,
'postgres-del-pw': config.cfg.data_storage.postgres_del_pw,
'postgres-admin-user': config.cfg.data_storage.postgres_del_user,
'postgres-admin-pw': config.cfg.data_storage.postgres_del_pw,
'redis-fact-db': config.cfg.data_storage.redis_test_db, # Note: This is unused in testing
'redis-test-db': config.cfg.data_storage.redis_test_db, # Note: This is unused in production
'redis-host': config.cfg.data_storage.redis_host,
'redis-port': config.cfg.data_storage.redis_port,
'firmware-file-storage-directory': str(firmware_file_storage_directory),
'user-database': 'sqlite:////media/data/fact_auth_data/fact_users.db',
'password-salt': '1234',
'structural-threshold': '40', # TODO
'temp-dir-path': '/tmp',
'docker-mount-base-dir': str(docker_mount_base_dir),
'variety-path': 'bin/variety.js',
},
'database': {
'ajax-stats-reload-time': '10000', # TODO
'number-of-latest-firmwares-to-display': '10',
'results-per-page': '10'
},
'default-plugins': {
'default': '',
'minimal': '',
},
'expert-settings': {
'authentication': 'false',
'block-delay': '0.1',
'communication-timeout': '60',
'intercom-poll-delay': '0.5',
'nginx': 'false',
'radare2-host': 'localhost',
'ssdeep-ignore': '1',
'throw-exceptions': 'false',
'unpack-threshold': '0.8',
'unpack_throttle_limit': '50'
},
'logging': {
'logfile': '/tmp/fact_main.log',
'loglevel': 'WARNING',
},
'unpack': {
'max-depth': '10',
'memory-limit': '2048',
'threads': '4',
'whitelist': ''
},
'statistics': {
'max_elements_per_chart': '10'
},
}
# Update recursively
for section_name in defaults if defaults else {}:
sections.setdefault(section_name, {}).update(defaults[section_name])
configparser_cfg = ConfigParser()
configparser_cfg.read_dict(sections)
config._parse_dict(sections)
cfg = Config(**sections)
return cfg, configparser_cfg
|
def _get_test_config_tuple(defaults: dict | None = None) -> tuple[Config, ConfigParser]:
"""Returns a tuple containing a `config.Config` instance and a `ConfigParser` instance.
Both instances are equivalent and the latter is legacy only.
The "docker-mount-base-dir" and "firmware-file-storage-directory" in the section "data-storage"
are created and must be cleaned up manually.
:arg defaults: Sections to overwrite
"""
config.load_config()
docker_mount_base_dir = create_docker_mount_base_dir()
firmware_file_storage_directory = Path(tempfile.mkdtemp())
# This dict must exactly match the one that a ConfigParser instance would
# read from the config file
sections = {
'data-storage': {
'postgres-server': 'localhost',
'postgres-port': '5432',
'postgres-database': 'fact_test',
'postgres-test-database': 'fact_test',
'postgres-ro-user': config.cfg.data_storage.postgres_ro_user,
'postgres-ro-pw': config.cfg.data_storage.postgres_ro_pw,
'postgres-rw-user': config.cfg.data_storage.postgres_rw_user,
'postgres-rw-pw': config.cfg.data_storage.postgres_rw_pw,
'postgres-del-user': config.cfg.data_storage.postgres_del_user,
'postgres-del-pw': config.cfg.data_storage.postgres_del_pw,
'postgres-admin-user': config.cfg.data_storage.postgres_del_user,
'postgres-admin-pw': config.cfg.data_storage.postgres_del_pw,
'redis-fact-db': config.cfg.data_storage.redis_test_db, # Note: This is unused in testing
'redis-test-db': config.cfg.data_storage.redis_test_db, # Note: This is unused in production
'redis-host': config.cfg.data_storage.redis_host,
'redis-port': config.cfg.data_storage.redis_port,
'firmware-file-storage-directory': str(firmware_file_storage_directory),
'user-database': 'sqlite:////media/data/fact_auth_data/fact_users.db',
'password-salt': '1234',
'structural-threshold': '40', # TODO
'temp-dir-path': '/tmp',
'docker-mount-base-dir': str(docker_mount_base_dir),
'variety-path': 'bin/variety.js',
},
'database': {
'ajax-stats-reload-time': '10000', # TODO
'number-of-latest-firmwares-to-display': '10',
'results-per-page': '10'
},
'default-plugins': {
'default': '',
'minimal': '',
},
'expert-settings': {
'authentication': 'false',
'block-delay': '0.1',
'communication-timeout': '60',
'intercom-poll-delay': '0.5',
'nginx': 'false',
'radare2-host': 'localhost',
'ssdeep-ignore': '1',
'throw-exceptions': 'false',
'unpack-threshold': '0.8',
'unpack_throttle_limit': '50'
},
'logging': {
'logfile': '/tmp/fact_main.log',
'loglevel': 'WARNING',
},
'unpack': {
'max-depth': '10',
'memory-limit': '2048',
'threads': '4',
'whitelist': ''
},
'statistics': {
'max_elements_per_chart': '10'
},
}
# Update recursively
for section_name in defaults if defaults else {}:
sections.setdefault(section_name, {}).update(defaults[section_name])
configparser_cfg = ConfigParser()
configparser_cfg.read_dict(sections)
config._parse_dict(sections)
cfg = Config(**sections)
return cfg, configparser_cfg
|
1,202 |
def test_is_fancy():
slices = (2, [2], [2, 3], Ellipsis, np.array(2), np.array((2, 3)))
for slice0 in slices:
_check_slice(slice0)
_check_slice((slice0,)) # tuple is same
# Double ellipsis illegal in np 1.12dev - set up check for that case
maybe_bad = slice0 is Ellipsis
for slice1 in slices:
if maybe_bad and slice1 is Ellipsis:
continue
_check_slice((slice0, slice1))
assert not is_fancy((None,))
assert not is_fancy((None, 1))
assert not is_fancy((1, None))
    # Check that actual False returned (rather than falsey)
assert is_fancy(1) == False
|
def test_is_fancy():
slices = (2, [2], [2, 3], Ellipsis, np.array(2), np.array((2, 3)))
for slice0 in slices:
_check_slice(slice0)
_check_slice((slice0,)) # tuple is same
# Double ellipsis illegal in np 1.12dev - set up check for that case
maybe_bad = slice0 is Ellipsis
for slice1 in slices:
if maybe_bad and slice1 is Ellipsis:
continue
_check_slice((slice0, slice1))
assert not is_fancy((None,))
assert not is_fancy((None, 1))
assert not is_fancy((1, None))
    # Check that actual False returned (rather than falsey)
assert is_fancy(1) is False
|
34,652 |
def _generate_lookup_regex(
lookup_table: Dict[Text, Union[Text, List[Text]]], use_word_boundaries: bool = True
) -> Text:
r"""Creates a regex pattern from the given lookup table.
The lookup table is either a file or a list of entries.
Args:
lookup_table: The lookup table.
use_word_boundaries: If True add `\b` around the regex expression
for each lookup table expressions.
Returns:
The regex pattern.
"""
lookup_elements = lookup_table["elements"]
# if it's a list, it should be the elements directly
if isinstance(lookup_elements, list):
elements_to_regex = lookup_elements
# otherwise it's a file path.
else:
elements_to_regex = read_lookup_table_file(lookup_elements)
# sanitize the regex, escape special characters
elements_sanitized = [re.escape(e) for e in elements_to_regex]
if use_word_boundaries:
# regex matching elements with word boundaries on either side
return "(\\b" + "\\b|\\b".join(elements_sanitized) + "\\b)"
else:
return "(" + "|".join(elements_sanitized) + ")"
|
def _generate_lookup_regex(
lookup_table: Dict[Text, Union[Text, List[Text]]], use_word_boundaries: bool = True
) -> Text:
"""Creates a regex pattern from the given lookup table.
The lookup table is either a file or a list of entries.
Args:
lookup_table: The lookup table.
use_word_boundaries: If True add `\b` around the regex expression
for each lookup table expressions.
Returns:
The regex pattern.
"""
lookup_elements = lookup_table["elements"]
# if it's a list, it should be the elements directly
if isinstance(lookup_elements, list):
elements_to_regex = lookup_elements
# otherwise it's a file path.
else:
elements_to_regex = read_lookup_table_file(lookup_elements)
# sanitize the regex, escape special characters
elements_sanitized = [re.escape(e) for e in elements_to_regex]
if use_word_boundaries:
# regex matching elements with word boundaries on either side
return "(\\b" + "\\b|\\b".join(elements_sanitized) + "\\b)"
else:
return "(" + "|".join(elements_sanitized) + ")"
|
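A small usage sketch for the lookup-regex builder above, showing the word-boundary branch applied to an in-memory element list (no lookup file involved).
# Minimal illustration of the regex shape produced above.
import re

lookup_table = {"name": "city", "elements": ["Berlin", "New York", "San Francisco"]}

elements = [re.escape(e) for e in lookup_table["elements"]]
pattern = "(\\b" + "\\b|\\b".join(elements) + "\\b)"   # same shape as the use_word_boundaries branch

print(pattern)
print(re.findall(pattern, "Flights from Berlin to New York", flags=re.IGNORECASE))
# ['Berlin', 'New York']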
15,432 |
def test_auto_purge(hass_recorder):
"""Test saving and restoring a state."""
hass = hass_recorder()
original_tz = dt_util.DEFAULT_TIME_ZONE
tz = dt_util.get_time_zone("Europe/Copenhagen")
dt_util.set_default_time_zone(tz)
now = dt_util.utcnow()
test_time = now + timedelta(days=365)
async_fire_time_changed(hass, test_time)
with patch(
"homeassistant.components.recorder.purge.purge_old_data", return_value=True
) as purge_old_data:
for delta in (-1, 0, 1):
async_fire_time_changed(hass, test_time + timedelta(seconds=delta))
hass.block_till_done()
hass.data[DATA_INSTANCE].block_till_done()
assert len(purge_old_data.mock_calls) == 1
dt_util.set_default_time_zone(original_tz)
|
def test_auto_purge(hass_recorder):
"""Test saving and restoring a state."""
hass = hass_recorder()
original_tz = dt_util.DEFAULT_TIME_ZONE
tz = dt_util.get_time_zone("Europe/Copenhagen")
dt_util.set_default_time_zone(tz)
now = dt_util.utcnow()
test_time = now + timedelta(days=365)
fire_time_changed(hass, test_time)
with patch(
"homeassistant.components.recorder.purge.purge_old_data", return_value=True
) as purge_old_data:
for delta in (-1, 0, 1):
async_fire_time_changed(hass, test_time + timedelta(seconds=delta))
hass.block_till_done()
hass.data[DATA_INSTANCE].block_till_done()
assert len(purge_old_data.mock_calls) == 1
dt_util.set_default_time_zone(original_tz)
|
11,361 |
def get_client_credential(certificate_path, password=None, certificate_data=None, send_certificate_chain=False, **_):
# type: (Optional[str], Optional[Union[bytes, str]], Optional[bytes], bool, **Any) -> dict
"""Load a certificate from a filesystem path or bytes, return it as a dict suitable for msal.ClientApplication"""
if certificate_path:
if certificate_data:
raise ValueError('Please specify either "certificate_path" or "certificate_data"')
with open(certificate_path, "rb") as f:
certificate_data = f.read()
elif not certificate_data:
raise ValueError('CertificateCredential requires a value for "certificate_path" or "certificate_data"')
if isinstance(password, six.text_type):
password = password.encode(encoding="utf-8")
private_key = serialization.load_pem_private_key(certificate_data, password=password, backend=default_backend())
if not isinstance(private_key, RSAPrivateKey):
raise ValueError("CertificateCredential requires an RSA private key because it uses RS256 for signing")
cert = x509.load_pem_x509_certificate(certificate_data, default_backend())
fingerprint = cert.fingerprint(hashes.SHA1()) # nosec
client_credential = {"private_key": certificate_data, "thumbprint": hexlify(fingerprint).decode("utf-8")}
if password:
client_credential["passphrase"] = password
if send_certificate_chain:
try:
# the JWT needs the whole chain but load_pem_x509_certificate deserializes only the signing cert
chain = extract_cert_chain(certificate_data)
client_credential["public_certificate"] = six.ensure_str(chain)
except ValueError as ex:
# we shouldn't land here--cryptography already loaded the cert and would have raised if it were malformed
six.raise_from(ValueError("Malformed certificate"), ex)
return client_credential
|
def get_client_credential(certificate_path, password=None, certificate_data=None, send_certificate_chain=False, **_):
# type: (Optional[str], Optional[Union[bytes, str]], Optional[bytes], bool, **Any) -> dict
"""Load a certificate from a filesystem path or bytes, return it as a dict suitable for msal.ClientApplication"""
if certificate_path:
if certificate_data:
raise ValueError('Please specify exactly one of either "certificate_path" or "certificate_data"')
with open(certificate_path, "rb") as f:
certificate_data = f.read()
elif not certificate_data:
raise ValueError('CertificateCredential requires a value for "certificate_path" or "certificate_data"')
if isinstance(password, six.text_type):
password = password.encode(encoding="utf-8")
private_key = serialization.load_pem_private_key(certificate_data, password=password, backend=default_backend())
if not isinstance(private_key, RSAPrivateKey):
raise ValueError("CertificateCredential requires an RSA private key because it uses RS256 for signing")
cert = x509.load_pem_x509_certificate(certificate_data, default_backend())
fingerprint = cert.fingerprint(hashes.SHA1()) # nosec
client_credential = {"private_key": certificate_data, "thumbprint": hexlify(fingerprint).decode("utf-8")}
if password:
client_credential["passphrase"] = password
if send_certificate_chain:
try:
# the JWT needs the whole chain but load_pem_x509_certificate deserializes only the signing cert
chain = extract_cert_chain(certificate_data)
client_credential["public_certificate"] = six.ensure_str(chain)
except ValueError as ex:
# we shouldn't land here--cryptography already loaded the cert and would have raised if it were malformed
six.raise_from(ValueError("Malformed certificate"), ex)
return client_credential
|
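A standalone sketch of the SHA-1 thumbprint computation used above, assuming the cryptography package is installed; 'cert.pem' is a hypothetical path to a PEM-encoded certificate.
# Thumbprint part only; loading the private key and building the full
# client_credential dict are left to the function above.
from binascii import hexlify
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes

with open("cert.pem", "rb") as f:   # hypothetical path
    pem_bytes = f.read()

cert = x509.load_pem_x509_certificate(pem_bytes, default_backend())
fingerprint = cert.fingerprint(hashes.SHA1())        # nosec - used only as a thumbprint
print(hexlify(fingerprint).decode("utf-8"))          # hex thumbprint as placed in client_credential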
35,287 |
def fista(AtB, pseudo_inverse, x=None, n_iter_max=100, non_negative=True, gradient_step=None,
sparsity_coefficient=None):
"""
Fast Iterative Shrinkage Thresholding Algorithm (FISTA)
Computes and approximate solution for Ax=b linear system.
Parameters
----------
AtB: ndarray
Pre-computed product of the transposed of A and B.
pseudo_inverse: ndarray
Pre-computed product of the transposed of A and A.
x: initialized array
Default: None
n_iter_max : int
Maximum number of iteration
Default: 100
non_negative : bool, default is False
if True, result will be non-negative
gradient_step : float
sparsity_coefficient : float or None
Returns
-------
x : Updated ndarray
Reference
----------
[1] : Beck, A., & Teboulle, M. (2009). A fast iterative
shrinkage-thresholding algorithm for linear inverse problems.
SIAM journal on imaging sciences, 2(1), 183-202.
"""
if sparsity_coefficient[-1] is None:
sparse = 0
else:
sparse = sparsity_coefficient[-1]
if gradient_step is None:
gradient_step = 0.001
if x is None:
x = tl.zeros([tl.shape(pseudo_inverse)[0], tl.shape(AtB)[1]])
# Parameters
momentum_old = tl.tensor(1.0)
norm_0 = 0.0
x_upd = tl.copy(x)
for iteration in range(n_iter_max):
gradient = - AtB + tl.tenalg.multi_mode_dot(x_upd, pseudo_inverse, transpose=False) + sparse
if non_negative is True:
delta_x = tl.where(gradient_step * gradient < x, gradient_step * gradient, x_upd)
else:
delta_x = gradient_step * gradient
xnew = x_upd - delta_x
momentum = (1 + tl.sqrt(1 + 4 * momentum_old ** 2)) / 2
x_upd = xnew + ((momentum_old - 1) / momentum) * (xnew - x)
momentum_old = momentum
x = tl.copy(xnew)
norm = tl.norm(delta_x)
if iteration == 1:
norm_0 = norm
if norm < 0.01 * norm_0:
break
return x
|
def fista(AtB, pseudo_inverse, x=None, n_iter_max=100, non_negative=True, gradient_step=None,
sparsity_coefficient=None):
"""
Fast Iterative Shrinkage Thresholding Algorithm (FISTA)
Computes and approximate solution for Ax=b linear system.
Parameters
----------
AtB: ndarray
Pre-computed product of the transposed of A and B.
pseudo_inverse: ndarray
Pre-computed product of the transposed of A and A.
x: initialized array
Default: None
n_iter_max : int
Maximum number of iteration
Default: 100
non_negative : bool, default is False
if True, result will be non-negative
gradient_step : float
sparsity_coefficient : float or None
Returns
-------
x : Updated ndarray
Reference
----------
[1] : Beck, A., & Teboulle, M. (2009). A fast iterative
shrinkage-thresholding algorithm for linear inverse problems.
SIAM journal on imaging sciences, 2(1), 183-202.
"""
if sparsity_coefficient[-1] is None:
sparse = 0
else:
sparse = sparsity_coefficient[-1]
if gradient_step is None:
gradient_step = 0.001
if x is None:
x = tl.zeros([tl.shape(pseudo_inverse)[0], tl.shape(AtB)[1]])
# Parameters
momentum_old = tl.tensor(1.0)
norm_0 = 0.0
x_upd = tl.copy(x)
for iteration in range(n_iter_max):
gradient = - AtB + tl.tenalg.multi_mode_dot(x_upd, pseudo_inverse, transpose=False) + sparse
if non_negative is True:
delta_x = tl.where(gradient_step * gradient < x, gradient_step * gradient, x_upd)
else:
delta_x = gradient_step * gradient
        xnew = x_upd - delta_x
momentum = (1 + tl.sqrt(1 + 4 * momentum_old ** 2)) / 2
x_upd = xnew + ((momentum_old - 1) / momentum) * (xnew - x)
momentum_old = momentum
x = tl.copy(xnew)
norm = tl.norm(delta_x)
if iteration == 1:
norm_0 = norm
if norm < 0.01 * norm_0:
break
return x
|
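For reference, a self-contained NumPy sketch of the FISTA scheme cited in the docstring (projected gradient with Nesterov momentum for non-negative least squares). This is an independent illustration of the Beck and Teboulle algorithm under simplified assumptions, not the tensorly-based routine above.
import numpy as np

def fista_nnls(A, b, n_iter_max=500):
    """Minimize 0.5 * ||Ax - b||^2 subject to x >= 0 with FISTA."""
    AtA, Atb = A.T @ A, A.T @ b
    step = 1.0 / np.linalg.norm(AtA, 2)        # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    x_upd = x.copy()
    momentum_old = 1.0
    for _ in range(n_iter_max):
        gradient = AtA @ x_upd - Atb
        x_new = np.maximum(x_upd - step * gradient, 0.0)   # projected gradient step
        momentum = (1 + np.sqrt(1 + 4 * momentum_old ** 2)) / 2
        x_upd = x_new + ((momentum_old - 1) / momentum) * (x_new - x)
        momentum_old = momentum
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.random((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
print(np.round(fista_nnls(A, A @ x_true), 3))   # should be close to x_true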
14,132 |
def clip(gdf, mask, keep_geom_type=False):
"""Clip points, lines, or polygon geometries to the mask extent.
Both layers must be in the same Coordinate Reference System (CRS).
The `gdf` will be clipped to the full extent of the clip object.
If there are multiple polygons in mask, data from `gdf` will be
clipped to the total boundary of all polygons in mask.
If the `mask` is a tuple of `(minx, miny, maxx, maxy)`, a faster rectangle
clipping algorithm will be used. Note that this can lead to slightly different
results in edge cases, e.g. if a line would be reduced to a point, this point might
not be returned.
Parameters
----------
gdf : GeoDataFrame or GeoSeries
Vector layer (point, line, polygon) to be clipped to mask.
mask : GeoDataFrame, GeoSeries, (Multi)Polygon, tuple
Polygon vector layer used to clip `gdf`.
The mask's geometry is dissolved into one geometric feature
and intersected with `gdf`.
If the mask is a tuple of `(minx, miny, maxx, maxy)`, `clip` will use a faster
rectangle clipping (`.clip_by_rect()`), possibly leading to slightly different
results.
keep_geom_type : boolean, default False
If True, return only geometries of original type in case of intersection
resulting in multiple geometry types or GeometryCollections.
If False, return all resulting geometries (potentially mixed-types).
Returns
-------
GeoDataFrame or GeoSeries
Vector data (points, lines, polygons) from `gdf` clipped to
polygon boundary from mask.
See also
--------
GeoDataFrame.clip : equivalent GeoDataFrame method
GeoSeries.clip : equivalent GeoSeries method
Examples
--------
Clip points (global cities) with a polygon (the South American continent):
>>> world = geopandas.read_file(
... geopandas.datasets.get_path('naturalearth_lowres'))
>>> south_america = world[world['continent'] == "South America"]
>>> capitals = geopandas.read_file(
... geopandas.datasets.get_path('naturalearth_cities'))
>>> capitals.shape
(202, 2)
>>> sa_capitals = geopandas.clip(capitals, south_america)
>>> sa_capitals.shape
(12, 2)
"""
if not isinstance(gdf, (GeoDataFrame, GeoSeries)):
raise TypeError(
"'gdf' should be GeoDataFrame or GeoSeries, got {}".format(type(gdf))
)
if not isinstance(mask, (GeoDataFrame, GeoSeries, Polygon, MultiPolygon, tuple)):
raise TypeError(
"'mask' should be GeoDataFrame, GeoSeries,"
f"(Multi)Polygon or 4 element tuple, got {type(mask)}"
)
if isinstance(mask, tuple) and len(mask) != 4:
raise TypeError(
"If 'mask' is a tuple, it must have four values (minx, miny, maxx, maxy)"
)
if isinstance(mask, (GeoDataFrame, GeoSeries)):
if not _check_crs(gdf, mask):
_crs_mismatch_warn(gdf, mask, stacklevel=3)
if isinstance(mask, (GeoDataFrame, GeoSeries)):
box_mask = mask.total_bounds
elif isinstance(mask, tuple):
box_mask = mask
else:
box_mask = mask.bounds
box_gdf = gdf.total_bounds
if not (
((box_mask[0] <= box_gdf[2]) and (box_gdf[0] <= box_mask[2]))
and ((box_mask[1] <= box_gdf[3]) and (box_gdf[1] <= box_mask[3]))
):
return gdf.iloc[:0]
if isinstance(mask, (GeoDataFrame, GeoSeries)):
combined_mask = mask.geometry.unary_union
else:
combined_mask = mask
clipped = _clip_gdf_with_mask(gdf, combined_mask)
if keep_geom_type:
geomcoll_concat = (clipped.geom_type == "GeometryCollection").any()
geomcoll_orig = (gdf.geom_type == "GeometryCollection").any()
new_collection = geomcoll_concat and not geomcoll_orig
if geomcoll_orig:
warnings.warn(
"keep_geom_type can not be called on a "
"GeoDataFrame with GeometryCollection."
)
else:
polys = ["Polygon", "MultiPolygon"]
lines = ["LineString", "MultiLineString", "LinearRing"]
points = ["Point", "MultiPoint"]
# Check that the gdf for multiple geom types (points, lines and/or polys)
orig_types_total = sum(
[
gdf.geom_type.isin(polys).any(),
gdf.geom_type.isin(lines).any(),
gdf.geom_type.isin(points).any(),
]
)
# Check how many geometry types are in the clipped GeoDataFrame
clip_types_total = sum(
[
clipped.geom_type.isin(polys).any(),
clipped.geom_type.isin(lines).any(),
clipped.geom_type.isin(points).any(),
]
)
# Check there aren't any new geom types in the clipped GeoDataFrame
more_types = orig_types_total < clip_types_total
if orig_types_total > 1:
warnings.warn(
"keep_geom_type can not be called on a mixed type GeoDataFrame."
)
elif new_collection or more_types:
orig_type = gdf.geom_type.iloc[0]
if new_collection:
clipped = clipped.explode(index_parts=False)
if orig_type in polys:
clipped = clipped.loc[clipped.geom_type.isin(polys)]
elif orig_type in lines:
clipped = clipped.loc[clipped.geom_type.isin(lines)]
return clipped
|
def clip(gdf, mask, keep_geom_type=False):
"""Clip points, lines, or polygon geometries to the mask extent.
Both layers must be in the same Coordinate Reference System (CRS).
The `gdf` will be clipped to the full extent of the clip object.
If there are multiple polygons in mask, data from `gdf` will be
clipped to the total boundary of all polygons in mask.
If the `mask` is a tuple of `(minx, miny, maxx, maxy)`, a faster rectangle
clipping algorithm will be used. Note that this can lead to slightly different
results in edge cases, e.g. if a line would be reduced to a point, this point might
not be returned.
Parameters
----------
gdf : GeoDataFrame or GeoSeries
Vector layer (point, line, polygon) to be clipped to mask.
mask : GeoDataFrame, GeoSeries, (Multi)Polygon, tuple
Polygon vector layer used to clip `gdf`.
The mask's geometry is dissolved into one geometric feature
and intersected with `gdf`.
If the mask is a tuple of `(minx, miny, maxx, maxy)`, `clip` will use a faster
rectangle clipping (:meth:`~GeoSeries.clip_by_rect`), possibly leading to slightly different
results.
keep_geom_type : boolean, default False
If True, return only geometries of original type in case of intersection
resulting in multiple geometry types or GeometryCollections.
If False, return all resulting geometries (potentially mixed-types).
Returns
-------
GeoDataFrame or GeoSeries
Vector data (points, lines, polygons) from `gdf` clipped to
polygon boundary from mask.
See also
--------
GeoDataFrame.clip : equivalent GeoDataFrame method
GeoSeries.clip : equivalent GeoSeries method
Examples
--------
Clip points (global cities) with a polygon (the South American continent):
>>> world = geopandas.read_file(
... geopandas.datasets.get_path('naturalearth_lowres'))
>>> south_america = world[world['continent'] == "South America"]
>>> capitals = geopandas.read_file(
... geopandas.datasets.get_path('naturalearth_cities'))
>>> capitals.shape
(202, 2)
>>> sa_capitals = geopandas.clip(capitals, south_america)
>>> sa_capitals.shape
(12, 2)
"""
if not isinstance(gdf, (GeoDataFrame, GeoSeries)):
raise TypeError(
"'gdf' should be GeoDataFrame or GeoSeries, got {}".format(type(gdf))
)
if not isinstance(mask, (GeoDataFrame, GeoSeries, Polygon, MultiPolygon, tuple)):
raise TypeError(
"'mask' should be GeoDataFrame, GeoSeries,"
f"(Multi)Polygon or 4 element tuple, got {type(mask)}"
)
if isinstance(mask, tuple) and len(mask) != 4:
raise TypeError(
"If 'mask' is a tuple, it must have four values (minx, miny, maxx, maxy)"
)
if isinstance(mask, (GeoDataFrame, GeoSeries)):
if not _check_crs(gdf, mask):
_crs_mismatch_warn(gdf, mask, stacklevel=3)
if isinstance(mask, (GeoDataFrame, GeoSeries)):
box_mask = mask.total_bounds
elif isinstance(mask, tuple):
box_mask = mask
else:
box_mask = mask.bounds
box_gdf = gdf.total_bounds
if not (
((box_mask[0] <= box_gdf[2]) and (box_gdf[0] <= box_mask[2]))
and ((box_mask[1] <= box_gdf[3]) and (box_gdf[1] <= box_mask[3]))
):
return gdf.iloc[:0]
if isinstance(mask, (GeoDataFrame, GeoSeries)):
combined_mask = mask.geometry.unary_union
else:
combined_mask = mask
clipped = _clip_gdf_with_mask(gdf, combined_mask)
if keep_geom_type:
geomcoll_concat = (clipped.geom_type == "GeometryCollection").any()
geomcoll_orig = (gdf.geom_type == "GeometryCollection").any()
new_collection = geomcoll_concat and not geomcoll_orig
if geomcoll_orig:
warnings.warn(
"keep_geom_type can not be called on a "
"GeoDataFrame with GeometryCollection."
)
else:
polys = ["Polygon", "MultiPolygon"]
lines = ["LineString", "MultiLineString", "LinearRing"]
points = ["Point", "MultiPoint"]
# Check that the gdf for multiple geom types (points, lines and/or polys)
orig_types_total = sum(
[
gdf.geom_type.isin(polys).any(),
gdf.geom_type.isin(lines).any(),
gdf.geom_type.isin(points).any(),
]
)
# Check how many geometry types are in the clipped GeoDataFrame
clip_types_total = sum(
[
clipped.geom_type.isin(polys).any(),
clipped.geom_type.isin(lines).any(),
clipped.geom_type.isin(points).any(),
]
)
# Check there aren't any new geom types in the clipped GeoDataFrame
more_types = orig_types_total < clip_types_total
if orig_types_total > 1:
warnings.warn(
"keep_geom_type can not be called on a mixed type GeoDataFrame."
)
elif new_collection or more_types:
orig_type = gdf.geom_type.iloc[0]
if new_collection:
clipped = clipped.explode(index_parts=False)
if orig_type in polys:
clipped = clipped.loc[clipped.geom_type.isin(polys)]
elif orig_type in lines:
clipped = clipped.loc[clipped.geom_type.isin(lines)]
return clipped
|
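A brief usage sketch of the tuple fast path described in the docstring above, reusing the bundled naturalearth datasets from the docstring examples (assumed to be available in the installed geopandas version); the bounding-box values are a rough, illustrative choice.
# Tuple mask: clip to a bounding box instead of a polygon layer.
import geopandas

capitals = geopandas.read_file(geopandas.datasets.get_path('naturalearth_cities'))

bbox = (-82.0, -56.0, -34.0, 13.0)            # rough (minx, miny, maxx, maxy) around South America
sa_capitals = geopandas.clip(capitals, bbox)  # takes the rectangle clipping branch

print(len(sa_capitals))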
57,214 |
def motion_camera_ui_to_dict(ui, prev_config=None):
prev_config = dict(prev_config or {})
main_config = get_main() # needed for surveillance password
data = {
# device
'camera_name': ui['name'],
'@enabled': ui['enabled'],
'auto_brightness': ui['auto_brightness'],
'framerate': int(ui['framerate']),
'rotate': int(ui['rotation']),
'mask_privacy': '',
# file storage
'@storage_device': ui['storage_device'],
'@network_server': ui['network_server'],
'@network_share_name': ui['network_share_name'],
'@network_smb_ver': ui['network_smb_ver'],
'@network_username': ui['network_username'],
'@network_password': ui['network_password'],
'@upload_enabled': ui['upload_enabled'],
'@upload_movie': ui['upload_movie'],
'@upload_picture': ui['upload_picture'],
'@upload_service': ui['upload_service'],
'@upload_server': ui['upload_server'],
'@upload_port': ui['upload_port'],
'@upload_method': ui['upload_method'],
'@upload_location': ui['upload_location'],
'@upload_subfolders': ui['upload_subfolders'],
'@upload_username': ui['upload_username'],
'@upload_password': ui['upload_password'],
'@clean_cloud_enabled': ui['clean_cloud_enabled'],
# text overlay
'text_left': '',
'text_right': '',
'text_scale': ui['text_scale'],
# streaming
'stream_localhost': not ui['video_streaming'],
'stream_port': int(ui['streaming_port']),
'stream_maxrate': int(ui['streaming_framerate']),
'stream_quality': max(1, int(ui['streaming_quality'])),
'@webcam_resolution': max(1, int(ui['streaming_resolution'])),
'@webcam_server_resize': ui['streaming_server_resize'],
'stream_motion': ui['streaming_motion'],
'stream_auth_method': {'disabled': 0, 'basic': 1, 'digest': 2}.get(
ui['streaming_auth_mode'], 0
),
'stream_authentication': main_config['@normal_username']
+ ':'
+ main_config['@normal_password'],
# still images
'picture_output': False,
'snapshot_interval': 0,
'picture_filename': '',
'snapshot_filename': '',
'picture_quality': max(1, int(ui['image_quality'])),
'@preserve_pictures': int(ui['preserve_pictures']),
'@manual_snapshots': ui['manual_snapshots'],
# movies
'movie_output': False,
'movie_passthrough': bool(ui['movie_passthrough']),
'movie_filename': ui['movie_file_name'],
'movie_max_time': ui['max_movie_length'],
'@preserve_movies': int(ui['preserve_movies']),
# motion detection
'@motion_detection': ui['motion_detection'],
'emulate_motion': False,
'text_changes': ui['show_frame_changes'],
'locate_motion_mode': ui['show_frame_changes'],
'threshold_maximum': ui['max_frame_change_threshold'],
'threshold_tune': ui['auto_threshold_tuning'],
'noise_tune': ui['auto_noise_detect'],
'noise_level': max(1, int(round(int(ui['noise_level']) * 2.55))),
'lightswitch_percent': ui['light_switch_detect'],
'event_gap': int(ui['event_gap']),
'pre_capture': int(ui['pre_capture']),
'post_capture': int(ui['post_capture']),
'minimum_motion_frames': int(ui['minimum_motion_frames']),
'smart_mask_speed': 0,
'mask_file': '',
'picture_output_motion': ui['create_debug_media'],
'movie_output_motion': ui['create_debug_media'],
# working schedule
'@working_schedule': '',
# events
'on_event_start': '',
'on_event_end': '',
'on_movie_end': '',
'on_picture_save': '',
}
if utils.is_v4l2_camera(prev_config):
proto = 'v4l2'
elif utils.is_mmal_camera(prev_config):
proto = 'mmal'
else:
proto = 'netcam'
if proto in ('v4l2', 'mmal'):
# leave videodevice unchanged
# resolution
if not ui['resolution']:
ui['resolution'] = '320x240'
width = int(ui['resolution'].split('x')[0])
height = int(ui['resolution'].split('x')[1])
data['width'] = width
data['height'] = height
threshold = int(float(ui['frame_change_threshold']) * width * height / 100)
if proto == 'v4l2':
# video controls
vid_control_params = (
('{}={}'.format(n, c['value']))
for n, c in list(ui['video_controls'].items())
)
data['vid_control_params'] = ','.join(vid_control_params)
else: # assuming netcam
if re.match(
r'^rtsp|^rtmp', data.get('netcam_url', prev_config.get('netcam_url', ''))
):
# motion uses the configured width and height for RTSP/RTMP cameras
width = int(ui['resolution'].split('x')[0])
height = int(ui['resolution'].split('x')[1])
data['width'] = width
data['height'] = height
threshold = int(float(ui['frame_change_threshold']) * width * height / 100)
else: # width & height are not available for other netcams
threshold = int(float(ui['frame_change_threshold']) * 640 * 480 / 100)
data['threshold'] = threshold
if ui['privacy_mask']:
capture_width, capture_height = data.get('width'), data.get('height')
if data.get('rotate') in [90, 270]:
capture_width, capture_height = capture_height, capture_width
data['mask_privacy'] = utils.build_editable_mask_file(
prev_config['@id'],
'privacy',
ui['privacy_mask_lines'],
capture_width,
capture_height,
)
if (ui['storage_device'] == 'network-share') and settings.SMB_SHARES:
mount_point = smbctl.make_mount_point(
ui['network_server'], ui['network_share_name'], ui['network_username']
)
if ui['root_directory'].startswith('/'):
ui['root_directory'] = ui['root_directory'][1:]
data['target_dir'] = os.path.normpath(
os.path.join(mount_point, ui['root_directory'])
)
elif ui['storage_device'].startswith('local-disk'):
target_dev = ui['storage_device'][10:].replace('-', '/')
mounted_partitions = diskctl.list_mounted_partitions()
partition = mounted_partitions[target_dev]
mount_point = partition['mount_point']
if ui['root_directory'].startswith('/'):
ui['root_directory'] = ui['root_directory'][1:]
data['target_dir'] = os.path.normpath(
os.path.join(mount_point, ui['root_directory'])
)
else:
data['target_dir'] = ui['root_directory']
# try to create the target dir
try:
os.makedirs(data['target_dir'])
logging.debug(
'created root directory {} for camera {}'.format(
data['target_dir'], data['camera_name']
)
)
except OSError as e:
if isinstance(e, OSError) and e.errno == errno.EEXIST:
pass # already exists, things should be just fine
else:
logging.error(
'failed to create root directory "{}": {}'.format(
data['target_dir'], e
),
exc_info=True,
)
if ui['upload_enabled'] and '@id' in prev_config:
upload_settings = {
k[7:]: ui[k] for k in list(ui.keys()) if k.startswith('upload_')
}
tasks.add(
0,
uploadservices.update,
tag='uploadservices.update(%s)' % ui['upload_service'],
camera_id=prev_config['@id'],
service_name=ui['upload_service'],
settings=upload_settings,
)
if ui['text_overlay']:
left_text = ui['left_text']
if left_text == 'camera-name':
data['text_left'] = ui['name']
elif left_text == 'timestamp':
data['text_left'] = '%Y-%m-%d\\n%T'
elif left_text == 'disabled':
data['text_left'] = ''
else:
data['text_left'] = ui['custom_left_text']
right_text = ui['right_text']
if right_text == 'camera-name':
data['text_right'] = ui['name']
elif right_text == 'timestamp':
data['text_right'] = '%Y-%m-%d\\n%T'
elif right_text == 'disabled':
data['text_right'] = ''
else:
data['text_right'] = ui['custom_right_text']
if ui['still_images']:
data['picture_filename'] = ui['image_file_name']
data['snapshot_filename'] = ui['image_file_name']
capture_mode = ui['capture_mode']
if capture_mode == 'motion-triggered':
data['picture_output'] = True
elif capture_mode == 'motion-triggered-one':
data['picture_output'] = 'best'
elif capture_mode == 'interval-snapshots':
data['snapshot_interval'] = int(ui['snapshot_interval'])
elif capture_mode == 'all-frames':
data['picture_output'] = True
data['emulate_motion'] = True
elif capture_mode == 'manual':
data['picture_output'] = False
data['emulate_motion'] = False
if ui['movies']:
data['movie_output'] = True
recording_mode = ui['recording_mode']
if recording_mode == 'motion-triggered':
data['emulate_motion'] = False
elif recording_mode == 'continuous':
data['emulate_motion'] = True
data['movie_codec'] = ui['movie_format']
q = int(ui['movie_quality'])
data['movie_quality'] = max(1, q)
# motion detection
if ui['despeckle_filter']:
data['despeckle_filter'] = prev_config['despeckle_filter'] or 'EedDl'
else:
data['despeckle_filter'] = ''
if ui['motion_mask']:
if ui['motion_mask_type'] == 'smart':
data['smart_mask_speed'] = 11 - int(ui['smart_mask_sluggishness'])
elif ui['motion_mask_type'] == 'editable':
capture_width, capture_height = data.get('width'), data.get('height')
if data.get('rotate') in [90, 270]:
capture_width, capture_height = capture_height, capture_width
data['mask_file'] = utils.build_editable_mask_file(
prev_config['@id'],
'motion',
ui['motion_mask_lines'],
capture_width,
capture_height,
)
# working schedule
if ui['working_schedule']:
data['@working_schedule'] = (
ui['monday_from']
+ '-'
+ ui['monday_to']
+ '|'
+ ui['tuesday_from']
+ '-'
+ ui['tuesday_to']
+ '|'
+ ui['wednesday_from']
+ '-'
+ ui['wednesday_to']
+ '|'
+ ui['thursday_from']
+ '-'
+ ui['thursday_to']
+ '|'
+ ui['friday_from']
+ '-'
+ ui['friday_to']
+ '|'
+ ui['saturday_from']
+ '-'
+ ui['saturday_to']
+ '|'
+ ui['sunday_from']
+ '-'
+ ui['sunday_to']
)
data['@working_schedule_type'] = ui['working_schedule_type']
# event start
on_event_start = [
'%(script)s start %%t' % {'script': meyectl.find_command('relayevent')}
]
if ui['email_notifications_enabled']:
emails = re.sub('\\s', '', ui['email_notifications_addresses'])
line = (
"%(script)s '%(server)s' '%(port)s' '%(account)s' '%(password)s' '%(tls)s' '%(from)s' '%(to)s' "
"'motion_start' '%%t' '%%Y-%%m-%%dT%%H:%%M:%%S' '%(timespan)s'"
% {
'script': meyectl.find_command('sendmail'),
'server': ui['email_notifications_smtp_server'],
'port': ui['email_notifications_smtp_port'],
'account': ui['email_notifications_smtp_account'],
'password': ui['email_notifications_smtp_password']
.replace(';', '\\;')
.replace('%', '%%'),
'tls': ui['email_notifications_smtp_tls'],
'from': ui['email_notifications_from'],
'to': emails,
'timespan': ui['email_notifications_picture_time_span'],
}
)
on_event_start.append(line)
if ui['telegram_notifications_enabled']:
line = (
"%(script)s '%(api)s' '%(chatid)s' "
"'motion_start' '%%t' '%%Y-%%m-%%dT%%H:%%M:%%S' '%(timespan)s'"
% {
'script': meyectl.find_command('sendtelegram'),
'api': ui['telegram_notifications_api'],
'chatid': ui['telegram_notifications_chat_id'],
'timespan': ui['telegram_notifications_picture_time_span'],
}
)
on_event_start.append(line)
if ui['web_hook_notifications_enabled']:
url = re.sub('\\s', '+', ui['web_hook_notifications_url'])
on_event_start.append(
"{script} '{method}' '{url}'".format(
script=meyectl.find_command('webhook'),
method=ui['web_hook_notifications_http_method'],
url=url,
)
)
if ui['command_notifications_enabled']:
on_event_start += utils.split_semicolon(ui['command_notifications_exec'])
data['on_event_start'] = '; '.join(on_event_start)
# event end
on_event_end = [
'%(script)s stop %%t' % {'script': meyectl.find_command('relayevent')}
]
if ui['web_hook_end_notifications_enabled']:
url = re.sub(r'\s', '+', ui['web_hook_end_notifications_url'])
on_event_end.append(
"%(script)s '%(method)s' '%(url)s'"
% {
'script': meyectl.find_command('webhook'),
'method': ui['web_hook_end_notifications_http_method'],
'url': url,
}
)
if ui['command_end_notifications_enabled']:
on_event_end += utils.split_semicolon(ui['command_end_notifications_exec'])
data['on_event_end'] = '; '.join(on_event_end)
# movie end
on_movie_end = [
'%(script)s movie_end %%t %%f' % {'script': meyectl.find_command('relayevent')}
]
if ui['web_hook_storage_enabled']:
url = re.sub('\\s', '+', ui['web_hook_storage_url'])
on_movie_end.append(
"{script} '{method}' '{url}'".format(
script=meyectl.find_command('webhook'),
method=ui['web_hook_storage_http_method'],
url=url,
)
)
if ui['command_storage_enabled']:
on_movie_end += utils.split_semicolon(ui['command_storage_exec'])
data['on_movie_end'] = '; '.join(on_movie_end)
# picture save
on_picture_save = [
'%(script)s picture_save %%t %%f'
% {'script': meyectl.find_command('relayevent')}
]
if ui['web_hook_storage_enabled']:
url = re.sub('\\s', '+', ui['web_hook_storage_url'])
on_picture_save.append(
"{script} '{method}' '{url}'".format(
script=meyectl.find_command('webhook'),
method=ui['web_hook_storage_http_method'],
url=url,
)
)
if ui['command_storage_enabled']:
on_picture_save += utils.split_semicolon(ui['command_storage_exec'])
data['on_picture_save'] = '; '.join(on_picture_save)
# additional configs
for name, value in list(ui.items()):
if not name.startswith('_'):
continue
data['@' + name] = value
# extra motion options
for name in list(prev_config.keys()):
if name not in _USED_MOTION_OPTIONS and not name.startswith('@'):
prev_config.pop(name)
extra_options = ui.get('extra_options', [])
for name, value in extra_options:
data[name] = value or ''
prev_config.update(data)
return prev_config
|
def motion_camera_ui_to_dict(ui, prev_config=None):
prev_config = dict(prev_config or {})
main_config = get_main() # needed for surveillance password
data = {
# device
'camera_name': ui['name'],
'@enabled': ui['enabled'],
'auto_brightness': ui['auto_brightness'],
'framerate': int(ui['framerate']),
'rotate': int(ui['rotation']),
'mask_privacy': '',
# file storage
'@storage_device': ui['storage_device'],
'@network_server': ui['network_server'],
'@network_share_name': ui['network_share_name'],
'@network_smb_ver': ui['network_smb_ver'],
'@network_username': ui['network_username'],
'@network_password': ui['network_password'],
'@upload_enabled': ui['upload_enabled'],
'@upload_movie': ui['upload_movie'],
'@upload_picture': ui['upload_picture'],
'@upload_service': ui['upload_service'],
'@upload_server': ui['upload_server'],
'@upload_port': ui['upload_port'],
'@upload_method': ui['upload_method'],
'@upload_location': ui['upload_location'],
'@upload_subfolders': ui['upload_subfolders'],
'@upload_username': ui['upload_username'],
'@upload_password': ui['upload_password'],
'@clean_cloud_enabled': ui['clean_cloud_enabled'],
# text overlay
'text_left': '',
'text_right': '',
'text_scale': ui['text_scale'],
# streaming
'stream_localhost': not ui['video_streaming'],
'stream_port': int(ui['streaming_port']),
'stream_maxrate': int(ui['streaming_framerate']),
'stream_quality': max(1, int(ui['streaming_quality'])),
'@webcam_resolution': max(1, int(ui['streaming_resolution'])),
'@webcam_server_resize': ui['streaming_server_resize'],
'stream_motion': ui['streaming_motion'],
'stream_auth_method': {'disabled': 0, 'basic': 1, 'digest': 2}.get(
ui['streaming_auth_mode'], 0
),
'stream_authentication': main_config['@normal_username']
+ ':'
+ main_config['@normal_password'],
# still images
'picture_output': False,
'snapshot_interval': 0,
'picture_filename': '',
'snapshot_filename': '',
'picture_quality': max(1, int(ui['image_quality'])),
'@preserve_pictures': int(ui['preserve_pictures']),
'@manual_snapshots': ui['manual_snapshots'],
# movies
'movie_output': False,
'movie_passthrough': bool(ui['movie_passthrough']),
'movie_filename': ui['movie_file_name'],
'movie_max_time': ui['max_movie_length'],
'@preserve_movies': int(ui['preserve_movies']),
# motion detection
'@motion_detection': ui['motion_detection'],
'emulate_motion': False,
'text_changes': ui['show_frame_changes'],
'locate_motion_mode': ui['show_frame_changes'],
'threshold_maximum': ui['max_frame_change_threshold'],
'threshold_tune': ui['auto_threshold_tuning'],
'noise_tune': ui['auto_noise_detect'],
'noise_level': max(1, int(round(int(ui['noise_level']) * 2.55))),
'lightswitch_percent': ui['light_switch_detect'],
'event_gap': int(ui['event_gap']),
'pre_capture': int(ui['pre_capture']),
'post_capture': int(ui['post_capture']),
'minimum_motion_frames': int(ui['minimum_motion_frames']),
'smart_mask_speed': 0,
'mask_file': '',
'picture_output_motion': ui['create_debug_media'],
'movie_output_motion': ui['create_debug_media'],
# working schedule
'@working_schedule': '',
# events
'on_event_start': '',
'on_event_end': '',
'on_movie_end': '',
'on_picture_save': '',
}
if utils.is_v4l2_camera(prev_config):
proto = 'v4l2'
elif utils.is_mmal_camera(prev_config):
proto = 'mmal'
else:
proto = 'netcam'
if proto in ('v4l2', 'mmal'):
# leave videodevice unchanged
# resolution
if not ui['resolution']:
ui['resolution'] = '320x240'
width = int(ui['resolution'].split('x')[0])
height = int(ui['resolution'].split('x')[1])
data['width'] = width
data['height'] = height
threshold = int(float(ui['frame_change_threshold']) * width * height / 100)
if proto == 'v4l2':
# video controls
vid_control_params = (
('{}={}'.format(n, c['value']))
for n, c in list(ui['video_controls'].items())
)
data['vid_control_params'] = ','.join(vid_control_params)
else: # assuming netcam
if re.match(
r'^rtsp|^rtmp', data.get('netcam_url', prev_config.get('netcam_url', ''))
):
# motion uses the configured width and height for RTSP/RTMP cameras
width = int(ui['resolution'].split('x')[0])
height = int(ui['resolution'].split('x')[1])
data['width'] = width
data['height'] = height
threshold = int(float(ui['frame_change_threshold']) * width * height / 100)
else: # width & height are not available for other netcams
threshold = int(float(ui['frame_change_threshold']) * 640 * 480 / 100)
data['threshold'] = threshold
if ui['privacy_mask']:
capture_width, capture_height = data.get('width'), data.get('height')
if data.get('rotate') in [90, 270]:
capture_width, capture_height = capture_height, capture_width
data['mask_privacy'] = utils.build_editable_mask_file(
prev_config['@id'],
'privacy',
ui['privacy_mask_lines'],
capture_width,
capture_height,
)
if (ui['storage_device'] == 'network-share') and settings.SMB_SHARES:
mount_point = smbctl.make_mount_point(
ui['network_server'], ui['network_share_name'], ui['network_username']
)
if ui['root_directory'].startswith('/'):
ui['root_directory'] = ui['root_directory'][1:]
data['target_dir'] = os.path.normpath(
os.path.join(mount_point, ui['root_directory'])
)
elif ui['storage_device'].startswith('local-disk'):
target_dev = ui['storage_device'][10:].replace('-', '/')
mounted_partitions = diskctl.list_mounted_partitions()
partition = mounted_partitions[target_dev]
mount_point = partition['mount_point']
if ui['root_directory'].startswith('/'):
ui['root_directory'] = ui['root_directory'][1:]
data['target_dir'] = os.path.normpath(
os.path.join(mount_point, ui['root_directory'])
)
else:
data['target_dir'] = ui['root_directory']
# try to create the target dir
try:
os.makedirs(data['target_dir'])
logging.debug(
'created root directory {} for camera {}'.format(
data['target_dir'], data['camera_name']
)
)
except OSError as e:
if isinstance(e, OSError) and e.errno == errno.EEXIST:
pass # already exists, things should be just fine
else:
logging.error(
'failed to create root directory "{}": {}'.format(
data['target_dir'], e
),
exc_info=True,
)
if ui['upload_enabled'] and '@id' in prev_config:
upload_settings = {
k[7:]: ui[k] for k in list(ui.keys()) if k.startswith('upload_')
}
tasks.add(
0,
uploadservices.update,
tag='uploadservices.update(%s)' % ui['upload_service'],
camera_id=prev_config['@id'],
service_name=ui['upload_service'],
settings=upload_settings,
)
if ui['text_overlay']:
left_text = ui['left_text']
if left_text == 'camera-name':
data['text_left'] = ui['name']
elif left_text == 'timestamp':
data['text_left'] = '%Y-%m-%d\\n%T'
elif left_text == 'disabled':
data['text_left'] = ''
else:
data['text_left'] = ui['custom_left_text']
right_text = ui['right_text']
if right_text == 'camera-name':
data['text_right'] = ui['name']
elif right_text == 'timestamp':
data['text_right'] = '%Y-%m-%d\\n%T'
elif right_text == 'disabled':
data['text_right'] = ''
else:
data['text_right'] = ui['custom_right_text']
if ui['still_images']:
data['picture_filename'] = ui['image_file_name']
data['snapshot_filename'] = ui['image_file_name']
capture_mode = ui['capture_mode']
if capture_mode == 'motion-triggered':
data['picture_output'] = True
elif capture_mode == 'motion-triggered-one':
data['picture_output'] = 'best'
elif capture_mode == 'interval-snapshots':
data['snapshot_interval'] = int(ui['snapshot_interval'])
elif capture_mode == 'all-frames':
data['picture_output'] = True
data['emulate_motion'] = True
elif capture_mode == 'manual':
data['picture_output'] = False
data['emulate_motion'] = False
if ui['movies']:
data['movie_output'] = True
recording_mode = ui['recording_mode']
if recording_mode == 'motion-triggered':
data['emulate_motion'] = False
elif recording_mode == 'continuous':
data['emulate_motion'] = True
data['movie_codec'] = ui['movie_format']
q = int(ui['movie_quality'])
data['movie_quality'] = max(1, q)
# motion detection
if ui['despeckle_filter']:
data['despeckle_filter'] = prev_config['despeckle_filter'] or 'EedDl'
else:
data['despeckle_filter'] = ''
if ui['motion_mask']:
if ui['motion_mask_type'] == 'smart':
data['smart_mask_speed'] = 11 - int(ui['smart_mask_sluggishness'])
elif ui['motion_mask_type'] == 'editable':
capture_width, capture_height = data.get('width'), data.get('height')
if data.get('rotate') in [90, 270]:
capture_width, capture_height = capture_height, capture_width
data['mask_file'] = utils.build_editable_mask_file(
prev_config['@id'],
'motion',
ui['motion_mask_lines'],
capture_width,
capture_height,
)
# working schedule
if ui['working_schedule']:
data['@working_schedule'] = (
ui['monday_from']
+ '-'
+ ui['monday_to']
+ '|'
+ ui['tuesday_from']
+ '-'
+ ui['tuesday_to']
+ '|'
+ ui['wednesday_from']
+ '-'
+ ui['wednesday_to']
+ '|'
+ ui['thursday_from']
+ '-'
+ ui['thursday_to']
+ '|'
+ ui['friday_from']
+ '-'
+ ui['friday_to']
+ '|'
+ ui['saturday_from']
+ '-'
+ ui['saturday_to']
+ '|'
+ ui['sunday_from']
+ '-'
+ ui['sunday_to']
)
data['@working_schedule_type'] = ui['working_schedule_type']
# event start
on_event_start = [
'%(script)s start %%t' % {'script': meyectl.find_command('relayevent')}
]
if ui['email_notifications_enabled']:
emails = re.sub('\\s', '', ui['email_notifications_addresses'])
line = (
"%(script)s '%(server)s' '%(port)s' '%(account)s' '%(password)s' '%(tls)s' '%(from)s' '%(to)s' "
"'motion_start' '%%t' '%%Y-%%m-%%dT%%H:%%M:%%S' '%(timespan)s'"
% {
'script': meyectl.find_command('sendmail'),
'server': ui['email_notifications_smtp_server'],
'port': ui['email_notifications_smtp_port'],
'account': ui['email_notifications_smtp_account'],
'password': ui['email_notifications_smtp_password']
.replace(';', '\\;')
.replace('%', '%%'),
'tls': ui['email_notifications_smtp_tls'],
'from': ui['email_notifications_from'],
'to': emails,
'timespan': ui['email_notifications_picture_time_span'],
}
)
on_event_start.append(line)
if ui['telegram_notifications_enabled']:
line = (
"%(script)s '%(api)s' '%(chatid)s' "
"'motion_start' '%%t' '%%Y-%%m-%%dT%%H:%%M:%%S' '%(timespan)s'"
% {
'script': meyectl.find_command('sendtelegram'),
'api': ui['telegram_notifications_api'],
'chatid': ui['telegram_notifications_chat_id'],
'timespan': ui['telegram_notifications_picture_time_span'],
}
)
on_event_start.append(line)
if ui['web_hook_notifications_enabled']:
url = re.sub('\\s', '+', ui['web_hook_notifications_url'])
on_event_start.append(
"{script} '{method}' '{url}'".format(
script=meyectl.find_command('webhook'),
method=ui['web_hook_notifications_http_method'],
url=url,
)
)
if ui['command_notifications_enabled']:
on_event_start += utils.split_semicolon(ui['command_notifications_exec'])
data['on_event_start'] = '; '.join(on_event_start)
# event end
on_event_end = [
'%(script)s stop %%t' % {'script': meyectl.find_command('relayevent')}
]
if ui['web_hook_end_notifications_enabled']:
url = re.sub(r'\s', '+', ui['web_hook_end_notifications_url'])
on_event_end.append(
"%(script)s '%(method)s' '%(url)s'"
% {
'script': meyectl.find_command('webhook'),
'method': ui['web_hook_end_notifications_http_method'],
'url': url,
}
)
if ui['command_end_notifications_enabled']:
on_event_end += utils.split_semicolon(ui['command_end_notifications_exec'])
data['on_event_end'] = '; '.join(on_event_end)
# movie end
on_movie_end = [
'%(script)s movie_end %%t %%f' % {'script': meyectl.find_command('relayevent')}
]
if ui['web_hook_storage_enabled']:
url = re.sub('\\s', '+', ui['web_hook_storage_url'])
on_movie_end.append(
"{script} '{method}' '{url}'".format(
script=meyectl.find_command('webhook'),
method=ui['web_hook_storage_http_method'],
url=url,
)
)
if ui['command_storage_enabled']:
on_movie_end += utils.split_semicolon(ui['command_storage_exec'])
data['on_movie_end'] = '; '.join(on_movie_end)
# picture save
on_picture_save = [
'%(script)s picture_save %%t %%f'
% {'script': meyectl.find_command('relayevent')}
]
if ui['web_hook_storage_enabled']:
url = re.sub('\\s', '+', ui['web_hook_storage_url'])
on_picture_save.append(
"{script} '{method}' '{url}'".format(
script=meyectl.find_command('webhook'),
method=ui['web_hook_storage_http_method'],
url=url,
)
)
if ui['command_storage_enabled']:
on_picture_save += utils.split_semicolon(ui['command_storage_exec'])
data['on_picture_save'] = '; '.join(on_picture_save)
# additional configs
for name, value in list(ui.items()):
if not name.startswith('_'):
continue
data['@' + name] = value
# extra motion options
for name in list(prev_config.keys()):
if name not in _USED_MOTION_OPTIONS and not name.startswith('@'):
prev_config.pop(name)
extra_options = ui.get('extra_options', [])
for name, value in extra_options:
data[name] = value or ''
prev_config.update(data)
return prev_config
|
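In both versions above, the motion threshold is derived from a UI percentage and the frame size. A small standalone sketch of that arithmetic (the 320x240 resolution and the 1.5% value are illustrative, not taken from the snippet):
def motion_threshold(frame_change_threshold_percent, width, height):
    """Number of changed pixels that triggers motion, as computed in the snippet above."""
    return int(float(frame_change_threshold_percent) * width * height / 100)

# 1.5% of a 320x240 frame -> 1152 pixels must change before motion is reported.
assert motion_threshold('1.5', 320, 240) == 1152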
31,447 |
def get_malicious_domains_for_ip_command(rlb):
# Initialize
contents = []
context = {}
headers = [] # type: ignore
results = []
context_dbotscore = []
context_malicious = []
# Get vars
ip = demisto.args()['ip']
# Fetch data
res = get_malicious_domains_for_ip(ip)
if res:
# Process response - build context and markdown table
domains = []
for item in res:
domains.append(item['name'])
domains = get_domains_categorization(domains)
domains_context = []
if domains:
for domain in domains:
domains_context.append({
'Name': domain,
'MalwareCategories': domains[domain]['security_categories'],
'ContentCategories': domains[domain]['content_categories']
})
contents.append({
'Name': domain,
'Malware Categories': domains[domain]['security_categories'],
'Content Categories': domains[domain]['content_categories']
})
context_dbotscore.append({
'Indicator': domain,
'Type': 'domain',
'Vendor': 'Cisco Umbrella Investigate',
'Score': 3,
'Reliability': rlb
})
context_malicious.append({
'Name': domain,
'Malicious': {
'Vendor': 'Cisco Umbrella Investigate',
'Description': 'For IP ' + ip
}
})
context['Umbrella.MaliciousDomains(val.IP && val.IP == obj.IP)'] = {
'IP': ip,
'Data': domains_context
}
context[outputPaths['domain']] = context_malicious # type: ignore
context[outputPaths['dbotscore']] = context_dbotscore # type: ignore
results.append({
'Type': entryTypes['note'],
'ContentsFormat': formats['json'],
'Contents': contents,
'ReadableContentsFormat': formats['markdown'],
'HumanReadable': tableToMarkdown('"Umbrella Investigate" Malicious Domains for an IP: ' + ip, contents,
headers),
'EntryContext': context
})
return results
|
def get_malicious_domains_for_ip_command(reliability):
# Initialize
contents = []
context = {}
headers = [] # type: ignore
results = []
context_dbotscore = []
context_malicious = []
# Get vars
ip = demisto.args()['ip']
# Fetch data
res = get_malicious_domains_for_ip(ip)
if res:
# Process response - build context and markdown table
domains = []
for item in res:
domains.append(item['name'])
domains = get_domains_categorization(domains)
domains_context = []
if domains:
for domain in domains:
domains_context.append({
'Name': domain,
'MalwareCategories': domains[domain]['security_categories'],
'ContentCategories': domains[domain]['content_categories']
})
contents.append({
'Name': domain,
'Malware Categories': domains[domain]['security_categories'],
'Content Categories': domains[domain]['content_categories']
})
context_dbotscore.append({
'Indicator': domain,
'Type': 'domain',
'Vendor': 'Cisco Umbrella Investigate',
'Score': 3,
                'Reliability': reliability
})
context_malicious.append({
'Name': domain,
'Malicious': {
'Vendor': 'Cisco Umbrella Investigate',
'Description': 'For IP ' + ip
}
})
context['Umbrella.MaliciousDomains(val.IP && val.IP == obj.IP)'] = {
'IP': ip,
'Data': domains_context
}
context[outputPaths['domain']] = context_malicious # type: ignore
context[outputPaths['dbotscore']] = context_dbotscore # type: ignore
results.append({
'Type': entryTypes['note'],
'ContentsFormat': formats['json'],
'Contents': contents,
'ReadableContentsFormat': formats['markdown'],
'HumanReadable': tableToMarkdown('"Umbrella Investigate" Malicious Domains for an IP: ' + ip, contents,
headers),
'EntryContext': context
})
return results
|
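The command above builds three parallel structures per domain (human-readable rows, DBot scores, and malicious-domain context). A stripped-down sketch of that loop in plain Python, using a hard-coded categorization dict in place of the real get_domains_categorization() call (the IP, domain and reliability values are invented for illustration):
def build_domain_context(ip, domains, reliability):
    """Mirror of the loop above: one table row, one DBot score and one malicious entry per domain."""
    contents, dbot_scores, malicious = [], [], []
    for name, cats in domains.items():
        contents.append({
            'Name': name,
            'Malware Categories': cats['security_categories'],
            'Content Categories': cats['content_categories'],
        })
        dbot_scores.append({
            'Indicator': name, 'Type': 'domain',
            'Vendor': 'Cisco Umbrella Investigate', 'Score': 3,
            'Reliability': reliability,
        })
        malicious.append({
            'Name': name,
            'Malicious': {'Vendor': 'Cisco Umbrella Investigate', 'Description': 'For IP ' + ip},
        })
    return contents, dbot_scores, malicious

# Illustrative input only.
rows, scores, bad = build_domain_context(
    '198.51.100.7',
    {'evil.example': {'security_categories': ['Malware'], 'content_categories': []}},
    'B - Usually reliable',
)
assert scores[0]['Reliability'] == 'B - Usually reliable'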
28,620 |
def from_beanmachine(
sampler=None,
*,
coords=None,
dims=None,
):
"""Convert Bean Machine MonteCarloSamples object into an InferenceData object.
For a usage example read the
:ref:`Creating InferenceData section on from_beanmachine <creating_InferenceData>`
Parameters
----------
sampler : bm.MonteCarloSamples
Fitted MonteCarloSamples object from Bean Machine
coords : dict[str] -> list[str]
Map of dimensions to coordinates
dims : dict[str] -> list[str]
Map variable names to their coordinates
"""
return BMConverter(
sampler=sampler,
coords=coords,
dims=dims,
).to_inference_data()
|
def from_beanmachine(
sampler=None,
*,
coords=None,
dims=None,
):
"""Convert Bean Machine MonteCarloSamples object into an InferenceData object.
For a usage example read the
:ref:`Creating InferenceData section on from_beanmachine <creating_InferenceData>`
Parameters
----------
sampler : bm.MonteCarloSamples
Fitted MonteCarloSamples object from Bean Machine
coords : dict of {str : array-like}
Map of dimensions to coordinates
    dims : dict of {str : list of str}
Map variable names to their coordinates
"""
return BMConverter(
sampler=sampler,
coords=coords,
dims=dims,
).to_inference_data()
|
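The from_beanmachine converter above simply forwards ``coords`` and ``dims`` to the underlying converter. A hypothetical pair of mappings matching the documented shapes (the variable and dimension names below are invented for illustration):
# coords: dict of {str : array-like} -- coordinate values for each named dimension.
coords = {"school": ["Choate", "Deerfield", "Phillips"]}
# dims: maps each variable name to the list of dimensions it is defined over.
dims = {"theta": ["school"]}

# These would be passed straight through, e.g.:
# idata = from_beanmachine(sampler, coords=coords, dims=dims)
assert set(dims["theta"]).issubset(coords)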
9,315 |
def main():
"""
:return: token
"""
# define the available arguments/parameters that a user can pass to
# the module
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=dict(
iap_port=dict(type='str', required=True),
iap_fqdn=dict(type='str', required=True),
username=dict(type='str', required=True),
password=dict(type='str', required=True),
https=(dict(type='bool', default=False))
)
)
get_token(module)
|
def main():
"""
:return: token
"""
# define the available arguments/parameters that a user can pass to
# the module
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=dict(
iap_port=dict(type='int', required=True),
iap_fqdn=dict(type='str', required=True),
username=dict(type='str', required=True),
password=dict(type='str', required=True),
https=(dict(type='bool', default=False))
)
)
get_token(module)
|
2,967 |
def lexsort_indexer(keys, orders=None, na_position="last"):
from pandas.core.arrays import Categorical
labels = []
shape = []
if isinstance(orders, bool):
orders = [orders] * len(keys)
elif orders is None:
orders = [True] * len(keys)
for key, order in zip(keys, orders):
# we are already a Categorical
if is_categorical_dtype(key):
cat = key
# create the Categorical
else:
cat = Categorical(key, ordered=True)
if na_position not in ["last", "first"]:
raise ValueError(f"invalid na_position: {na_position}")
n = len(cat.categories)
codes = cat.codes.copy()
mask = cat.codes == -1
if order: # ascending
if na_position == "last":
codes = np.where(mask, n, codes)
elif na_position == "first":
codes += 1
else: # not order means descending
if na_position == "last":
codes = np.where(mask, n, n - codes - 1)
elif na_position == "first":
codes = np.where(mask, 0, n - codes)
if mask.any():
n += 1
shape.append(n)
labels.append(codes)
return indexer_from_factorized(labels, shape)
|
def lexsort_indexer(keys, orders=None, na_position="last"):
from pandas.core.arrays import Categorical
labels = []
shape = []
if isinstance(orders, bool):
orders = [orders] * len(keys)
elif orders is None:
orders = [True] * len(keys)
for key, order in zip(keys, orders):
# we are already a Categorical
if is_categorical_dtype(key):
cat = key
# create the Categorical
else:
cat = Categorical(key, ordered=True)
if na_position not in ["last", "first"]:
raise ValueError(f"invalid na_position: {repr(na_position)}")
n = len(cat.categories)
codes = cat.codes.copy()
mask = cat.codes == -1
if order: # ascending
if na_position == "last":
codes = np.where(mask, n, codes)
elif na_position == "first":
codes += 1
else: # not order means descending
if na_position == "last":
codes = np.where(mask, n, n - codes - 1)
elif na_position == "first":
codes = np.where(mask, 0, n - codes)
if mask.any():
n += 1
shape.append(n)
labels.append(codes)
return indexer_from_factorized(labels, shape)
|
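The code remapping in lexsort_indexer is easiest to see on a concrete column. A numpy-only sketch of the ascending / na_position="last" branch (the input values are illustrative):
import numpy as np

# Categorical-style codes for ['b', 'a', NaN, 'b'] with ordered categories ['a', 'b'].
codes = np.array([1, 0, -1, 1])
n = 2                       # number of categories
mask = codes == -1          # missing values are encoded as -1

# Ascending order, na_position='last': missing values get the largest code (n).
remapped = np.where(mask, n, codes)
assert remapped.tolist() == [1, 0, 2, 1]   # NaN now sorts after 'a' and 'b'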
32,151 |
def results(results):
"""Outputs entries to the war-room
Args:
results (Union[list, dict]): The entry object or array of entry objects to output
For example: results = {
'Type' : entryTypes['note'],
'Contents': data,
'ContentsFormat' : formats['json'],
'HumanReadable': md,
'ReadableContentsFormat' : formats['markdown'],
'EntryContext' : context,
'Tags' : ['tag1', 'tag2']
}
Returns:
None: No data returned
"""
if isinstance(results, dict) and results.get("contents"):
results = results.get("contents")
log("demisto results: {}".format(json.dumps(results, indent=4, sort_keys=True)))
|
def results(results):
"""Outputs entries to the war-room
Args:
results (Union[list, dict]): The entry object or array of entry objects to output
For example: results = {
'Type' : entryTypes['note'],
'Contents': data,
'ContentsFormat' : EntryFormat.JSON,
'HumanReadable': md,
'ReadableContentsFormat' : formats['markdown'],
'EntryContext' : context,
'Tags' : ['tag1', 'tag2']
}
Returns:
None: No data returned
"""
if isinstance(results, dict) and results.get("contents"):
results = results.get("contents")
log("demisto results: {}".format(json.dumps(results, indent=4, sort_keys=True)))
|
31,008 |
def test_get_multiple_packs_dirs(requests_mock):
"""
Scenario: Get a pack dir name from pull request files
Given
- A pull request
- A file in the pull request is in a pack
When
- Getting the pack dir name from a pull request
Then
- Ensure the pack dir name is returned correctly
"""
branch = 'contrib_branch'
pr_number = '1'
repo = 'contrib_repo'
requests_mock.get(
'https://api.github.com/repos/demisto/content/pulls/1/files',
[{'json': github_response_1, 'status_code': 200},
{'json': github_response_2, 'status_code': 200},
{'json': github_response_3, 'status_code': 200},
{'json': github_response_4, 'status_code': 200}]
)
pack_dir = get_pack_dir(branch, pr_number, repo)
assert pack_dir == ['Slack', 'Slack1']
|
def test_get_multiple_packs_dirs(requests_mock):
"""
Scenario: Get a list of pack dir names from pull request files
Given
- A pull request
- A file in the pull request is in a pack
When
- Getting the pack dir name from a pull request
Then
- Ensure the pack dir name is returned correctly
"""
branch = 'contrib_branch'
pr_number = '1'
repo = 'contrib_repo'
requests_mock.get(
'https://api.github.com/repos/demisto/content/pulls/1/files',
[{'json': github_response_1, 'status_code': 200},
{'json': github_response_2, 'status_code': 200},
{'json': github_response_3, 'status_code': 200},
{'json': github_response_4, 'status_code': 200}]
)
pack_dir = get_pack_dir(branch, pr_number, repo)
assert pack_dir == ['Slack', 'Slack1']
|
43,006 |
def delete_config(filename="config.toml", directory=None):
"""Delete a configuration file
Keyword Args:
filename (str): the configuration file to delete
directory (str): the directory of the configuration file if None, use
default directory
"""
if directory is None:
file_path = get_default_config_path(filename)
else:
file_path = os.path.join(directory, filename)
os.remove(file_path)
|
def delete_config(filename="config.toml", directory=None):
"""Delete a configuration file
Keyword Args:
filename (str): the filename of the configuration file to delete
directory (str): the directory of the configuration file if None, use
default directory
"""
if directory is None:
file_path = get_default_config_path(filename)
else:
file_path = os.path.join(directory, filename)
os.remove(file_path)
|
35,067 |
def generate_project_from_mlf(
template_project_dir: typing.Union[pathlib.Path, str],
project_dir: typing.Union[pathlib.Path, str],
mlf: typing.Union[pathlib.Path, str],
options: dict,
):
"""Generate a project from a platform template and an existing MLF.
Parameters
----------
template_project_path : pathlib.Path or str
Path to a template project containing a microTVM Project API server.
project_dir : pathlib.Path or str
Path to a directory where the project will be created.
mlf : pathlib.Path or str
Path to the Model Library Format archive that will be used when creating
the new project.
options : dict
Project API options given to the microTVM API server for the specified platform.
Returns
-------
GeneratedProject :
A class that wraps the generated project and which can be used to further interact with it.
"""
template = TemplateProject.from_directory(str(template_project_dir))
return template.generate_project_from_mlf(str(mlf), str(project_dir), options)
|
def generate_project_from_mlf(
template_project_dir: typing.Union[pathlib.Path, str],
project_dir: typing.Union[pathlib.Path, str],
mlf_path: typing.Union[pathlib.Path, str],
options: dict,
):
"""Generate a project from a platform template and an existing MLF.
Parameters
----------
    template_project_dir : pathlib.Path or str
Path to a template project containing a microTVM Project API server.
project_dir : pathlib.Path or str
Path to a directory where the project will be created.
    mlf_path : pathlib.Path or str
Path to the Model Library Format archive that will be used when creating
the new project.
options : dict
Project API options given to the microTVM API server for the specified platform.
Returns
-------
GeneratedProject :
A class that wraps the generated project and which can be used to further interact with it.
"""
template = TemplateProject.from_directory(str(template_project_dir))
    return template.generate_project_from_mlf(str(mlf_path), str(project_dir), options)
|
49,913 |
def test_prepare_new_username_for_conflicted_username(db):
user = User.objects.create_user("jkowalski", "jkowalski@example.com")
new_username = User.objects.prepare_new_username("jkowalski")
assert new_username != user.username
assert new_username.startswith(user.username)
|
def test_sso_username_is_suffixed_if_its_not_available(db):
...
user = User.objects.create_user("jkowalski", "jkowalski@example.com")
new_username = User.objects.prepare_new_username("jkowalski")
assert new_username != user.username
assert new_username.startswith(user.username)
|
56,402 |
def main(args):
"""
Entry point for parsing some analysis results and printing them to the
stdout in a human-readable format.
"""
logger.setup_logger(args.verbose if 'verbose' in args else None)
try:
cmd_config.check_config_file(args)
except FileNotFoundError as fnerr:
LOG.error(fnerr)
sys.exit(1)
export = args.export if 'export' in args else None
if export == 'html' and 'output_path' not in args:
LOG.error("Argument --export not allowed without argument --output "
"when exporting to HTML.")
sys.exit(1)
if export == 'gerrit' and gerrit.no_mandatory_env_var_is_set():
sys.exit(1)
context = analyzer_context.get_context()
# To ensure the help message prints the default folder properly,
# the 'default' for 'args.input' is a string, not a list.
# But we need lists for the foreach here to work.
if isinstance(args.input, str):
args.input = [args.input]
original_cwd = os.getcwd()
src_comment_status_filter = args.review_status
suppr_handler = None
if 'suppress' in args:
__make_handler = False
if not os.path.isfile(args.suppress):
if 'create_suppress' in args:
with open(args.suppress, 'w',
encoding='utf-8', errors='ignore') as _:
# Just create the file.
__make_handler = True
LOG.info("Will write source-code suppressions to "
"suppress file: %s", args.suppress)
else:
LOG.warning("Suppress file '%s' given, but it does not exist"
" -- will not suppress anything.", args.suppress)
else:
__make_handler = True
if __make_handler:
suppr_handler = suppress_handler.\
GenericSuppressHandler(args.suppress,
'create_suppress' in args,
src_comment_status_filter)
elif 'create_suppress' in args:
LOG.error("Can't use '--export-source-suppress' unless '--suppress "
"SUPPRESS_FILE' is also given.")
sys.exit(2)
processed_path_hashes = set()
skip_handler = None
if 'skipfile' in args:
with open(args.skipfile, 'r',
encoding='utf-8', errors='ignore') as skip_file:
skip_handler = SkipListHandler(skip_file.read())
trim_path_prefixes = args.trim_path_prefix if \
'trim_path_prefix' in args else None
if export:
if export not in EXPORT_TYPES:
LOG.error(f"Unknown export format: {export}")
return
# The HTML part will be handled separately below.
if export != 'html':
try:
res = parse_convert_reports(args.input,
export,
context.severity_map,
trim_path_prefixes)
if 'output_path' in args:
output_path = os.path.abspath(args.output_path)
if not os.path.exists(output_path):
os.mkdir(output_path)
reports_json = os.path.join(output_path, 'reports.json')
with open(reports_json,
mode='w',
encoding='utf-8', errors="ignore") as output_f:
output_f.write(json.dumps(res))
return print(json.dumps(res))
except Exception as ex:
LOG.error(ex)
sys.exit(1)
def trim_path_prefixes_handler(source_file):
"""
Callback to util.trim_path_prefixes to prevent module dependency
of plist_to_html
"""
return util.trim_path_prefixes(source_file, trim_path_prefixes)
html_builder = None
def skip_html_report_data_handler(report_hash, source_file, report_line,
checker_name, diag, files):
"""
Report handler which skips bugs which were suppressed by source code
comments. This function will return a tuple. The first element
will decide whether the report should be skipped or not and the second
element will be a list of source code comments related to the actual
report.
"""
files_dict = {k: v for k, v in enumerate(files)}
report = Report({'check_name': checker_name},
diag['path'],
files_dict,
metadata=None)
path_hash = get_report_path_hash(report)
if path_hash in processed_path_hashes:
LOG.debug("Skip report because it is a deduplication of an "
"already processed report!")
LOG.debug("Path hash: %s", path_hash)
LOG.debug(diag)
return True, []
skip, source_code_comments = skip_report(report_hash,
source_file,
report_line,
checker_name,
suppr_handler,
src_comment_status_filter)
if skip_handler:
skip |= skip_handler.should_skip(source_file)
if not skip:
processed_path_hashes.add(path_hash)
return skip, source_code_comments
file_change = set()
severity_stats = defaultdict(int)
file_stats = defaultdict(int)
report_count = 0
for input_path in args.input:
input_path = os.path.abspath(input_path)
os.chdir(original_cwd)
LOG.debug("Parsing input argument: '%s'", input_path)
if export == 'html':
output_path = os.path.abspath(args.output_path)
if not html_builder:
html_builder = \
PlistToHtml.HtmlBuilder(context.path_plist_to_html_dist,
context.severity_map)
LOG.info("Generating html output files:")
PlistToHtml.parse(input_path,
output_path,
context.path_plist_to_html_dist,
skip_html_report_data_handler,
html_builder,
trim_path_prefixes_handler)
continue
files = []
metadata_dict = {}
if os.path.isfile(input_path):
files.append(input_path)
elif os.path.isdir(input_path):
metadata_file = os.path.join(input_path, "metadata.json")
if os.path.exists(metadata_file):
metadata_dict = util.load_json_or_empty(metadata_file)
LOG.debug(metadata_dict)
if 'working_directory' in metadata_dict:
working_dir = metadata_dict['working_directory']
try:
os.chdir(working_dir)
except OSError as oerr:
LOG.debug(oerr)
LOG.error("Working directory %s is missing.\n"
"Can not parse reports safely.", working_dir)
sys.exit(1)
_, _, file_names = next(os.walk(input_path), ([], [], []))
files = [os.path.join(input_path, file_name) for file_name
in file_names]
file_report_map = defaultdict(list)
plist_pltf = PlistToPlaintextFormatter(suppr_handler,
skip_handler,
context.severity_map,
processed_path_hashes,
trim_path_prefixes,
src_comment_status_filter)
plist_pltf.print_steps = 'print_steps' in args
for file_path in files:
f_change = parse_with_plt_formatter(file_path,
metadata_dict,
plist_pltf,
file_report_map)
file_change = file_change.union(f_change)
report_stats = plist_pltf.write(file_report_map)
sev_stats = report_stats.get('severity')
for severity in sev_stats:
severity_stats[severity] += sev_stats[severity]
f_stats = report_stats.get('files')
for file_path in f_stats:
file_stats[file_path] += f_stats[file_path]
rep_stats = report_stats.get('reports')
report_count += rep_stats.get("report_count", 0)
# Create index.html and statistics.html for the generated html files.
if html_builder:
html_builder.create_index_html(args.output_path)
html_builder.create_statistics_html(args.output_path)
print('\nTo view statistics in a browser run:\n> firefox {0}'.format(
os.path.join(args.output_path, 'statistics.html')))
print('\nTo view the results in a browser run:\n> firefox {0}'.format(
os.path.join(args.output_path, 'index.html')))
else:
print("\n----==== Summary ====----")
if file_stats:
vals = [[os.path.basename(k), v] for k, v in
dict(file_stats).items()]
vals.sort(key=itemgetter(0))
keys = ['Filename', 'Report count']
table = twodim.to_str('table', keys, vals, 1, True)
print(table)
if severity_stats:
vals = [[k, v] for k, v in dict(severity_stats).items()]
vals.sort(key=itemgetter(0))
keys = ['Severity', 'Report count']
table = twodim.to_str('table', keys, vals, 1, True)
print(table)
print("----=================----")
print("Total number of reports: {}".format(report_count))
print("----=================----")
if file_change:
changed_files = '\n'.join([' - ' + f for f in file_change])
LOG.warning("The following source file contents changed since the "
"latest analysis:\n%s\nMultiple reports were not "
"shown and skipped from the statistics. Please "
"analyze your project again to update the "
"reports!", changed_files)
os.chdir(original_cwd)
|
def main(args):
"""
Entry point for parsing some analysis results and printing them to the
stdout in a human-readable format.
"""
logger.setup_logger(args.verbose if 'verbose' in args else None)
try:
cmd_config.check_config_file(args)
except FileNotFoundError as fnerr:
LOG.error(fnerr)
sys.exit(1)
export = args.export if 'export' in args else None
if export == 'html' and 'output_path' not in args:
LOG.error("Argument --export not allowed without argument --output "
"when exporting to HTML.")
sys.exit(1)
if export == 'gerrit' and not gerrit.is_mandatory_env_var_set():
sys.exit(1)
context = analyzer_context.get_context()
# To ensure the help message prints the default folder properly,
# the 'default' for 'args.input' is a string, not a list.
# But we need lists for the foreach here to work.
if isinstance(args.input, str):
args.input = [args.input]
original_cwd = os.getcwd()
src_comment_status_filter = args.review_status
suppr_handler = None
if 'suppress' in args:
__make_handler = False
if not os.path.isfile(args.suppress):
if 'create_suppress' in args:
with open(args.suppress, 'w',
encoding='utf-8', errors='ignore') as _:
# Just create the file.
__make_handler = True
LOG.info("Will write source-code suppressions to "
"suppress file: %s", args.suppress)
else:
LOG.warning("Suppress file '%s' given, but it does not exist"
" -- will not suppress anything.", args.suppress)
else:
__make_handler = True
if __make_handler:
suppr_handler = suppress_handler.\
GenericSuppressHandler(args.suppress,
'create_suppress' in args,
src_comment_status_filter)
elif 'create_suppress' in args:
LOG.error("Can't use '--export-source-suppress' unless '--suppress "
"SUPPRESS_FILE' is also given.")
sys.exit(2)
processed_path_hashes = set()
skip_handler = None
if 'skipfile' in args:
with open(args.skipfile, 'r',
encoding='utf-8', errors='ignore') as skip_file:
skip_handler = SkipListHandler(skip_file.read())
trim_path_prefixes = args.trim_path_prefix if \
'trim_path_prefix' in args else None
if export:
if export not in EXPORT_TYPES:
LOG.error(f"Unknown export format: {export}")
return
# The HTML part will be handled separately below.
if export != 'html':
try:
res = parse_convert_reports(args.input,
export,
context.severity_map,
trim_path_prefixes)
if 'output_path' in args:
output_path = os.path.abspath(args.output_path)
if not os.path.exists(output_path):
os.mkdir(output_path)
reports_json = os.path.join(output_path, 'reports.json')
with open(reports_json,
mode='w',
encoding='utf-8', errors="ignore") as output_f:
output_f.write(json.dumps(res))
return print(json.dumps(res))
except Exception as ex:
LOG.error(ex)
sys.exit(1)
def trim_path_prefixes_handler(source_file):
"""
Callback to util.trim_path_prefixes to prevent module dependency
of plist_to_html
"""
return util.trim_path_prefixes(source_file, trim_path_prefixes)
html_builder = None
def skip_html_report_data_handler(report_hash, source_file, report_line,
checker_name, diag, files):
"""
Report handler which skips bugs which were suppressed by source code
comments. This function will return a tuple. The first element
will decide whether the report should be skipped or not and the second
element will be a list of source code comments related to the actual
report.
"""
files_dict = {k: v for k, v in enumerate(files)}
report = Report({'check_name': checker_name},
diag['path'],
files_dict,
metadata=None)
path_hash = get_report_path_hash(report)
if path_hash in processed_path_hashes:
LOG.debug("Skip report because it is a deduplication of an "
"already processed report!")
LOG.debug("Path hash: %s", path_hash)
LOG.debug(diag)
return True, []
skip, source_code_comments = skip_report(report_hash,
source_file,
report_line,
checker_name,
suppr_handler,
src_comment_status_filter)
if skip_handler:
skip |= skip_handler.should_skip(source_file)
if not skip:
processed_path_hashes.add(path_hash)
return skip, source_code_comments
file_change = set()
severity_stats = defaultdict(int)
file_stats = defaultdict(int)
report_count = 0
for input_path in args.input:
input_path = os.path.abspath(input_path)
os.chdir(original_cwd)
LOG.debug("Parsing input argument: '%s'", input_path)
if export == 'html':
output_path = os.path.abspath(args.output_path)
if not html_builder:
html_builder = \
PlistToHtml.HtmlBuilder(context.path_plist_to_html_dist,
context.severity_map)
LOG.info("Generating html output files:")
PlistToHtml.parse(input_path,
output_path,
context.path_plist_to_html_dist,
skip_html_report_data_handler,
html_builder,
trim_path_prefixes_handler)
continue
files = []
metadata_dict = {}
if os.path.isfile(input_path):
files.append(input_path)
elif os.path.isdir(input_path):
metadata_file = os.path.join(input_path, "metadata.json")
if os.path.exists(metadata_file):
metadata_dict = util.load_json_or_empty(metadata_file)
LOG.debug(metadata_dict)
if 'working_directory' in metadata_dict:
working_dir = metadata_dict['working_directory']
try:
os.chdir(working_dir)
except OSError as oerr:
LOG.debug(oerr)
LOG.error("Working directory %s is missing.\n"
"Can not parse reports safely.", working_dir)
sys.exit(1)
_, _, file_names = next(os.walk(input_path), ([], [], []))
files = [os.path.join(input_path, file_name) for file_name
in file_names]
file_report_map = defaultdict(list)
plist_pltf = PlistToPlaintextFormatter(suppr_handler,
skip_handler,
context.severity_map,
processed_path_hashes,
trim_path_prefixes,
src_comment_status_filter)
plist_pltf.print_steps = 'print_steps' in args
for file_path in files:
f_change = parse_with_plt_formatter(file_path,
metadata_dict,
plist_pltf,
file_report_map)
file_change = file_change.union(f_change)
report_stats = plist_pltf.write(file_report_map)
sev_stats = report_stats.get('severity')
for severity in sev_stats:
severity_stats[severity] += sev_stats[severity]
f_stats = report_stats.get('files')
for file_path in f_stats:
file_stats[file_path] += f_stats[file_path]
rep_stats = report_stats.get('reports')
report_count += rep_stats.get("report_count", 0)
# Create index.html and statistics.html for the generated html files.
if html_builder:
html_builder.create_index_html(args.output_path)
html_builder.create_statistics_html(args.output_path)
print('\nTo view statistics in a browser run:\n> firefox {0}'.format(
os.path.join(args.output_path, 'statistics.html')))
print('\nTo view the results in a browser run:\n> firefox {0}'.format(
os.path.join(args.output_path, 'index.html')))
else:
print("\n----==== Summary ====----")
if file_stats:
vals = [[os.path.basename(k), v] for k, v in
dict(file_stats).items()]
vals.sort(key=itemgetter(0))
keys = ['Filename', 'Report count']
table = twodim.to_str('table', keys, vals, 1, True)
print(table)
if severity_stats:
vals = [[k, v] for k, v in dict(severity_stats).items()]
vals.sort(key=itemgetter(0))
keys = ['Severity', 'Report count']
table = twodim.to_str('table', keys, vals, 1, True)
print(table)
print("----=================----")
print("Total number of reports: {}".format(report_count))
print("----=================----")
if file_change:
changed_files = '\n'.join([' - ' + f for f in file_change])
LOG.warning("The following source file contents changed since the "
"latest analysis:\n%s\nMultiple reports were not "
"shown and skipped from the statistics. Please "
"analyze your project again to update the "
"reports!", changed_files)
os.chdir(original_cwd)
|
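Both versions of main() merge per-input report statistics into running totals with defaultdict(int). A compact sketch of that aggregation step on made-up stats dictionaries shaped like the formatter's write() output (the file names and counts are illustrative):
from collections import defaultdict

severity_stats = defaultdict(int)
file_stats = defaultdict(int)
report_count = 0

# Illustrative per-input stats only.
per_input_stats = [
    {'severity': {'HIGH': 2, 'LOW': 1}, 'files': {'a.c': 3}, 'reports': {'report_count': 3}},
    {'severity': {'HIGH': 1}, 'files': {'b.c': 1}, 'reports': {'report_count': 1}},
]

for report_stats in per_input_stats:
    for severity, count in report_stats.get('severity', {}).items():
        severity_stats[severity] += count
    for file_path, count in report_stats.get('files', {}).items():
        file_stats[file_path] += count
    report_count += report_stats.get('reports', {}).get('report_count', 0)

assert severity_stats['HIGH'] == 3 and report_count == 4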
11,804 |
def subtract(image1, image2, scale=1.0, offset=0):
"""
Subtracts two images, dividing the result by scale and adding the offset.
If omitted, scale defaults to 1.0, and offset to 0.0. At least one of the
images must be "1" mode.
.. code-block:: python
out = ((image1 - image2) / scale + offset)
:rtype: :py:class:`~PIL.Image.Image`
"""
image1.load()
image2.load()
return image1._new(image1.im.chop_subtract(image2.im, scale, offset))
|
def subtract(image1, image2, scale=1.0, offset=0):
"""
Subtracts two images, dividing the result by scale and adding the offset.
If omitted, scale defaults to 1.0, and offset to 0.0. At least one of the
images must have mode "1".
.. code-block:: python
out = ((image1 - image2) / scale + offset)
:rtype: :py:class:`~PIL.Image.Image`
"""
image1.load()
image2.load()
return image1._new(image1.im.chop_subtract(image2.im, scale, offset))
|
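The formula in the subtract() docstring is plain per-pixel arithmetic. A numpy sketch of the same expression on small arrays (the values are chosen only for illustration):
import numpy as np

def subtract_arrays(a, b, scale=1.0, offset=0):
    """Per-pixel version of out = (image1 - image2) / scale + offset."""
    return (a - b) / scale + offset

a = np.array([[10.0, 20.0]])
b = np.array([[4.0, 8.0]])
assert subtract_arrays(a, b, scale=2.0, offset=1).tolist() == [[4.0, 7.0]]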
53,651 |
def double_sum(attribute_path=None):
"""Creates an aggregator that calculates the sum of the input values.
Does NOT accept ``None`` input values or ``None`` extracted values.
Since the server-side implementation is in Java, values stored in the
Map must be of type ``double`` (primitive or boxed) in Java or of a type
that can be converted to that. That means, one should be able to use this
aggregator with ``float`` or ``int`` values sent from the Python client
unless they are out of range for ``double`` type in Java.
Args:
attribute_path (str): Extracts values from this path, if given.
Returns:
Aggregator[float]: An aggregator that calculates the sum of input the
values.
"""
return _DoubleSumAggregator(attribute_path)
|
def double_sum(attribute_path=None):
"""Creates an aggregator that calculates the sum of the input values.
Does NOT accept ``None`` input values or ``None`` extracted values.
Since the server-side implementation is in Java, values stored in the
Map must be of type ``double`` (primitive or boxed) in Java or of a type
that can be converted to that. That means, one should be able to use this
aggregator with ``float`` or ``int`` values sent from the Python client
unless they are out of range for ``double`` type in Java.
Args:
attribute_path (str): Extracts values from this path, if given.
Returns:
Aggregator[float]: An aggregator that calculates the sum of the input
values.
"""
return _DoubleSumAggregator(attribute_path)
|
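The double_sum() docstring warns that values must fit in a Java double. A small helper sketching that range check on the Python side; the bound used is the IEEE-754 double maximum, which Java's double shares:
import math

JAVA_DOUBLE_MAX = 1.7976931348623157e308  # largest finite IEEE-754 double

def fits_in_java_double(value):
    """True when the value can be represented as a finite Java double."""
    try:
        as_float = float(value)
    except OverflowError:          # e.g. a Python int with too many digits
        return False
    return math.isfinite(as_float) and abs(as_float) <= JAVA_DOUBLE_MAX

assert fits_in_java_double(10**18)
assert not fits_in_java_double(10**400)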
8,800 |
def _join_event_processing(bot):
"""Process a batch of JOIN event from the ``join_events_queue`` queue.
Every time this function is executed, it process at most ``throttle_join``
JOIN event: for each, it sends a WHO request to know more about the
channel. This will prevent an excess of flood when there are too many
channels to join at once.
"""
batch_size = max(bot.settings.core.throttle_join, 1)
for _ in range(batch_size):
try:
channel = bot.memory['join_events_queue'].popleft()
except IndexError:
break
LOGGER.debug('Send WHO after JOIN channel: %s', channel)
_send_who(bot, channel)
|
def _join_event_processing(bot):
"""Process a batch of JOIN event from the ``join_events_queue`` queue.
    Every time this function is executed, it processes at most ``throttle_join``
JOIN events. For each JOIN, it sends a WHO request to know more about the
channel. This will prevent an excess of flood when there are too many
channels to join at once.
"""
batch_size = max(bot.settings.core.throttle_join, 1)
for _ in range(batch_size):
try:
channel = bot.memory['join_events_queue'].popleft()
except IndexError:
break
LOGGER.debug('Send WHO after JOIN channel: %s', channel)
_send_who(bot, channel)
|
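The JOIN batching above drains at most throttle_join items per call from a deque. A standalone sketch of that pattern with a plain collections.deque (the channel names are illustrative):
from collections import deque

def drain_batch(queue, batch_size):
    """Pop at most batch_size items from the left of the queue, as in the snippet above."""
    processed = []
    for _ in range(max(batch_size, 1)):
        try:
            processed.append(queue.popleft())
        except IndexError:      # queue exhausted before the batch was full
            break
    return processed

join_queue = deque(['#sopel', '#python', '#help'])
assert drain_batch(join_queue, 2) == ['#sopel', '#python']
assert list(join_queue) == ['#help']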
24,570 |
def check_values(
func=None, checks_on_return: Dict[str, bool] = None, **checks: Dict[str, bool]
):
"""
A decorator to 'check' -- limit/control -- the values of input and return
arguments to a function or method.
Parameters
----------
func:
The function to be decorated
checks_on_return: Dict[str, bool]
Specifications for value checks on the return of the function being wrapped.
(see `check values`_ for valid specifications)
**checks: Dict[str, Dict[str, bool]]
Specifications for value checks on the input arguments of the function
being wrapped. Each keyword argument in `checks` is the name of a function
argument to be checked and the keyword value contains the value check
specifications.
.. _`check values`:
The value check specifications are defined within a dictionary containing
the keys defined below. If the dictionary is empty or omits keys,
then the default value will be assumed for the missing keys.
================ ======= ================================================
Key Type Description
================ ======= ================================================
can_be_negative `bool` [DEFAULT `True`] values can be negative
can_be_complex `bool` [DEFAULT `False`] values can be complex numbers
can_be_inf `bool` [DEFAULT `True`] values can be :data:`~numpy.inf`
can_be_nan `bool` [DEFAULT `True`] values can be :data:`~numpy.nan`
none_shall_pass `bool` [DEFAULT `False`] values can be a python `None`
can_be_zero `bool` [DEFAULT `True`] values can be zeros
================ ======= ================================================
Notes
-----
* Checking of function arguments `*args` and `**kwargs` is not supported.
* Full functionality is defined by the class :class:`CheckValues`.
Examples
--------
.. code-block:: python
from plasmapy.utils.decorators import check_values
@check_values(arg1={'can_be_negative': False, 'can_be_nan': False},
arg2={'can_be_inf': False},
checks_on_return={'none_shall_pass': True})
def foo(arg1, arg2):
return None
# on a method
class Foo:
@check_values(arg1={'can_be_negative': False, 'can_be_nan': False},
arg2={'can_be_inf': False},
checks_on_return={'none_shall_pass': True})
def bar(self, arg1, arg2):
return None
"""
if checks_on_return is not None:
checks["checks_on_return"] = checks_on_return
if func is not None:
# `check_values` called as a function
return CheckValues(**checks)(func)
else:
# `check_values` called as a decorator "sugar-syntax"
return CheckValues(**checks)
|
def check_values(
func=None, checks_on_return: Dict[str, bool] = None, **checks: Dict[str, bool]
):
"""
A decorator to 'check' -- limit/control -- the values of input and return
arguments to a function or method.
Parameters
----------
func:
The function to be decorated
checks_on_return: Dict[str, bool]
Specifications for value checks on the return of the function being wrapped.
(see `check values`_ for valid specifications)
**checks: Dict[str, Dict[str, bool]]
Specifications for value checks on the input arguments of the function
being wrapped. Each keyword argument in `checks` is the name of a function
argument to be checked and the keyword value contains the value check
specifications.
.. _`check values`:
The value check specifications are defined within a dictionary containing
the keys defined below. If the dictionary is empty or omits keys,
then the default value will be assumed for the missing keys.
================ ======= ================================================
Key Type Description
================ ======= ================================================
can_be_negative `bool` [DEFAULT `True`] values can be negative
can_be_complex `bool` [DEFAULT `False`] values can be complex numbers
can_be_inf `bool` [DEFAULT `True`] values can be :data:`~numpy.inf`
can_be_nan `bool` [DEFAULT `True`] values can be :data:`~numpy.nan`
none_shall_pass `bool` [DEFAULT `False`] values can be a python `None`
can_be_zero `bool` [DEFAULT `True`] values can be zero
================ ======= ================================================
Notes
-----
* Checking of function arguments `*args` and `**kwargs` is not supported.
* Full functionality is defined by the class :class:`CheckValues`.
Examples
--------
.. code-block:: python
from plasmapy.utils.decorators import check_values
@check_values(arg1={'can_be_negative': False, 'can_be_nan': False},
arg2={'can_be_inf': False},
checks_on_return={'none_shall_pass': True})
def foo(arg1, arg2):
return None
# on a method
class Foo:
@check_values(arg1={'can_be_negative': False, 'can_be_nan': False},
arg2={'can_be_inf': False},
checks_on_return={'none_shall_pass': True})
def bar(self, arg1, arg2):
return None
"""
if checks_on_return is not None:
checks["checks_on_return"] = checks_on_return
if func is not None:
# `check_values` called as a function
return CheckValues(**checks)(func)
else:
# `check_values` called as a decorator "sugar-syntax"
return CheckValues(**checks)
|
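A hedged usage sketch of the decorator above, assuming PlasmaPy is installed; the wrapped function, its argument names, and the exact exception raised by ``CheckValues`` are illustrative:

from plasmapy.utils.decorators import check_values

@check_values(density={"can_be_negative": False, "can_be_nan": False})
def thermal_energy_density(density, temperature):
    return 1.5 * density * temperature

print(thermal_energy_density(1e19, 2.0))  # passes every check
thermal_energy_density(-1e19, 2.0)        # expected to raise: density violates can_be_negative=False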
57,691 |
def test_get_indicators(mocker):
"""Tests get_indicators function
Given
'indicator_types': ['value', 'user-account']
When
- `fetch_indicators_command` or `fetch_indicators_command` are calling the get_indicators function
Then
- convert the result to indicators list
- validate the the indicators list
- validate the the new_last_id to set
"""
client = Client
mocker.patch.object(client.stix_observable, 'list', return_value=RESPONSE_DATA)
new_last_id, indicators = get_indicators(client, indicator_type=['value', 'user-account'])
assert len(indicators) == 2
assert new_last_id == 'YXJyYXljb25uZWN0aW9uOjI='
|
def test_get_indicators(mocker):
"""Tests get_indicators function
Given
'indicator_types': ['value', 'user-account']
When
- `fetch_indicators_command` or `fetch_indicators_command` are calling the get_indicators function
Then
- convert the result to indicators list
- validate the length of the indicators list
- validate that the new_last_id saved into the integration context is the same as the ID returned by the command.
"""
client = Client
mocker.patch.object(client.stix_observable, 'list', return_value=RESPONSE_DATA)
new_last_id, indicators = get_indicators(client, indicator_type=['value', 'user-account'])
assert len(indicators) == 2
assert new_last_id == 'YXJyYXljb25uZWN0aW9uOjI='
|
26,454 |
def match(command):
return (('choco install' in command.script_parts or 'cinst' in command.script_parts)
and 'Installing the following packages' in command.output)
|
def match(command):
return ((command.script.startswith('choco install') or 'cinst' in command.script_parts)
and 'Installing the following packages' in command.output)
|
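Why the rewrite matters: ``command.script_parts`` is a token list, so the two-word string ``'choco install'`` can never equal a single element; only the ``cinst`` alias ever matched. A stand-alone illustration of the two checks (the command object is reduced to plain strings):

script = "choco install sublimetext3"
script_parts = script.split()

print('choco install' in script_parts)          # False: no single token is "choco install"
print(script.startswith('choco install'))       # True: the rewritten check
print('cinst' in "cinst sublimetext3".split())  # True: the alias path is unchanged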
30,597 |
def escape_illegal_characters_in_file_name(file_name):
if file_name:
file_name = re.sub(ESCAPE_CHARACTERS, '-', file_name)
file_name = re.sub('-+', '-', file_name)
return file_name
|
def escape_illegal_characters_in_file_name(file_name):
if file_name:
file_name = re.sub(ESCAPE_CHARACTERS, '-', file_name)
file_name = re.sub(r'-+', '-', file_name)
return file_name
|
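The functional point of the second substitution is collapsing runs of dashes; the raw-string change is stylistic, since ``'-+'`` contains no escape sequences. A self-contained sketch with an assumed ``ESCAPE_CHARACTERS`` pattern:

import re

# Hypothetical set of characters treated as illegal in file names.
ESCAPE_CHARACTERS = r'[<>:"/\\|?*]'

def escape_illegal_characters_in_file_name(file_name):
    if file_name:
        file_name = re.sub(ESCAPE_CHARACTERS, '-', file_name)
        file_name = re.sub(r'-+', '-', file_name)  # collapse consecutive dashes
    return file_name

print(escape_illegal_characters_in_file_name('report: Q1/Q2 <draft>'))  # report- Q1-Q2 -draft-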
41,530 |
def split_and_plot_by_parts(w, I, *args, **kwargs):
"""Plot two discontinued arrays (typically a spectrum) without showing
junctions: first identify junctions then split and plot separately.
Useful for plotting an experimental spectrum defined on different, non overlapping
ranges without showing connecting lines between the ranges, or to plot an
experimental spectrum defined on overlapping ranges, without showing connecting
lines neither.
Parameters
----------
w, I: arrays
typically output of :py:func:`~numpy.hstack`.
Other Parameters
----------------
split_threshold: int
number of standard deviations used for the threshold. Default 10
ax: matplotlib Axes
plot on a particular Axes
kwargs: dict
forwarded to :func:`~matplotlib.pyplot.plot`
cutwings: int
discard this many elements on each side of every range. Default 0
Returns
-------
ax.plot
"""
from publib.tools import keep_color
# Get defaults
ax = kwargs.pop("ax", None)
if ax == None:
ax = plt.gca()
split_threshold = kwargs.pop("split_threshold", 10) # type: int
cutwings = kwargs.pop("cutwings", 0) # type: int
label = kwargs.pop("label", None) # type: str
# identify joints
dw = np.diff(w)
dwmean = dw.mean()
joints = np.argwhere((abs(dw - dwmean) > split_threshold * dw.std())) + 1
# Split
if len(joints) > 0:
ws = np.split(w, joints.flatten())
Is = np.split(I, joints.flatten())
# Plot separately
out = []
for i, (wi, Ii) in enumerate(zip(ws, Is)):
if cutwings:
wi = wi[cutwings:-cutwings]
Ii = Ii[cutwings:-cutwings]
if i == 0: # label once only
out.append(
ax.plot(
wi, Ii, *args, **dict(list(kwargs.items()) + [("label", label)])
)
)
else:
keep_color()
out.append(ax.plot(wi, Ii, *args, **kwargs))
return list(zip(*out))
else:
if cutwings:
w = w[cutwings:-cutwings]
I = I[cutwings:-cutwings]
return ax.plot(w, I, *args, **dict(list(kwargs.items()) + [("label", label)]))
|
def split_and_plot_by_parts(w, I, *args, **kwargs):
"""Plot two discontinued arrays (typically a spectrum) without showing
junctions: first identify junctions then split and plot separately.
Useful for plotting an experimental spectrum defined on different, non overlapping
ranges without showing connecting lines between the ranges, or to plot an
experimental spectrum defined on overlapping ranges, without showing connecting
lines neither.
Parameters
----------
w, I: arrays
typically output of :py:func:`~numpy.hstack`.
Other Parameters
----------------
split_threshold: int
number of standard deviations used for the threshold. Default 10
ax: matplotlib Axes
plot on a particular Axes
kwargs: dict
forwarded to :func:`~matplotlib.pyplot.plot`
cutwings: int
discard this many elements on each side of every range. Default 0
Returns
-------
ax.plot
"""
from publib.tools import keep_color
# Get defaults
ax = kwargs.pop("ax", None)
if ax is None:
ax = plt.gca()
split_threshold = kwargs.pop("split_threshold", 10) # type: int
cutwings = kwargs.pop("cutwings", 0) # type: int
label = kwargs.pop("label", None) # type: str
# identify joints
dw = np.diff(w)
dwmean = dw.mean()
joints = np.argwhere((abs(dw - dwmean) > split_threshold * dw.std())) + 1
# Split
if len(joints) > 0:
ws = np.split(w, joints.flatten())
Is = np.split(I, joints.flatten())
# Plot separately
out = []
for i, (wi, Ii) in enumerate(zip(ws, Is)):
if cutwings:
wi = wi[cutwings:-cutwings]
Ii = Ii[cutwings:-cutwings]
if i == 0: # label once only
out.append(
ax.plot(
wi, Ii, *args, **dict(list(kwargs.items()) + [("label", label)])
)
)
else:
keep_color()
out.append(ax.plot(wi, Ii, *args, **kwargs))
return list(zip(*out))
else:
if cutwings:
w = w[cutwings:-cutwings]
I = I[cutwings:-cutwings]
return ax.plot(w, I, *args, **dict(list(kwargs.items()) + [("label", label)]))
|
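A self-contained illustration of the junction detection used above (find where the spacing of ``w`` jumps by many standard deviations, then split there); the wavelength ranges are made up and the plotting helper itself is not needed:

import numpy as np

# Two ranges separated by a gap, as typically produced by np.hstack.
w = np.hstack([np.linspace(400.0, 410.0, 101), np.linspace(450.0, 460.0, 101)])

dw = np.diff(w)
joints = np.argwhere(abs(dw - dw.mean()) > 10 * dw.std()).flatten() + 1
parts = np.split(w, joints)

print(len(parts))                 # 2
print(parts[0][-1], parts[1][0])  # 410.0 450.0 -- the gap is never drawn as a connecting line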
26,029 |
def load_command_table(self, _):
from ._client_factory import cf_synapse_client_workspace_factory
from ._client_factory import cf_synapse_client_operations_factory
from ._client_factory import cf_synapse_client_bigdatapool_factory
from ._client_factory import cf_synapse_client_sqlpool_factory
from ._client_factory import cf_synapse_client_ipfirewallrules_factory
from ._client_factory import cf_synapse_client_cmk_factory
from ._client_factory import cf_synapse_client_sqlpool_sensitivity_labels_factory
from ._client_factory import cf_synapse_client_restorable_dropped_sqlpools_factory
from ._client_factory import cf_synapse_client_sqlpool_transparent_data_encryptions_factory
from ._client_factory import cf_synapse_client_sqlpool_security_alert_policies_factory
from ._client_factory import cf_synapse_client_sqlpool_blob_auditing_policies_factory
from ._client_factory import cf_synapse_client_managed_identity_sqlcontrol_factory
from ._client_factory import cf_synapse_client_workspace_aad_admins_factory
from ._client_factory import cf_synapse_client_sqlserver_blob_auditing_policies_factory
from ._client_factory import cf_synapse_client_integrationruntimes_factory
from ._client_factory import cf_synapse_client_integrationruntimeauthkeys_factory
from ._client_factory import cf_synapse_client_integrationruntimemonitoringdata_factory
from ._client_factory import cf_synapse_client_integrationruntimenodeipaddress_factory
from ._client_factory import cf_synapse_client_integrationruntimenodes_factory
from ._client_factory import cf_synapse_client_integrationruntimecredentials_factory
from ._client_factory import cf_synapse_client_integrationruntimeconnectioninfos_factory
from ._client_factory import cf_synapse_client_integrationruntimestatus_factory
from ._client_factory import cf_kusto_pool
from ._client_factory import cf_kusto_script
from ._client_factory import cf_kusto_scripts
def get_custom_sdk(custom_module, client_factory):
return CliCommandType(
operations_tmpl='azure.cli.command_modules.synapse.manual.operations.{}#'.format(custom_module) + '{}',
client_factory=client_factory,
)
synapse_workspace_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspacesOperations.{}',
client_factory=cf_synapse_client_workspace_factory)
synapse_operations_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#Operations.{}',
client_factory=cf_synapse_client_operations_factory)
synapse_bigdatapool_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#BigDataPoolsOperations.{}',
client_factory=cf_synapse_client_bigdatapool_factory)
synapse_workspace_aad_admin_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspaceAadAdminsOperations.{}'
)
synapse_sqlpool_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolsOperations.{}',
client_factory=cf_synapse_client_sqlpool_factory)
# Classification operation
synapse_sqlpool_sensitivity_labels_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolSensitivityLabelsOperations.{}',
client_factory=cf_synapse_client_sqlpool_sensitivity_labels_factory)
# List deleted
synapse_restorable_dropped_sqlpools_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#RestorableDroppedSqlPoolsOperations.{}',
client_factory=cf_synapse_client_restorable_dropped_sqlpools_factory)
# Tde operation
synapse_sqlpool_transparent_data_encryptions_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolTransparentDataEncryptionsOperations.{}',
client_factory=cf_synapse_client_sqlpool_transparent_data_encryptions_factory)
# Threat policy operation
synapse_sqlpool_security_alert_policies_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolSecurityAlertPoliciesOperations.{}',
client_factory=cf_synapse_client_sqlpool_security_alert_policies_factory)
# Audit policy operation
synapse_sqlpool_blob_auditing_policies_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolBlobAuditingPoliciesOperations.{}',
client_factory=cf_synapse_client_sqlpool_blob_auditing_policies_factory)
# Workspace managed sql server audit policy operation
synapse_workspace_managed_sqlserver_blob_auditing_policies_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspaceManagedSqlServerBlobAuditingPoliciesOperations.{}',
client_factory=cf_synapse_client_sqlserver_blob_auditing_policies_factory)
synapse_firewallrules_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IpFirewallRulesOperations.{}',
client_factory=cf_synapse_client_ipfirewallrules_factory)
synapse_cmk_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#KeysOperations.{}',
client_factory=cf_synapse_client_cmk_factory)
synapse_managedidentitysqlcontrol_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspaceManagedIdentitySqlControlSettingsOperations.{}',
client_factory=cf_synapse_client_managed_identity_sqlcontrol_factory)
synapse_integrationruntimes_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimesOperations.{}',
client_factory=cf_synapse_client_integrationruntimes_factory)
synapse_integrationruntimeauthkeys_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeAuthKeysOperations.{}',
client_factory=cf_synapse_client_integrationruntimeauthkeys_factory)
synapse_integrationruntimemonitoringdata_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeMonitoringDataOperations.{}',
client_factory=cf_synapse_client_integrationruntimemonitoringdata_factory)
synapse_integrationruntimenodes_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeNodesOperations.{}',
client_factory=cf_synapse_client_integrationruntimenodes_factory)
synapse_integrationruntimenodeipaddress_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeNodeIpAddressOperations.{}',
client_factory=cf_synapse_client_integrationruntimenodeipaddress_factory)
synapse_integrationruntimecredentials_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeCredentialsOperations.{}',
client_factory=cf_synapse_client_integrationruntimecredentials_factory)
synapse_integrationruntimeconnectioninfos_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeConnectionInfosOperations.{}',
client_factory=cf_synapse_client_integrationruntimeconnectioninfos_factory)
synapse_integrationruntimestatus_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeStatusOperations.{}',
client_factory=cf_synapse_client_integrationruntimestatus_factory)
synapse_spark_session_sdk = CliCommandType(
operations_tmpl='azure.synapse.spark.operations#SparkSessionOperations.{}',
client_factory=None)
synapse_spark_batch_sdk = CliCommandType(
operations_tmpl='azure.synapse.spark.operations#SparkBatchOperations.{}',
client_factory=None)
synapse_role_assignment_sdk = CliCommandType(
operations_tmpl='azure.synapse.accesscontrol.operations#RoleAssignmentsOperations.{}',
client_factory=None)
synapse_role_definitions_sdk = CliCommandType(
operations_tmpl='azure.synapse.accesscontrol.operations#RoleDefinitionsOperations.{}',
client_factory=None)
synapse_linked_service_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#LinkedServiceOperations.{}',
client_factory=None)
synapse_dataset_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#DatasetOperations.{}',
client_factory=None)
synapse_pipeline_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#PipelineOperations.{}',
client_factory=None)
synapse_pipeline_run_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#PipelineRunOperations.{}',
client_factory=None)
synapse_trigger_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#TriggerOperations.{}',
client_factory=None)
synapse_data_flow_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#DataFlowOperations.{}',
client_factory=None)
synapse_trigger_run_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#TriggerRunOperations.{}',
client_factory=None)
synapse_notebook_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#NotebookOperations.{}',
client_factory=None)
synapse_library_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#LibraryOperations.{}',
client_factory=None)
synapse_managed_private_endpoints_sdk = CliCommandType(
operations_tmpl='azure.synapse.managedprivateendpoints.operations#ManagedPrivateEndpoints.{}',
client_factory=None)
synapse_spark_job_definition_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#SparkJobDefinitionOperations.{}',
client_factory=None)
synapse_sql_script_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#SqlScriptOperations.{}',
client_factory=None)
synapse_kusto_pool_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations._kusto_pools_operations#KustoPoolsOperations.{}',
client_factory=cf_kusto_pool,
)
synapse_kusto_script_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#KqlScriptOperations.{}',
client_factory=cf_kusto_script,
)
synapse_link_connection_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#linkconnectionOperations.{}',
client_factory=None,
)
# Management Plane Commands --Workspace
with self.command_group('synapse workspace', command_type=synapse_workspace_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_workspace_factory),
client_factory=cf_synapse_client_workspace_factory) as g:
g.show_command('show', 'get')
g.custom_command('list', 'list_workspaces')
g.custom_command('create', 'create_workspace', supports_no_wait=True)
g.custom_command('update', 'update_workspace', supports_no_wait=True)
g.custom_command('check-name', 'custom_check_name_availability',
command_type=synapse_operations_sdk,
client_factory=cf_synapse_client_operations_factory)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.custom_command('activate', 'activate_workspace', command_type=synapse_cmk_sdk, client_factory=cf_synapse_client_cmk_factory, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --SparkPool
with self.command_group('synapse spark pool', command_type=synapse_bigdatapool_sdk,
custom_command_type=get_custom_sdk('sparkpool', cf_synapse_client_bigdatapool_factory),
client_factory=cf_synapse_client_bigdatapool_factory) as g:
g.custom_show_command('show', 'get_spark_pool')
g.command('list', 'list_by_workspace')
g.custom_command('create', 'create_spark_pool', supports_no_wait=True)
g.custom_command('update', 'update_spark_pool', supports_no_wait=True)
g.custom_command('delete', 'delete_spark_pool', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --SqlPool
with self.command_group('synapse sql pool', command_type=synapse_sqlpool_sdk,
custom_command_type=get_custom_sdk('sqlpool', cf_synapse_client_sqlpool_factory),
client_factory=cf_synapse_client_sqlpool_factory) as g:
g.show_command('show', 'get')
g.command('list', 'list_by_workspace')
g.custom_command('create', 'create_sql_pool', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.custom_command('update', 'update_sql_pool')
g.command('pause', 'begin_pause')
g.command('resume', 'begin_resume')
g.custom_command('restore', 'restore_sql_pool')
g.custom_command('show-connection-string', 'sql_pool_show_connection_string')
g.wait_command('wait')
# Management Plane Commands --SqlPool list-deleted
with self.command_group('synapse sql pool', command_type=synapse_restorable_dropped_sqlpools_sdk,
client_factory=cf_synapse_client_restorable_dropped_sqlpools_factory) as g:
g.command('list-deleted', 'list_by_workspace')
# Management Plane Commands --SqlPool Classification
with self.command_group('synapse sql pool classification', command_type=synapse_sqlpool_sensitivity_labels_sdk,
custom_command_type=get_custom_sdk('sqlpoolsensitivitylabel',
cf_synapse_client_sqlpool_sensitivity_labels_factory),
client_factory=cf_synapse_client_sqlpool_sensitivity_labels_factory) as g:
g.custom_show_command('show', 'sqlpool_sensitivity_label_show')
g.command('list', 'list_current')
g.custom_command('create', 'sqlpool_sensitivity_label_create')
g.command('delete', 'delete')
g.custom_command('update', 'sqlpool_sensitivity_label_update')
with self.command_group('synapse sql pool classification recommendation',
command_type=synapse_sqlpool_sensitivity_labels_sdk,
custom_command_type=get_custom_sdk('sqlpoolsensitivitylabel',
cf_synapse_client_sqlpool_sensitivity_labels_factory),
client_factory=cf_synapse_client_sqlpool_sensitivity_labels_factory) as g:
g.command('list', 'list_recommended')
g.command('enable', 'enable_recommendation')
g.command('disable', 'disable_recommendation')
# Management Plane Commands --SqlPool Tde
with self.command_group('synapse sql pool tde', command_type=synapse_sqlpool_transparent_data_encryptions_sdk,
custom_command_type=get_custom_sdk('sqlpooltde',
cf_synapse_client_sqlpool_transparent_data_encryptions_factory),
client_factory=cf_synapse_client_sqlpool_transparent_data_encryptions_factory) as g:
g.custom_command('set', 'create_or_update')
g.show_command('show', 'get')
# Management Plane Commands --SqlPool Threat-policy
with self.command_group('synapse sql pool threat-policy', command_type=synapse_sqlpool_security_alert_policies_sdk,
custom_command_type=get_custom_sdk('sqlpoolsecurityalertpolicy',
cf_synapse_client_sqlpool_security_alert_policies_factory),
client_factory=cf_synapse_client_sqlpool_security_alert_policies_factory) as g:
g.show_command('show', 'get')
g.generic_update_command('update', custom_func_name='sqlpool_security_alert_policy_update')
# Management Plane Commands --SqlPool Audit-policy
with self.command_group('synapse sql pool audit-policy', command_type=synapse_sqlpool_blob_auditing_policies_sdk,
custom_command_type=get_custom_sdk('sqlpoolblobauditingpolicy',
cf_synapse_client_sqlpool_blob_auditing_policies_factory),
client_factory=cf_synapse_client_sqlpool_blob_auditing_policies_factory) as g:
g.custom_show_command('show', 'sqlpool_audit_policy_show')
g.generic_update_command('update', custom_func_name='sqlpool_blob_auditing_policy_update')
# Management Plane Commands --Sql Ad-Admin
with self.command_group('synapse sql ad-admin', command_type=synapse_workspace_aad_admin_sdk,
custom_command_type=get_custom_sdk('workspacesqlaadadmin',
cf_synapse_client_workspace_aad_admins_factory),
client_factory=cf_synapse_client_workspace_aad_admins_factory) as g:
g.show_command('show', 'get')
g.custom_command('create', 'create_workspace_sql_aad_admin', supports_no_wait=True)
g.generic_update_command('update', setter_name='begin_create_or_update', custom_func_name='update_workspace_sql_aad_admin',
setter_arg_name='aad_admin_info', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --Sql audit-policy
with self.command_group('synapse sql audit-policy',
command_type=synapse_workspace_managed_sqlserver_blob_auditing_policies_sdk,
custom_command_type=get_custom_sdk('sqlpoolblobauditingpolicy',
cf_synapse_client_sqlserver_blob_auditing_policies_factory),
client_factory=cf_synapse_client_sqlserver_blob_auditing_policies_factory) as g:
g.custom_show_command('show', 'workspace_audit_policy_show')
g.generic_update_command('update', setter_name='begin_create_or_update', custom_func_name='sqlserver_blob_auditing_policy_update',
supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --FirewallRule
with self.command_group('synapse workspace firewall-rule', command_type=synapse_firewallrules_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_ipfirewallrules_factory),
client_factory=cf_synapse_client_ipfirewallrules_factory) as g:
g.command('list', 'list_by_workspace')
g.show_command('show', 'get')
g.custom_command('create', 'create_firewall_rule', supports_no_wait=True)
g.custom_command('update', 'update_firewall_rule', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --IntegrationRuntime
with self.command_group('synapse integration-runtime', command_type=synapse_integrationruntimes_sdk,
custom_command_type=get_custom_sdk('integrationruntime', cf_synapse_client_integrationruntimes_factory),
client_factory=cf_synapse_client_integrationruntimes_factory) as g:
g.command('list', 'list_by_workspace')
g.show_command('show', 'get')
g.custom_command('create', 'create', deprecate_info=g.deprecate(redirect='managed create, self-hosted create'), supports_no_wait=True)
g.custom_command('managed create', 'Managed_Create', supports_no_wait=True)
g.custom_command('self-hosted create', 'Selfhosted_Create', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.custom_command('update', 'update')
g.command('start', 'begin_start', supports_no_wait=True)
g.command('stop', 'begin_stop', confirmation=True, supports_no_wait=True)
g.command('upgrade', 'upgrade')
g.command('list-auth-key', 'list', command_type=synapse_integrationruntimeauthkeys_sdk,
client_factory=cf_synapse_client_integrationruntimeauthkeys_factory)
g.custom_command('regenerate-auth-key', 'regenerate', command_type=synapse_integrationruntimeauthkeys_sdk,
client_factory=cf_synapse_client_integrationruntimeauthkeys_factory)
g.command('get-monitoring-data', 'list', command_type=synapse_integrationruntimemonitoringdata_sdk,
client_factory=cf_synapse_client_integrationruntimemonitoringdata_factory)
g.command('sync-credentials', 'sync', command_type=synapse_integrationruntimecredentials_sdk,
client_factory=cf_synapse_client_integrationruntimecredentials_factory)
g.command('get-connection-info', 'get', command_type=synapse_integrationruntimeconnectioninfos_sdk,
client_factory=cf_synapse_client_integrationruntimeconnectioninfos_factory)
g.command('get-status', 'get', command_type=synapse_integrationruntimestatus_sdk,
client_factory=cf_synapse_client_integrationruntimestatus_factory)
g.wait_command('wait')
# Management Plane Commands --Keys
with self.command_group('synapse workspace key', command_type=synapse_cmk_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_cmk_factory),
client_factory=cf_synapse_client_cmk_factory) as g:
g.command('list', 'list_by_workspace')
g.show_command('show', 'get')
g.custom_command('create', 'create_workspace_key', supports_no_wait=True)
g.command('delete', 'delete', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --Managed-Identity
with self.command_group('synapse workspace managed-identity', command_type=synapse_managedidentitysqlcontrol_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_managed_identity_sqlcontrol_factory),
client_factory=cf_synapse_client_managed_identity_sqlcontrol_factory) as g:
g.show_command('show-sql-access', 'get')
g.custom_command('grant-sql-access', 'grant_sql_access_to_managed_identity', supports_no_wait=True)
g.custom_command('revoke-sql-access', 'revoke_sql_access_to_managed_identity', supports_no_wait=True)
g.wait_command('wait')
with self.command_group('synapse integration-runtime-node', command_type=synapse_integrationruntimenodes_sdk,
custom_command_type=get_custom_sdk('integrationruntimenode',
cf_synapse_client_integrationruntimenodes_factory),
client_factory=cf_synapse_client_integrationruntimenodes_factory) as g:
g.show_command('show', 'get')
g.custom_command('update', 'update')
g.command('delete', 'delete', confirmation=True)
g.command('get-ip-address', 'get', command_type=synapse_integrationruntimenodeipaddress_sdk,
client_factory=cf_synapse_client_integrationruntimenodeipaddress_factory)
# Data Plane Commands --Spark batch opertions
with self.command_group('synapse spark job', command_type=synapse_spark_batch_sdk,
custom_command_type=get_custom_sdk('spark', None)) as g:
g.custom_command('submit', 'create_spark_batch_job')
g.custom_command('list', 'list_spark_batch_jobs')
g.custom_show_command('show', 'get_spark_batch_job')
g.custom_command('cancel', 'cancel_spark_batch_job', confirmation=True)
# Data Plane Commands --Spark session operations
with self.command_group('synapse spark session', synapse_spark_session_sdk,
custom_command_type=get_custom_sdk('spark', None)) as g:
g.custom_command('create', 'create_spark_session_job')
g.custom_command('list', 'list_spark_session_jobs')
g.custom_show_command('show', 'get_spark_session_job')
g.custom_command('cancel', 'cancel_spark_session_job', confirmation=True)
g.custom_command('reset-timeout', 'reset_timeout')
# Data Plane Commands --Spark session statements operations
with self.command_group('synapse spark statement', synapse_spark_session_sdk,
custom_command_type=get_custom_sdk('spark', None)) as g:
g.custom_command('invoke', 'create_spark_session_statement')
g.custom_command('list', 'list_spark_session_statements')
g.custom_show_command('show', 'get_spark_session_statement')
g.custom_command('cancel', 'cancel_spark_session_statement', confirmation=True)
# Data Plane Commands --Access control operations
with self.command_group('synapse role assignment', synapse_role_assignment_sdk,
custom_command_type=get_custom_sdk('accesscontrol', None)) as g:
g.custom_command('create', 'create_role_assignment')
g.custom_command('list', 'list_role_assignments')
g.custom_show_command('show', 'get_role_assignment_by_id')
g.custom_command('delete', 'delete_role_assignment', confirmation=True)
with self.command_group('synapse role definition', synapse_role_definitions_sdk,
custom_command_type=get_custom_sdk('accesscontrol', None)) as g:
g.custom_command('list', 'list_role_definitions')
g.custom_show_command('show', 'get_role_definition')
with self.command_group('synapse role scope', synapse_role_definitions_sdk,
custom_command_type=get_custom_sdk('accesscontrol', None)) as g:
g.custom_command('list', 'list_scopes')
# Data Plane Commands --Artifacts Linked service operations
with self.command_group('synapse linked-service', synapse_linked_service_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_linked_service', supports_no_wait=True)
g.custom_command('set', 'create_or_update_linked_service', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_linked_service', supports_no_wait=True)
g.custom_command('list', 'list_linked_service')
g.custom_show_command('show', 'get_linked_service')
g.custom_command('delete', 'delete_linked_service', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts dataset operations
with self.command_group('synapse dataset', synapse_dataset_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_dataset', supports_no_wait=True)
g.custom_command('set', 'create_or_update_dataset', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_dataset', supports_no_wait=True)
g.custom_command('list', 'list_datasets')
g.custom_show_command('show', 'get_dataset')
g.custom_command('delete', 'delete_dataset', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts pipeline operations
with self.command_group('synapse pipeline', synapse_pipeline_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_pipeline', supports_no_wait=True)
g.custom_command('set', 'create_or_update_pipeline', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_pipeline', supports_no_wait=True)
g.custom_command('list', 'list_pipelines')
g.custom_show_command('show', 'get_pipeline')
g.custom_command('delete', 'delete_pipeline', confirmation=True, supports_no_wait=True)
g.custom_command('create-run', 'create_pipeline_run')
# Data Plane Commands --Artifacts pipeline run operations
with self.command_group('synapse pipeline-run', synapse_pipeline_run_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('query-by-workspace', 'query_pipeline_runs_by_workspace')
g.custom_show_command('show', 'get_pipeline_run')
g.custom_command('cancel', 'cancel_pipeline_run', confirmation=True)
with self.command_group('synapse activity-run', synapse_pipeline_run_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('query-by-pipeline-run', 'query_activity_runs')
# Data Plane Commands --Artifacts trigger operations
with self.command_group('synapse trigger', synapse_trigger_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_trigger', supports_no_wait=True)
g.custom_command('set', 'create_or_update_trigger', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_trigger', supports_no_wait=True)
g.custom_command('list', 'list_triggers')
g.custom_show_command('show', 'get_trigger')
g.custom_command('delete', 'delete_trigger', confirmation=True, supports_no_wait=True)
g.custom_command('subscribe-to-event', 'subscribe_trigger_to_events', supports_no_wait=True)
g.custom_command('get-event-subscription-status', 'get_event_subscription_status')
g.custom_command('unsubscribe-from-event', 'unsubscribe_trigger_from_events', supports_no_wait=True)
g.custom_command('start', 'start_trigger', supports_no_wait=True)
g.custom_command('stop', 'stop_trigger', supports_no_wait=True)
g.custom_wait_command('wait', 'get_trigger')
# Data Plane Commands --Artifacts trigger run operations
with self.command_group('synapse trigger-run', synapse_trigger_run_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('rerun', 'rerun_trigger')
g.custom_command('cancel', 'cancel_trigger')
g.custom_command('query-by-workspace', 'query_trigger_runs_by_workspace')
# Data Plane Commands --Artifacts data flow operations
with self.command_group('synapse data-flow', synapse_data_flow_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_data_flow', supports_no_wait=True)
g.custom_command('set', 'create_or_update_data_flow', supports_no_wait=True)
g.custom_command('list', 'list_data_flows')
g.custom_show_command('show', 'get_data_flow')
g.custom_command('delete', 'delete_data_flow', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts notebook operations
with self.command_group('synapse notebook', synapse_notebook_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_notebook', supports_no_wait=True)
g.custom_command('set', 'create_or_update_notebook', supports_no_wait=True)
g.custom_command('import', 'create_or_update_notebook', supports_no_wait=True)
g.custom_command('list', 'list_notebooks')
g.custom_show_command('show', 'get_notebook')
g.custom_command('export', 'export_notebook')
g.custom_command('delete', 'delete_notebook', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts library operations
with self.command_group('synapse workspace-package', synapse_library_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('upload', 'upload_workspace_package')
g.custom_command('upload-batch', 'workspace_package_upload_batch')
g.custom_command('list', 'list_workspace_package')
g.custom_show_command('show', 'get_workspace_package')
g.custom_command('delete', 'delete_workspace_package', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Managed private endpoints operations
with self.command_group('synapse managed-private-endpoints', synapse_managed_private_endpoints_sdk,
custom_command_type=get_custom_sdk('managedprivateendpoints', None)) as g:
g.custom_show_command('show', 'get_Managed_private_endpoints')
g.custom_command('create', 'create_Managed_private_endpoints')
g.custom_command('list', 'list_Managed_private_endpoints')
g.custom_command('delete', 'delete_Managed_private_endpoints', confirmation=True)
# Data Plane Commands --Artifacts Spark job definitions operations
with self.command_group('synapse spark-job-definition', synapse_spark_job_definition_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('list', 'list_spark_job_definition')
g.custom_show_command('show', 'get_spark_job_definition')
g.custom_command('delete', 'delete_spark_job_definition', supports_no_wait=True)
g.custom_command('create', 'create_or_update_spark_job_definition', supports_no_wait=True)
g.custom_wait_command('wait', 'get_spark_job_definition')
g.custom_command('update', 'create_or_update_spark_job_definition', supports_no_wait=True)
with self.command_group('synapse sql-script', synapse_sql_script_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('list', 'list_sql_scripts')
g.custom_show_command('show', 'get_sql_script')
g.custom_command('delete', 'delete_sql_script', supports_no_wait=True)
g.custom_command('create', 'create_sql_script', supports_no_wait=True)
g.custom_wait_command('wait', 'get_sql_script')
g.custom_show_command('export', 'export_sql_script')
g.custom_command('import', 'create_sql_script', supports_no_wait=True)
with self.command_group('synapse', is_preview=True):
pass
# synapse kusto pool Commands --Managed kusto pool Commands
with self.command_group('synapse kusto pool',
command_type=synapse_kusto_pool_sdk,
custom_command_type=get_custom_sdk('kustopool',
cf_kusto_pool),
client_factory=cf_kusto_pool) as g:
g.custom_command('create', 'synapse_kusto_pool_create', supports_no_wait=True)
g.custom_command('update', 'synapse_kusto_pool_update', supports_no_wait=True)
g.custom_command('add-language-extension', 'synapse_kusto_pool_add_language_extension', supports_no_wait=True)
g.custom_command('detach-follower-database', 'synapse_kusto_pool_detach_follower_database', supports_no_wait=True)
g.custom_command('remove-language-extension', 'synapse_kusto_pool_remove_language_extension', supports_no_wait=True)
with self.command_group('synapse kql-script', command_type=synapse_kusto_script_sdk,
custom_command_type=get_custom_sdk('kustopool', cf_kusto_script),
client_factory=cf_kusto_script) as g:
g.custom_show_command('show', 'synapse_kusto_script_show')
g.custom_command('create', 'synapse_kusto_script_create', supports_no_wait=True)
g.custom_command('import', 'synapse_kusto_script_create', supports_no_wait=True)
g.custom_command('delete', 'synapse_kusto_script_delete', supports_no_wait=True, confirmation=True)
g.custom_command('list', 'synapse_kusto_script_list', client_factory=cf_kusto_scripts)
g.custom_command('export', 'synapse_kusto_script_export')
g.custom_wait_command('wait', 'synapse_kusto_script_show')
with self.command_group('synapse link-connection', synapse_link_connection_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('list', 'list_link_connection')
g.custom_show_command('show', 'get_link_connection')
g.custom_command('delete', 'delete_link_connection')
g.custom_command('create', 'create_or_update_link_connection')
g.custom_command('update', 'create_or_update_link_connection')
g.custom_command('get-status', 'get_link_connection_status')
g.custom_command('start ', 'start_link_connection')
g.custom_command('stop', 'stop_link_connection')
g.custom_command('list-link-tables', 'synapse_list_link_table')
g.custom_command('edit-link-tables', 'synapse_edit_link_table')
g.custom_command('get-link-tables-status', 'synapse_get_link_tables_status')
g.custom_command('update-landing-zone-credential', 'synapse_update_landing_zone_credential')
|
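For readers unfamiliar with the pattern, a minimal, hedged sketch of how a ``CliCommandType`` plus ``command_group`` registration looks outside the Synapse module, assuming azure-cli-core is installed; the loader name, command names, and operation path are placeholders:

from azure.cli.core import AzCommandsLoader
from azure.cli.core.commands import CliCommandType


class ExampleCommandsLoader(AzCommandsLoader):
    def load_command_table(self, _):
        # Route 'example widget' commands to a (placeholder) operations class.
        example_sdk = CliCommandType(
            operations_tmpl='example.sdk.operations#WidgetsOperations.{}',
            client_factory=None)
        with self.command_group('example widget', command_type=example_sdk) as g:
            g.command('list', 'list_by_workspace')
            g.show_command('show', 'get')
        return self.command_table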
def load_command_table(self, _):
from ._client_factory import cf_synapse_client_workspace_factory
from ._client_factory import cf_synapse_client_operations_factory
from ._client_factory import cf_synapse_client_bigdatapool_factory
from ._client_factory import cf_synapse_client_sqlpool_factory
from ._client_factory import cf_synapse_client_ipfirewallrules_factory
from ._client_factory import cf_synapse_client_cmk_factory
from ._client_factory import cf_synapse_client_sqlpool_sensitivity_labels_factory
from ._client_factory import cf_synapse_client_restorable_dropped_sqlpools_factory
from ._client_factory import cf_synapse_client_sqlpool_transparent_data_encryptions_factory
from ._client_factory import cf_synapse_client_sqlpool_security_alert_policies_factory
from ._client_factory import cf_synapse_client_sqlpool_blob_auditing_policies_factory
from ._client_factory import cf_synapse_client_managed_identity_sqlcontrol_factory
from ._client_factory import cf_synapse_client_workspace_aad_admins_factory
from ._client_factory import cf_synapse_client_sqlserver_blob_auditing_policies_factory
from ._client_factory import cf_synapse_client_integrationruntimes_factory
from ._client_factory import cf_synapse_client_integrationruntimeauthkeys_factory
from ._client_factory import cf_synapse_client_integrationruntimemonitoringdata_factory
from ._client_factory import cf_synapse_client_integrationruntimenodeipaddress_factory
from ._client_factory import cf_synapse_client_integrationruntimenodes_factory
from ._client_factory import cf_synapse_client_integrationruntimecredentials_factory
from ._client_factory import cf_synapse_client_integrationruntimeconnectioninfos_factory
from ._client_factory import cf_synapse_client_integrationruntimestatus_factory
from ._client_factory import cf_kusto_pool
from ._client_factory import cf_kusto_script
from ._client_factory import cf_kusto_scripts
def get_custom_sdk(custom_module, client_factory):
return CliCommandType(
operations_tmpl='azure.cli.command_modules.synapse.manual.operations.{}#'.format(custom_module) + '{}',
client_factory=client_factory,
)
synapse_workspace_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspacesOperations.{}',
client_factory=cf_synapse_client_workspace_factory)
synapse_operations_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#Operations.{}',
client_factory=cf_synapse_client_operations_factory)
synapse_bigdatapool_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#BigDataPoolsOperations.{}',
client_factory=cf_synapse_client_bigdatapool_factory)
synapse_workspace_aad_admin_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspaceAadAdminsOperations.{}'
)
synapse_sqlpool_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolsOperations.{}',
client_factory=cf_synapse_client_sqlpool_factory)
# Classification operation
synapse_sqlpool_sensitivity_labels_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolSensitivityLabelsOperations.{}',
client_factory=cf_synapse_client_sqlpool_sensitivity_labels_factory)
# List deleted
synapse_restorable_dropped_sqlpools_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#RestorableDroppedSqlPoolsOperations.{}',
client_factory=cf_synapse_client_restorable_dropped_sqlpools_factory)
# Tde operation
synapse_sqlpool_transparent_data_encryptions_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolTransparentDataEncryptionsOperations.{}',
client_factory=cf_synapse_client_sqlpool_transparent_data_encryptions_factory)
# Threat policy operation
synapse_sqlpool_security_alert_policies_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolSecurityAlertPoliciesOperations.{}',
client_factory=cf_synapse_client_sqlpool_security_alert_policies_factory)
# Audit policy operation
synapse_sqlpool_blob_auditing_policies_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#SqlPoolBlobAuditingPoliciesOperations.{}',
client_factory=cf_synapse_client_sqlpool_blob_auditing_policies_factory)
# Workspace managed sql server audit policy operation
synapse_workspace_managed_sqlserver_blob_auditing_policies_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspaceManagedSqlServerBlobAuditingPoliciesOperations.{}',
client_factory=cf_synapse_client_sqlserver_blob_auditing_policies_factory)
synapse_firewallrules_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IpFirewallRulesOperations.{}',
client_factory=cf_synapse_client_ipfirewallrules_factory)
synapse_cmk_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#KeysOperations.{}',
client_factory=cf_synapse_client_cmk_factory)
synapse_managedidentitysqlcontrol_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#WorkspaceManagedIdentitySqlControlSettingsOperations.{}',
client_factory=cf_synapse_client_managed_identity_sqlcontrol_factory)
synapse_integrationruntimes_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimesOperations.{}',
client_factory=cf_synapse_client_integrationruntimes_factory)
synapse_integrationruntimeauthkeys_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeAuthKeysOperations.{}',
client_factory=cf_synapse_client_integrationruntimeauthkeys_factory)
synapse_integrationruntimemonitoringdata_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeMonitoringDataOperations.{}',
client_factory=cf_synapse_client_integrationruntimemonitoringdata_factory)
synapse_integrationruntimenodes_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeNodesOperations.{}',
client_factory=cf_synapse_client_integrationruntimenodes_factory)
synapse_integrationruntimenodeipaddress_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeNodeIpAddressOperations.{}',
client_factory=cf_synapse_client_integrationruntimenodeipaddress_factory)
synapse_integrationruntimecredentials_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeCredentialsOperations.{}',
client_factory=cf_synapse_client_integrationruntimecredentials_factory)
synapse_integrationruntimeconnectioninfos_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeConnectionInfosOperations.{}',
client_factory=cf_synapse_client_integrationruntimeconnectioninfos_factory)
synapse_integrationruntimestatus_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations#IntegrationRuntimeStatusOperations.{}',
client_factory=cf_synapse_client_integrationruntimestatus_factory)
synapse_spark_session_sdk = CliCommandType(
operations_tmpl='azure.synapse.spark.operations#SparkSessionOperations.{}',
client_factory=None)
synapse_spark_batch_sdk = CliCommandType(
operations_tmpl='azure.synapse.spark.operations#SparkBatchOperations.{}',
client_factory=None)
synapse_role_assignment_sdk = CliCommandType(
operations_tmpl='azure.synapse.accesscontrol.operations#RoleAssignmentsOperations.{}',
client_factory=None)
synapse_role_definitions_sdk = CliCommandType(
operations_tmpl='azure.synapse.accesscontrol.operations#RoleDefinitionsOperations.{}',
client_factory=None)
synapse_linked_service_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#LinkedServiceOperations.{}',
client_factory=None)
synapse_dataset_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#DatasetOperations.{}',
client_factory=None)
synapse_pipeline_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#PipelineOperations.{}',
client_factory=None)
synapse_pipeline_run_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#PipelineRunOperations.{}',
client_factory=None)
synapse_trigger_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#TriggerOperations.{}',
client_factory=None)
synapse_data_flow_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#DataFlowOperations.{}',
client_factory=None)
synapse_trigger_run_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#TriggerRunOperations.{}',
client_factory=None)
synapse_notebook_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#NotebookOperations.{}',
client_factory=None)
synapse_library_sdk = CliCommandType(
operation_tmpl='azure.synapse.artifacts.operations#LibraryOperations.{}',
client_factory=None)
synapse_managed_private_endpoints_sdk = CliCommandType(
operations_tmpl='azure.synapse.managedprivateendpoints.operations#ManagedPrivateEndpoints.{}',
client_factory=None)
synapse_spark_job_definition_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#SparkJobDefinitionOperations.{}',
client_factory=None)
synapse_sql_script_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#SqlScriptOperations.{}',
client_factory=None)
synapse_kusto_pool_sdk = CliCommandType(
operations_tmpl='azure.mgmt.synapse.operations._kusto_pools_operations#KustoPoolsOperations.{}',
client_factory=cf_kusto_pool,
)
synapse_kusto_script_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#KqlScriptOperations.{}',
client_factory=cf_kusto_script,
)
synapse_link_connection_sdk = CliCommandType(
operations_tmpl='azure.synapse.artifacts.operations#linkconnectionOperations.{}',
client_factory=cf_synapse_link_connection,
)
# Management Plane Commands --Workspace
with self.command_group('synapse workspace', command_type=synapse_workspace_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_workspace_factory),
client_factory=cf_synapse_client_workspace_factory) as g:
g.show_command('show', 'get')
g.custom_command('list', 'list_workspaces')
g.custom_command('create', 'create_workspace', supports_no_wait=True)
g.custom_command('update', 'update_workspace', supports_no_wait=True)
g.custom_command('check-name', 'custom_check_name_availability',
command_type=synapse_operations_sdk,
client_factory=cf_synapse_client_operations_factory)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.custom_command('activate', 'activate_workspace', command_type=synapse_cmk_sdk, client_factory=cf_synapse_client_cmk_factory, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --SparkPool
with self.command_group('synapse spark pool', command_type=synapse_bigdatapool_sdk,
custom_command_type=get_custom_sdk('sparkpool', cf_synapse_client_bigdatapool_factory),
client_factory=cf_synapse_client_bigdatapool_factory) as g:
g.custom_show_command('show', 'get_spark_pool')
g.command('list', 'list_by_workspace')
g.custom_command('create', 'create_spark_pool', supports_no_wait=True)
g.custom_command('update', 'update_spark_pool', supports_no_wait=True)
g.custom_command('delete', 'delete_spark_pool', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --SqlPool
with self.command_group('synapse sql pool', command_type=synapse_sqlpool_sdk,
custom_command_type=get_custom_sdk('sqlpool', cf_synapse_client_sqlpool_factory),
client_factory=cf_synapse_client_sqlpool_factory) as g:
g.show_command('show', 'get')
g.command('list', 'list_by_workspace')
g.custom_command('create', 'create_sql_pool', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.custom_command('update', 'update_sql_pool')
g.command('pause', 'begin_pause')
g.command('resume', 'begin_resume')
g.custom_command('restore', 'restore_sql_pool')
g.custom_command('show-connection-string', 'sql_pool_show_connection_string')
g.wait_command('wait')
# Management Plane Commands --SqlPool list-deleted
with self.command_group('synapse sql pool', command_type=synapse_restorable_dropped_sqlpools_sdk,
client_factory=cf_synapse_client_restorable_dropped_sqlpools_factory) as g:
g.command('list-deleted', 'list_by_workspace')
# Management Plane Commands --SqlPool Classification
with self.command_group('synapse sql pool classification', command_type=synapse_sqlpool_sensitivity_labels_sdk,
custom_command_type=get_custom_sdk('sqlpoolsensitivitylabel',
cf_synapse_client_sqlpool_sensitivity_labels_factory),
client_factory=cf_synapse_client_sqlpool_sensitivity_labels_factory) as g:
g.custom_show_command('show', 'sqlpool_sensitivity_label_show')
g.command('list', 'list_current')
g.custom_command('create', 'sqlpool_sensitivity_label_create')
g.command('delete', 'delete')
g.custom_command('update', 'sqlpool_sensitivity_label_update')
with self.command_group('synapse sql pool classification recommendation',
command_type=synapse_sqlpool_sensitivity_labels_sdk,
custom_command_type=get_custom_sdk('sqlpoolsensitivitylabel',
cf_synapse_client_sqlpool_sensitivity_labels_factory),
client_factory=cf_synapse_client_sqlpool_sensitivity_labels_factory) as g:
g.command('list', 'list_recommended')
g.command('enable', 'enable_recommendation')
g.command('disable', 'disable_recommendation')
# Management Plane Commands --SqlPool Tde
with self.command_group('synapse sql pool tde', command_type=synapse_sqlpool_transparent_data_encryptions_sdk,
custom_command_type=get_custom_sdk('sqlpooltde',
cf_synapse_client_sqlpool_transparent_data_encryptions_factory),
client_factory=cf_synapse_client_sqlpool_transparent_data_encryptions_factory) as g:
g.custom_command('set', 'create_or_update')
g.show_command('show', 'get')
# Management Plane Commands --SqlPool Threat-policy
with self.command_group('synapse sql pool threat-policy', command_type=synapse_sqlpool_security_alert_policies_sdk,
custom_command_type=get_custom_sdk('sqlpoolsecurityalertpolicy',
cf_synapse_client_sqlpool_security_alert_policies_factory),
client_factory=cf_synapse_client_sqlpool_security_alert_policies_factory) as g:
g.show_command('show', 'get')
g.generic_update_command('update', custom_func_name='sqlpool_security_alert_policy_update')
# Management Plane Commands --SqlPool Audit-policy
with self.command_group('synapse sql pool audit-policy', command_type=synapse_sqlpool_blob_auditing_policies_sdk,
custom_command_type=get_custom_sdk('sqlpoolblobauditingpolicy',
cf_synapse_client_sqlpool_blob_auditing_policies_factory),
client_factory=cf_synapse_client_sqlpool_blob_auditing_policies_factory) as g:
g.custom_show_command('show', 'sqlpool_audit_policy_show')
g.generic_update_command('update', custom_func_name='sqlpool_blob_auditing_policy_update')
# Management Plane Commands --Sql Ad-Admin
with self.command_group('synapse sql ad-admin', command_type=synapse_workspace_aad_admin_sdk,
custom_command_type=get_custom_sdk('workspacesqlaadadmin',
cf_synapse_client_workspace_aad_admins_factory),
client_factory=cf_synapse_client_workspace_aad_admins_factory) as g:
g.show_command('show', 'get')
g.custom_command('create', 'create_workspace_sql_aad_admin', supports_no_wait=True)
g.generic_update_command('update', setter_name='begin_create_or_update', custom_func_name='update_workspace_sql_aad_admin',
setter_arg_name='aad_admin_info', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --Sql audit-policy
with self.command_group('synapse sql audit-policy',
command_type=synapse_workspace_managed_sqlserver_blob_auditing_policies_sdk,
custom_command_type=get_custom_sdk('sqlpoolblobauditingpolicy',
cf_synapse_client_sqlserver_blob_auditing_policies_factory),
client_factory=cf_synapse_client_sqlserver_blob_auditing_policies_factory) as g:
g.custom_show_command('show', 'workspace_audit_policy_show')
g.generic_update_command('update', setter_name='begin_create_or_update', custom_func_name='sqlserver_blob_auditing_policy_update',
supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --FirewallRule
with self.command_group('synapse workspace firewall-rule', command_type=synapse_firewallrules_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_ipfirewallrules_factory),
client_factory=cf_synapse_client_ipfirewallrules_factory) as g:
g.command('list', 'list_by_workspace')
g.show_command('show', 'get')
g.custom_command('create', 'create_firewall_rule', supports_no_wait=True)
g.custom_command('update', 'update_firewall_rule', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --IntegrationRuntime
with self.command_group('synapse integration-runtime', command_type=synapse_integrationruntimes_sdk,
custom_command_type=get_custom_sdk('integrationruntime', cf_synapse_client_integrationruntimes_factory),
client_factory=cf_synapse_client_integrationruntimes_factory) as g:
g.command('list', 'list_by_workspace')
g.show_command('show', 'get')
g.custom_command('create', 'create', deprecate_info=g.deprecate(redirect='managed create, self-hosted create'), supports_no_wait=True)
g.custom_command('managed create', 'Managed_Create', supports_no_wait=True)
g.custom_command('self-hosted create', 'Selfhosted_Create', supports_no_wait=True)
g.command('delete', 'begin_delete', confirmation=True, supports_no_wait=True)
g.custom_command('update', 'update')
g.command('start', 'begin_start', supports_no_wait=True)
g.command('stop', 'begin_stop', confirmation=True, supports_no_wait=True)
g.command('upgrade', 'upgrade')
g.command('list-auth-key', 'list', command_type=synapse_integrationruntimeauthkeys_sdk,
client_factory=cf_synapse_client_integrationruntimeauthkeys_factory)
g.custom_command('regenerate-auth-key', 'regenerate', command_type=synapse_integrationruntimeauthkeys_sdk,
client_factory=cf_synapse_client_integrationruntimeauthkeys_factory)
g.command('get-monitoring-data', 'list', command_type=synapse_integrationruntimemonitoringdata_sdk,
client_factory=cf_synapse_client_integrationruntimemonitoringdata_factory)
g.command('sync-credentials', 'sync', command_type=synapse_integrationruntimecredentials_sdk,
client_factory=cf_synapse_client_integrationruntimecredentials_factory)
g.command('get-connection-info', 'get', command_type=synapse_integrationruntimeconnectioninfos_sdk,
client_factory=cf_synapse_client_integrationruntimeconnectioninfos_factory)
g.command('get-status', 'get', command_type=synapse_integrationruntimestatus_sdk,
client_factory=cf_synapse_client_integrationruntimestatus_factory)
g.wait_command('wait')
# Management Plane Commands --Keys
with self.command_group('synapse workspace key', command_type=synapse_cmk_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_cmk_factory),
client_factory=cf_synapse_client_cmk_factory) as g:
g.command('list', 'list_by_workspace')
g.show_command('show', 'get')
g.custom_command('create', 'create_workspace_key', supports_no_wait=True)
g.command('delete', 'delete', confirmation=True, supports_no_wait=True)
g.wait_command('wait')
# Management Plane Commands --Managed-Identity
with self.command_group('synapse workspace managed-identity', command_type=synapse_managedidentitysqlcontrol_sdk,
custom_command_type=get_custom_sdk('workspace', cf_synapse_client_managed_identity_sqlcontrol_factory),
client_factory=cf_synapse_client_managed_identity_sqlcontrol_factory) as g:
g.show_command('show-sql-access', 'get')
g.custom_command('grant-sql-access', 'grant_sql_access_to_managed_identity', supports_no_wait=True)
g.custom_command('revoke-sql-access', 'revoke_sql_access_to_managed_identity', supports_no_wait=True)
g.wait_command('wait')
with self.command_group('synapse integration-runtime-node', command_type=synapse_integrationruntimenodes_sdk,
custom_command_type=get_custom_sdk('integrationruntimenode',
cf_synapse_client_integrationruntimenodes_factory),
client_factory=cf_synapse_client_integrationruntimenodes_factory) as g:
g.show_command('show', 'get')
g.custom_command('update', 'update')
g.command('delete', 'delete', confirmation=True)
g.command('get-ip-address', 'get', command_type=synapse_integrationruntimenodeipaddress_sdk,
client_factory=cf_synapse_client_integrationruntimenodeipaddress_factory)
    # Data Plane Commands --Spark batch operations
with self.command_group('synapse spark job', command_type=synapse_spark_batch_sdk,
custom_command_type=get_custom_sdk('spark', None)) as g:
g.custom_command('submit', 'create_spark_batch_job')
g.custom_command('list', 'list_spark_batch_jobs')
g.custom_show_command('show', 'get_spark_batch_job')
g.custom_command('cancel', 'cancel_spark_batch_job', confirmation=True)
# Data Plane Commands --Spark session operations
with self.command_group('synapse spark session', synapse_spark_session_sdk,
custom_command_type=get_custom_sdk('spark', None)) as g:
g.custom_command('create', 'create_spark_session_job')
g.custom_command('list', 'list_spark_session_jobs')
g.custom_show_command('show', 'get_spark_session_job')
g.custom_command('cancel', 'cancel_spark_session_job', confirmation=True)
g.custom_command('reset-timeout', 'reset_timeout')
# Data Plane Commands --Spark session statements operations
with self.command_group('synapse spark statement', synapse_spark_session_sdk,
custom_command_type=get_custom_sdk('spark', None)) as g:
g.custom_command('invoke', 'create_spark_session_statement')
g.custom_command('list', 'list_spark_session_statements')
g.custom_show_command('show', 'get_spark_session_statement')
g.custom_command('cancel', 'cancel_spark_session_statement', confirmation=True)
# Data Plane Commands --Access control operations
with self.command_group('synapse role assignment', synapse_role_assignment_sdk,
custom_command_type=get_custom_sdk('accesscontrol', None)) as g:
g.custom_command('create', 'create_role_assignment')
g.custom_command('list', 'list_role_assignments')
g.custom_show_command('show', 'get_role_assignment_by_id')
g.custom_command('delete', 'delete_role_assignment', confirmation=True)
with self.command_group('synapse role definition', synapse_role_definitions_sdk,
custom_command_type=get_custom_sdk('accesscontrol', None)) as g:
g.custom_command('list', 'list_role_definitions')
g.custom_show_command('show', 'get_role_definition')
with self.command_group('synapse role scope', synapse_role_definitions_sdk,
custom_command_type=get_custom_sdk('accesscontrol', None)) as g:
g.custom_command('list', 'list_scopes')
# Data Plane Commands --Artifacts Linked service operations
with self.command_group('synapse linked-service', synapse_linked_service_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_linked_service', supports_no_wait=True)
g.custom_command('set', 'create_or_update_linked_service', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_linked_service', supports_no_wait=True)
g.custom_command('list', 'list_linked_service')
g.custom_show_command('show', 'get_linked_service')
g.custom_command('delete', 'delete_linked_service', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts dataset operations
with self.command_group('synapse dataset', synapse_dataset_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_dataset', supports_no_wait=True)
g.custom_command('set', 'create_or_update_dataset', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_dataset', supports_no_wait=True)
g.custom_command('list', 'list_datasets')
g.custom_show_command('show', 'get_dataset')
g.custom_command('delete', 'delete_dataset', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts pipeline operations
with self.command_group('synapse pipeline', synapse_pipeline_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_pipeline', supports_no_wait=True)
g.custom_command('set', 'create_or_update_pipeline', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_pipeline', supports_no_wait=True)
g.custom_command('list', 'list_pipelines')
g.custom_show_command('show', 'get_pipeline')
g.custom_command('delete', 'delete_pipeline', confirmation=True, supports_no_wait=True)
g.custom_command('create-run', 'create_pipeline_run')
# Data Plane Commands --Artifacts pipeline run operations
with self.command_group('synapse pipeline-run', synapse_pipeline_run_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('query-by-workspace', 'query_pipeline_runs_by_workspace')
g.custom_show_command('show', 'get_pipeline_run')
g.custom_command('cancel', 'cancel_pipeline_run', confirmation=True)
with self.command_group('synapse activity-run', synapse_pipeline_run_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('query-by-pipeline-run', 'query_activity_runs')
# Data Plane Commands --Artifacts trigger operations
with self.command_group('synapse trigger', synapse_trigger_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_trigger', supports_no_wait=True)
g.custom_command('set', 'create_or_update_trigger', deprecate_info=g.deprecate(redirect='update'), supports_no_wait=True)
g.custom_command('update', 'create_or_update_trigger', supports_no_wait=True)
g.custom_command('list', 'list_triggers')
g.custom_show_command('show', 'get_trigger')
g.custom_command('delete', 'delete_trigger', confirmation=True, supports_no_wait=True)
g.custom_command('subscribe-to-event', 'subscribe_trigger_to_events', supports_no_wait=True)
g.custom_command('get-event-subscription-status', 'get_event_subscription_status')
g.custom_command('unsubscribe-from-event', 'unsubscribe_trigger_from_events', supports_no_wait=True)
g.custom_command('start', 'start_trigger', supports_no_wait=True)
g.custom_command('stop', 'stop_trigger', supports_no_wait=True)
g.custom_wait_command('wait', 'get_trigger')
# Data Plane Commands --Artifacts trigger run operations
with self.command_group('synapse trigger-run', synapse_trigger_run_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('rerun', 'rerun_trigger')
g.custom_command('cancel', 'cancel_trigger')
g.custom_command('query-by-workspace', 'query_trigger_runs_by_workspace')
# Data Plane Commands --Artifacts data flow operations
with self.command_group('synapse data-flow', synapse_data_flow_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_data_flow', supports_no_wait=True)
g.custom_command('set', 'create_or_update_data_flow', supports_no_wait=True)
g.custom_command('list', 'list_data_flows')
g.custom_show_command('show', 'get_data_flow')
g.custom_command('delete', 'delete_data_flow', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts notebook operations
with self.command_group('synapse notebook', synapse_notebook_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('create', 'create_or_update_notebook', supports_no_wait=True)
g.custom_command('set', 'create_or_update_notebook', supports_no_wait=True)
g.custom_command('import', 'create_or_update_notebook', supports_no_wait=True)
g.custom_command('list', 'list_notebooks')
g.custom_show_command('show', 'get_notebook')
g.custom_command('export', 'export_notebook')
g.custom_command('delete', 'delete_notebook', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Artifacts library operations
with self.command_group('synapse workspace-package', synapse_library_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('upload', 'upload_workspace_package')
g.custom_command('upload-batch', 'workspace_package_upload_batch')
g.custom_command('list', 'list_workspace_package')
g.custom_show_command('show', 'get_workspace_package')
g.custom_command('delete', 'delete_workspace_package', confirmation=True, supports_no_wait=True)
# Data Plane Commands --Managed private endpoints operations
with self.command_group('synapse managed-private-endpoints', synapse_managed_private_endpoints_sdk,
custom_command_type=get_custom_sdk('managedprivateendpoints', None)) as g:
g.custom_show_command('show', 'get_Managed_private_endpoints')
g.custom_command('create', 'create_Managed_private_endpoints')
g.custom_command('list', 'list_Managed_private_endpoints')
g.custom_command('delete', 'delete_Managed_private_endpoints', confirmation=True)
# Data Plane Commands --Artifacts Spark job definitions operations
with self.command_group('synapse spark-job-definition', synapse_spark_job_definition_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('list', 'list_spark_job_definition')
g.custom_show_command('show', 'get_spark_job_definition')
g.custom_command('delete', 'delete_spark_job_definition', supports_no_wait=True)
g.custom_command('create', 'create_or_update_spark_job_definition', supports_no_wait=True)
g.custom_wait_command('wait', 'get_spark_job_definition')
g.custom_command('update', 'create_or_update_spark_job_definition', supports_no_wait=True)
with self.command_group('synapse sql-script', synapse_sql_script_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('list', 'list_sql_scripts')
g.custom_show_command('show', 'get_sql_script')
g.custom_command('delete', 'delete_sql_script', supports_no_wait=True)
g.custom_command('create', 'create_sql_script', supports_no_wait=True)
g.custom_wait_command('wait', 'get_sql_script')
g.custom_show_command('export', 'export_sql_script')
g.custom_command('import', 'create_sql_script', supports_no_wait=True)
with self.command_group('synapse', is_preview=True):
pass
# synapse kusto pool Commands --Managed kusto pool Commands
with self.command_group('synapse kusto pool',
command_type=synapse_kusto_pool_sdk,
custom_command_type=get_custom_sdk('kustopool',
cf_kusto_pool),
client_factory=cf_kusto_pool) as g:
g.custom_command('create', 'synapse_kusto_pool_create', supports_no_wait=True)
g.custom_command('update', 'synapse_kusto_pool_update', supports_no_wait=True)
g.custom_command('add-language-extension', 'synapse_kusto_pool_add_language_extension', supports_no_wait=True)
g.custom_command('detach-follower-database', 'synapse_kusto_pool_detach_follower_database', supports_no_wait=True)
g.custom_command('remove-language-extension', 'synapse_kusto_pool_remove_language_extension', supports_no_wait=True)
with self.command_group('synapse kql-script', command_type=synapse_kusto_script_sdk,
custom_command_type=get_custom_sdk('kustopool', cf_kusto_script),
client_factory=cf_kusto_script) as g:
g.custom_show_command('show', 'synapse_kusto_script_show')
g.custom_command('create', 'synapse_kusto_script_create', supports_no_wait=True)
g.custom_command('import', 'synapse_kusto_script_create', supports_no_wait=True)
g.custom_command('delete', 'synapse_kusto_script_delete', supports_no_wait=True, confirmation=True)
g.custom_command('list', 'synapse_kusto_script_list', client_factory=cf_kusto_scripts)
g.custom_command('export', 'synapse_kusto_script_export')
g.custom_wait_command('wait', 'synapse_kusto_script_show')
with self.command_group('synapse link-connection', synapse_link_connection_sdk,
custom_command_type=get_custom_sdk('artifacts', None)) as g:
g.custom_command('list', 'list_link_connection')
g.custom_show_command('show', 'get_link_connection')
g.custom_command('delete', 'delete_link_connection')
g.custom_command('create', 'create_or_update_link_connection')
g.custom_command('update', 'create_or_update_link_connection')
g.custom_command('get-status', 'get_link_connection_status')
        g.custom_command('start', 'start_link_connection')
g.custom_command('stop', 'stop_link_connection')
g.custom_command('list-link-tables', 'synapse_list_link_table')
g.custom_command('edit-link-tables', 'synapse_edit_link_table')
g.custom_command('get-link-tables-status', 'synapse_get_link_tables_status')
g.custom_command('update-landing-zone-credential', 'synapse_update_landing_zone_credential')
|
34,248 |
def encode_string(s: Text) -> Text:
"""Return a encoded python string"""
def replace(match):
return ESCAPE_DCT[match.group(0)]
return ESCAPE.sub(replace, s)
|
def encode_string(s: Text) -> Text:
"""Return a encoded python string."""
def replace(match):
return ESCAPE_DCT[match.group(0)]
return ESCAPE.sub(replace, s)
|
31,891 |
def get_tag_groups(tag_groups: list) -> list:
"""
    Returns the tag groups as a list of the group names.
Args:
tag_groups: list of all groups
Returns:
        The tag groups as a list of the group names
"""
    # Tag_groups is a list of dictionaries, each containing a tag group name and its description
results = []
if len(tag_groups) > 0:
for group in tag_groups:
tag_group_name = group.get('tag_group_name', '')
if tag_group_name:
results.append(tag_group_name)
return results
|
def get_tag_groups_names(tag_groups: list) -> list:
"""
    Returns the tag groups as a list of the group names.
Args:
tag_groups: list of all groups
Returns:
        The tag groups as a list of the group names
"""
    # Tag_groups is a list of dictionaries, each containing a tag group name and its description
results = []
if len(tag_groups) > 0:
for group in tag_groups:
tag_group_name = group.get('tag_group_name', '')
if tag_group_name:
results.append(tag_group_name)
return results
|
34,766 |
def story_graph_from_paths(
files: List[Text],
domain: Domain,
template_variables: Optional[Dict] = None,
use_e2e: bool = False,
exclusion_percentage: Optional[int] = None,
) -> StoryGraph:
"""Returns the StoryGraph from paths."""
from rasa.shared.core.training_data import loading
story_steps = loading.load_data_from_files(
files, domain, template_variables, use_e2e, exclusion_percentage
)
return StoryGraph(story_steps)
|
def story_graph_from_paths(
files: List[Text],
domain: Domain,
template_variables: Optional[Dict] = None,
use_e2e: bool = False,
exclusion_percentage: Optional[int] = None,
) -> StoryGraph:
"""Returns the `StoryGraph` from paths."""
from rasa.shared.core.training_data import loading
story_steps = loading.load_data_from_files(
files, domain, template_variables, use_e2e, exclusion_percentage
)
return StoryGraph(story_steps)
|
55,386 |
def delete_run(run_id):
"""
Deletes a run with the given ID.
:param run_id: Unique identifier for the run to delete.
.. code-block:: python
:caption: Example
import mlflow
# Set existing run_ids to delete
run_ids = ["13ee9e661cbf4095a7c92cc55b4e12b4", "948fbf2d0b7f4056b3dd4914845a1e1b"]
# Delete run_ids and fetch the results.
        # Note that runs are not actually deleted, only lifecycle stage is set to "deleted"
[mlflow.delete_run(run_id) for run_id in run_ids]
[print("run_id={}; lifecycle_stage={}".format(run_id,
mlflow.get_run(run_id).info.lifecycle_stage)) for run_id in run_ids]
.. code-block:: text
:caption: Output
run_id=13ee9e661cbf4095a7c92cc55b4e12b4; lifecycle_stage=deleted
run_id=948fbf2d0b7f4056b3dd4914845a1e1b; lifecycle_stage=deleted
"""
MlflowClient().delete_run(run_id)
|
def delete_run(run_id):
"""
Deletes a run with the given ID.
:param run_id: Unique identifier for the run to delete.
.. code-block:: python
:caption: Example
import mlflow
# Set existing run_ids to delete
run_ids = ["13ee9e661cbf4095a7c92cc55b4e12b4", "948fbf2d0b7f4056b3dd4914845a1e1b"]
# Delete run_ids and fetch the results.
        # Note that runs are not actually deleted, only lifecycle stage is set to "deleted"
for run_id in run_ids:
mlflow.delete_run(run_id)
[print("run_id={}; lifecycle_stage={}".format(run_id,
mlflow.get_run(run_id).info.lifecycle_stage)) for run_id in run_ids]
.. code-block:: text
:caption: Output
run_id=13ee9e661cbf4095a7c92cc55b4e12b4; lifecycle_stage=deleted
run_id=948fbf2d0b7f4056b3dd4914845a1e1b; lifecycle_stage=deleted
"""
MlflowClient().delete_run(run_id)
|
22,019 |
def correlation_test():
df = vaex.example()
# A single column pair
desired = df.correlation('x', 'y')
expected = np.array(-0.066913)
np.testing.assert_array_almost_equal(desired, expected)
# A list of columns
desired = df.correlation(x=['x', 'y', 'z'])
expected = np.array([[ 1.00000000, -0.06691309, -0.02656313],
[-0.06691309, 1.00000000, 0.03083857],
[-0.02656313, 0.03083857, 1.00000000]])
np.testing.assert_array_almost_equal(desired, expected)
# A list of columns and a single target
desired = df.correlation(x=['x', 'y', 'z'], y='vx')
expected = np.array([-0.00779179, 0.01804911, -0.02175331])
np.testing.assert_array_almost_equal(desired, expected)
# A list of columns and a list of targets
desired = df.correlation(x=['x', 'y', 'z'], y=['vx', 'vy'])
expected = np.array(([-0.00779179, 0.01804911, -0.02175331],
[0.00014019, -0.00411498, 0.02988355]))
# No arguments (entire DataFrame)
desired = df[['x', 'y', 'z']].correlation(x=None, y=None)
expected = np.array([[ 1.00000000, -0.06691309, -0.02656313],
[-0.06691309, 1.00000000, 0.03083857],
[-0.02656313, 0.03083857, 1.00000000]])
|
def test_correlation():
df = vaex.example()
# A single column pair
desired = df.correlation('x', 'y')
expected = np.array(-0.066913)
np.testing.assert_array_almost_equal(desired, expected)
# A list of columns
desired = df.correlation(x=['x', 'y', 'z'])
expected = np.array([[ 1.00000000, -0.06691309, -0.02656313],
[-0.06691309, 1.00000000, 0.03083857],
[-0.02656313, 0.03083857, 1.00000000]])
np.testing.assert_array_almost_equal(desired, expected)
# A list of columns and a single target
desired = df.correlation(x=['x', 'y', 'z'], y='vx')
expected = np.array([-0.00779179, 0.01804911, -0.02175331])
np.testing.assert_array_almost_equal(desired, expected)
# A list of columns and a list of targets
desired = df.correlation(x=['x', 'y', 'z'], y=['vx', 'vy'])
expected = np.array(([-0.00779179, 0.01804911, -0.02175331],
[0.00014019, -0.00411498, 0.02988355]))
# No arguments (entire DataFrame)
desired = df[['x', 'y', 'z']].correlation(x=None, y=None)
expected = np.array([[ 1.00000000, -0.06691309, -0.02656313],
[-0.06691309, 1.00000000, 0.03083857],
[-0.02656313, 0.03083857, 1.00000000]])
|
32,044 |
def update_list_name(args: dict, sg):
listID = args.get('list_id')
listName = args.get('updated_list_name')
data = {"name": listName}
response = sg.client.marketing.lists._(listID).patch(request_body=data)
if response.status_code == 200:
rBody = response.body
body = json.loads(rBody.decode("utf-8"))
ec = {'Sendgrid.updatedList': body}
md = tableToMarkdown('List Name has been updated successfully ', body)
return {
'ContentsFormat': formats['json'],
'Type': entryTypes['note'],
'Contents': body,
'HumanReadable': md,
'EntryContext': ec
}
else:
return 'List name update has been failed: ' + str(response.body)
|
def update_list_name(args: dict, sg):
listID = args.get('list_id')
listName = args.get('updated_list_name')
data = {"name": listName}
response = sg.client.marketing.lists._(listID).patch(request_body=data)
if response.status_code == 200:
rBody = response.body
body = json.loads(rBody.decode("utf-8"))
ec = {'Sendgrid.updatedList': body}
md = tableToMarkdown('List Name has been updated successfully ', body)
return {
'ContentsFormat': formats['json'],
'Type': entryTypes['note'],
'Contents': body,
'HumanReadable': md,
'EntryContext': ec
}
else:
return 'Failed to update list name: ' + str(response.body)
|
58,419 |
def subaward_filter(filters, for_downloads=False):
queryset = SubawardView.objects.all()
recipient_scope_q = Q(recipient_location_country_code="USA") | Q(recipient_location_country_name="UNITED STATES")
pop_scope_q = Q(pop_country_code="USA") | Q(pop_country_name="UNITED STATES")
for key, value in filters.items():
if value is None:
raise InvalidParameterException("Invalid filter: " + key + " has null as its value.")
key_list = [
"keywords",
"elasticsearch_keyword",
"time_period",
"award_type_codes",
"prime_and_sub_award_types",
"agencies",
"legal_entities",
"recipient_search_text",
"recipient_scope",
"recipient_locations",
"recipient_type_names",
"place_of_performance_scope",
"place_of_performance_locations",
"award_amounts",
"award_ids",
"program_numbers",
"naics_codes",
"psc_codes",
"contract_pricing_type_codes",
"set_aside_type_codes",
"extent_competed_type_codes",
TasCodes.underscore_name,
TreasuryAccounts.underscore_name,
]
if key not in key_list:
raise InvalidParameterException("Invalid filter: " + key + " does not exist.")
if key == "keywords":
def keyword_parse(keyword):
# keyword_ts_vector & award_ts_vector are Postgres TS_vectors.
# keyword_ts_vector = recipient_name + psc_description + subaward_description
# award_ts_vector = piid + fain + uri + subaward_number
filter_obj = Q(keyword_ts_vector=keyword) | Q(award_ts_vector=keyword)
# Commenting out until NAICS is associated with subawards in DAIMS 1.3.1
# if keyword.isnumeric():
# filter_obj |= Q(naics_code__contains=keyword)
if len(keyword) == 4 and PSC.objects.filter(code__iexact=keyword).exists():
filter_obj |= Q(product_or_service_code__iexact=keyword)
return filter_obj
filter_obj = Q()
for keyword in value:
filter_obj |= keyword_parse(keyword)
potential_duns = list(filter((lambda x: len(x) > 7 and len(x) < 10), value))
if len(potential_duns) > 0:
filter_obj |= Q(recipient_unique_id__in=potential_duns) | Q(
parent_recipient_unique_id__in=potential_duns
)
queryset = queryset.filter(filter_obj)
elif key == "elasticsearch_keyword":
keyword = value
transaction_ids = elasticsearch_helper.get_download_ids(keyword=keyword, field="transaction_id")
# flatten IDs
transaction_ids = list(itertools.chain.from_iterable(transaction_ids))
logger.info("Found {} transactions based on keyword: {}".format(len(transaction_ids), keyword))
transaction_ids = [str(transaction_id) for transaction_id in transaction_ids]
queryset = queryset.filter(latest_transaction_id__isnull=False)
# Prepare a SQL snippet to include in the predicate for searching an array of transaction IDs
sql_fragment = '"subaward_view"."latest_transaction_id" = ANY(\'{{{}}}\'::int[])' # int[] -> int array type
queryset = queryset.extra(where=[sql_fragment.format(",".join(transaction_ids))])
elif key == "time_period":
min_date = API_SEARCH_MIN_DATE
if for_downloads:
min_date = API_MIN_DATE
queryset &= combine_date_range_queryset(value, SubawardView, min_date, API_MAX_DATE)
elif key == "award_type_codes":
queryset = queryset.filter(prime_award_type__in=value)
elif key == "prime_and_sub_award_types":
award_types = value.get("sub_awards")
if award_types:
queryset = queryset.filter(award_type__in=award_types)
elif key == "agencies":
# TODO: Make function to match agencies in award filter throwing dupe error
funding_toptier = Q()
funding_subtier = Q()
awarding_toptier = Q()
awarding_subtier = Q()
for v in value:
type = v["type"]
tier = v["tier"]
name = v["name"]
if type == "funding":
if tier == "toptier":
funding_toptier |= Q(funding_toptier_agency_name=name)
elif tier == "subtier":
if "toptier_name" in v:
funding_subtier |= Q(funding_subtier_agency_name=name) & Q(
funding_toptier_agency_name=v["toptier_name"]
)
else:
funding_subtier |= Q(funding_subtier_agency_name=name)
elif type == "awarding":
if tier == "toptier":
awarding_toptier |= Q(awarding_toptier_agency_name=name)
elif tier == "subtier":
if "toptier_name" in v:
awarding_subtier |= Q(awarding_subtier_agency_name=name) & Q(
awarding_toptier_agency_name=v["toptier_name"]
)
else:
awarding_subtier |= Q(awarding_subtier_agency_name=name)
awarding_queryfilter = Q()
funding_queryfilter = Q()
# Since these are Q filters, no DB hits for boolean checks
if funding_toptier:
funding_queryfilter |= funding_toptier
if funding_subtier:
funding_queryfilter |= funding_subtier
if awarding_toptier:
awarding_queryfilter |= awarding_toptier
if awarding_subtier:
awarding_queryfilter |= awarding_subtier
queryset = queryset.filter(funding_queryfilter & awarding_queryfilter)
elif key == "legal_entities":
            # This filter key has effectively been made obsolete by recipient_search_text
msg = 'API request included "{}" key. No filtering will occur with provided value "{}"'
logger.info(msg.format(key, value))
elif key == "recipient_search_text":
def recip_string_parse(recipient_string):
upper_recipient_string = recipient_string.upper()
# recipient_name_ts_vector is a postgres TS_Vector
filter_obj = Q(recipient_name_ts_vector=upper_recipient_string)
if len(upper_recipient_string) == 9 and upper_recipient_string[:5].isnumeric():
filter_obj |= Q(recipient_unique_id=upper_recipient_string)
return filter_obj
filter_obj = Q()
for recipient in value:
filter_obj |= recip_string_parse(recipient)
queryset = queryset.filter(filter_obj)
elif key == "recipient_scope":
if value == "domestic":
queryset = queryset.filter(recipient_scope_q)
elif value == "foreign":
queryset = queryset.exclude(recipient_scope_q)
else:
raise InvalidParameterException("Invalid filter: recipient_scope type is invalid.")
elif key == "recipient_locations":
queryset = queryset.filter(geocode_filter_locations("recipient_location", value))
elif key == "recipient_type_names":
if len(value) != 0:
queryset = queryset.filter(business_categories__overlap=value)
elif key == "place_of_performance_scope":
if value == "domestic":
queryset = queryset.filter(pop_scope_q)
elif value == "foreign":
queryset = queryset.exclude(pop_scope_q)
else:
raise InvalidParameterException("Invalid filter: place_of_performance_scope is invalid.")
elif key == "place_of_performance_locations":
queryset = queryset.filter(geocode_filter_locations("pop", value))
elif key == "award_amounts":
queryset &= total_obligation_queryset(value, SubawardView, filters)
elif key == "award_ids":
queryset = build_award_ids_filter(queryset, value, ("piid", "fain"))
# add "naics_codes" (column naics) after NAICS are mapped to subawards
elif key in ("program_numbers", "psc_codes", "contract_pricing_type_codes"):
filter_to_col = {
"program_numbers": "cfda_number",
"psc_codes": "product_or_service_code",
"contract_pricing_type_codes": "type_of_contract_pricing",
}
in_query = [v for v in value]
if len(in_query) != 0:
queryset &= SubawardView.objects.filter(**{"{}__in".format(filter_to_col[key]): in_query})
elif key in ("set_aside_type_codes", "extent_competed_type_codes"):
or_queryset = Q()
filter_to_col = {"set_aside_type_codes": "type_set_aside", "extent_competed_type_codes": "extent_competed"}
in_query = [v for v in value]
for v in value:
or_queryset |= Q(**{"{}__exact".format(filter_to_col[key]): in_query})
queryset = queryset.filter(or_queryset)
        # Because these two filters OR with each other, we need to know about the presence of both filters to know what to do
# This filter was picked arbitrarily to be the one that checks for the other
elif key == TasCodes.underscore_name:
if TreasuryAccounts.underscore_name in filters.keys():
q = TasCodes.build_tas_codes_filter(queryset, value)
q |= TreasuryAccounts.build_tas_codes_filter(queryset, filters[TreasuryAccounts.underscore_name])
queryset = queryset.filter(q)
else:
queryset = queryset.filter(TasCodes.build_tas_codes_filter(queryset, value))
elif key == TreasuryAccounts.underscore_name and TasCodes.underscore_name not in filters.keys():
queryset = queryset.filter(TreasuryAccounts.build_tas_codes_filter(queryset, value))
return queryset
|
def subaward_filter(filters, for_downloads=False):
queryset = SubawardView.objects.all()
recipient_scope_q = Q(recipient_location_country_code="USA") | Q(recipient_location_country_name="UNITED STATES")
pop_scope_q = Q(pop_country_code="USA") | Q(pop_country_name="UNITED STATES")
for key, value in filters.items():
if value is None:
raise InvalidParameterException("Invalid filter: " + key + " has null as its value.")
key_list = [
"keywords",
"elasticsearch_keyword",
"time_period",
"award_type_codes",
"prime_and_sub_award_types",
"agencies",
"legal_entities",
"recipient_search_text",
"recipient_scope",
"recipient_locations",
"recipient_type_names",
"place_of_performance_scope",
"place_of_performance_locations",
"award_amounts",
"award_ids",
"program_numbers",
"naics_codes",
"psc_codes",
"contract_pricing_type_codes",
"set_aside_type_codes",
"extent_competed_type_codes",
TasCodes.underscore_name,
TreasuryAccounts.underscore_name,
]
if key not in key_list:
raise InvalidParameterException("Invalid filter: " + key + " does not exist.")
if key == "keywords":
def keyword_parse(keyword):
# keyword_ts_vector & award_ts_vector are Postgres TS_vectors.
# keyword_ts_vector = recipient_name + psc_description + subaward_description
# award_ts_vector = piid + fain + uri + subaward_number
filter_obj = Q(keyword_ts_vector=keyword) | Q(award_ts_vector=keyword)
# Commenting out until NAICS is associated with subawards in DAIMS 1.3.1
# if keyword.isnumeric():
# filter_obj |= Q(naics_code__contains=keyword)
if len(keyword) == 4 and PSC.objects.filter(code__iexact=keyword).exists():
filter_obj |= Q(product_or_service_code__iexact=keyword)
return filter_obj
filter_obj = Q()
for keyword in value:
filter_obj |= keyword_parse(keyword)
potential_duns = list(filter((lambda x: len(x) > 7 and len(x) < 10), value))
if len(potential_duns) > 0:
filter_obj |= Q(recipient_unique_id__in=potential_duns) | Q(
parent_recipient_unique_id__in=potential_duns
)
queryset = queryset.filter(filter_obj)
elif key == "elasticsearch_keyword":
keyword = value
transaction_ids = elasticsearch_helper.get_download_ids(keyword=keyword, field="transaction_id")
# flatten IDs
transaction_ids = list(itertools.chain.from_iterable(transaction_ids))
logger.info("Found {} transactions based on keyword: {}".format(len(transaction_ids), keyword))
transaction_ids = [str(transaction_id) for transaction_id in transaction_ids]
queryset = queryset.filter(latest_transaction_id__isnull=False)
# Prepare a SQL snippet to include in the predicate for searching an array of transaction IDs
sql_fragment = '"subaward_view"."latest_transaction_id" = ANY(\'{{{}}}\'::int[])' # int[] -> int array type
queryset = queryset.extra(where=[sql_fragment.format(",".join(transaction_ids))])
elif key == "time_period":
min_date = API_SEARCH_MIN_DATE
if for_downloads:
min_date = API_MIN_DATE
queryset &= combine_date_range_queryset(value, SubawardView, min_date, API_MAX_DATE)
elif key == "award_type_codes":
queryset = queryset.filter(prime_award_type__in=value)
elif key == "prime_and_sub_award_types":
award_types = value.get("sub_awards")
if award_types:
queryset = queryset.filter(award_type__in=award_types)
elif key == "agencies":
# TODO: Make function to match agencies in award filter throwing dupe error
funding_toptier = Q()
funding_subtier = Q()
awarding_toptier = Q()
awarding_subtier = Q()
for v in value:
type = v["type"]
tier = v["tier"]
name = v["name"]
if type == "funding":
if tier == "toptier":
funding_toptier |= Q(funding_toptier_agency_name=name)
elif tier == "subtier":
if "toptier_name" in v:
funding_subtier |= Q(funding_subtier_agency_name=name) & Q(
funding_toptier_agency_name=v["toptier_name"]
)
else:
funding_subtier |= Q(funding_subtier_agency_name=name)
elif type == "awarding":
if tier == "toptier":
awarding_toptier |= Q(awarding_toptier_agency_name=name)
elif tier == "subtier":
if "toptier_name" in v:
awarding_subtier |= Q(awarding_subtier_agency_name=name) & Q(
awarding_toptier_agency_name=v["toptier_name"]
)
else:
awarding_subtier |= Q(awarding_subtier_agency_name=name)
awarding_queryfilter = Q()
funding_queryfilter = Q()
# Since these are Q filters, no DB hits for boolean checks
if funding_toptier:
funding_queryfilter |= funding_toptier
if funding_subtier:
funding_queryfilter |= funding_subtier
if awarding_toptier:
awarding_queryfilter |= awarding_toptier
if awarding_subtier:
awarding_queryfilter |= awarding_subtier
queryset = queryset.filter(funding_queryfilter & awarding_queryfilter)
elif key == "legal_entities":
            # This filter key has effectively been made obsolete by recipient_search_text
msg = 'API request included "{}" key. No filtering will occur with provided value "{}"'
logger.info(msg.format(key, value))
elif key == "recipient_search_text":
def recip_string_parse(recipient_string):
upper_recipient_string = recipient_string.upper()
# recipient_name_ts_vector is a postgres TS_Vector
filter_obj = Q(recipient_name_ts_vector=upper_recipient_string)
if len(upper_recipient_string) == 9 and upper_recipient_string[:5].isnumeric():
filter_obj |= Q(recipient_unique_id=upper_recipient_string)
return filter_obj
filter_obj = Q()
for recipient in value:
filter_obj |= recip_string_parse(recipient)
queryset = queryset.filter(filter_obj)
elif key == "recipient_scope":
if value == "domestic":
queryset = queryset.filter(recipient_scope_q)
elif value == "foreign":
queryset = queryset.exclude(recipient_scope_q)
else:
raise InvalidParameterException("Invalid filter: recipient_scope type is invalid.")
elif key == "recipient_locations":
queryset = queryset.filter(geocode_filter_locations("recipient_location", value))
elif key == "recipient_type_names":
if len(value) != 0:
queryset = queryset.filter(business_categories__overlap=value)
elif key == "place_of_performance_scope":
if value == "domestic":
queryset = queryset.filter(pop_scope_q)
elif value == "foreign":
queryset = queryset.exclude(pop_scope_q)
else:
raise InvalidParameterException("Invalid filter: place_of_performance_scope is invalid.")
elif key == "place_of_performance_locations":
queryset = queryset.filter(geocode_filter_locations("pop", value))
elif key == "award_amounts":
queryset &= total_obligation_queryset(value, SubawardView, filters)
elif key == "award_ids":
queryset = build_award_ids_filter(queryset, value, ("piid", "fain"))
# add "naics_codes" (column naics) after NAICS are mapped to subawards
elif key in ("program_numbers", "psc_codes", "contract_pricing_type_codes"):
filter_to_col = {
"program_numbers": "cfda_number",
"psc_codes": "product_or_service_code",
"contract_pricing_type_codes": "type_of_contract_pricing",
}
in_query = [v for v in value]
if len(in_query) != 0:
queryset &= SubawardView.objects.filter(**{"{}__in".format(filter_to_col[key]): in_query})
elif key in ("set_aside_type_codes", "extent_competed_type_codes"):
or_queryset = Q()
filter_to_col = {"set_aside_type_codes": "type_set_aside", "extent_competed_type_codes": "extent_competed"}
in_query = [v for v in value]
for v in value:
or_queryset |= Q(**{"{}__exact".format(filter_to_col[key]): in_query})
queryset = queryset.filter(or_queryset)
        # Because these two filters OR with each other, we need to know about the presence of both filters to know what to do
# This filter was picked arbitrarily to be the one that checks for the other
elif key == TasCodes.underscore_name:
q = TasCodes.build_tas_codes_filter(queryset, value)
if TreasuryAccounts.underscore_name in filters.keys():
q |= TreasuryAccounts.build_tas_codes_filter(queryset, filters[TreasuryAccounts.underscore_name])
queryset = queryset.filter(q)
elif key == TreasuryAccounts.underscore_name and TasCodes.underscore_name not in filters.keys():
queryset = queryset.filter(TreasuryAccounts.build_tas_codes_filter(queryset, value))
return queryset
|