Python

After installing you can import SCS using

import scs

This module provides the SCS class which is initialized using:

solver = scs.SCS(data,
                cone,
                use_indirect=False,
                mkl=False,
                apple_ldl=False,
                gpu=False,
                verbose=True,
                normalize=True,
                max_iters=int(1e5),
                scale=0.1,
                adaptive_scale=True,
                eps_abs=1e-4,
                eps_rel=1e-4,
                eps_infeas=1e-7,
                alpha=1.5,
                rho_x=1e-6,
                acceleration_lookback=10,
                acceleration_interval=10,
                time_limit_secs=0,
                write_data_filename=None,
                log_csv_filename=None)

where data is a dict containing P, A, b, c, and cone is a dict containing the Cones information. The cone dict has keys corresponding to the cone types and values corresponding to either the cone length or the array that defines the cone (see the third column in Cones for the keys and what the corresponding values represent). The b and c entries must be 1d numpy arrays, and the P and A entries must be scipy sparse matrices in CSC format; if they are not in the proper format, SCS will attempt to convert them.

The use_indirect setting switches between the sparse direct Linear System Solver (the default) and the sparse indirect solver. If the MKL Pardiso direct solver for SCS is installed, it can be used by setting mkl=True. On macOS the Apple Accelerate sparse LDLT solver is included automatically and can be used by setting apple_ldl=True. If a GPU solver for SCS is installed and a GPU is available, it can be used by setting gpu=True; for the direct GPU solver based on cuDSS, also set use_indirect=False. The remaining fields are explained in Settings.
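
As a concrete sketch, here is a tiny (illustrative, made-up) QP set up in this format: one equality constraint in the zero cone followed by two nonnegativity constraints. The solve step is guarded so the snippet also runs where scs is not installed:

```python
import numpy as np
import scipy.sparse as sparse

# Tiny QP: minimize (1/2) x'Px + c'x  subject to  x0 + x1 = 1,  x >= 0.
# SCS expects Ax + s = b with s in the cone, so the constraints become
# one zero-cone row (the equality) followed by two nonnegative-cone rows.
P = sparse.csc_matrix(np.diag([1.0, 1.0]))  # P must be upper triangular; a diagonal is
A = sparse.csc_matrix(np.array([
    [1.0, 1.0],    # x0 + x1 = 1          (zero cone row)
    [-1.0, 0.0],   # -x0 + s1 = 0, s1 >= 0 (i.e. x0 >= 0)
    [0.0, -1.0],   # -x1 + s2 = 0, s2 >= 0 (i.e. x1 >= 0)
]))
b = np.array([1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])

data = dict(P=P, A=A, b=b, c=c)
cone = dict(z=1, l=2)  # 1 zero-cone row, then 2 nonnegative-cone rows

try:  # solve only if scs is installed
    import scs
    solver = scs.SCS(data, cone, verbose=False, eps_abs=1e-6, eps_rel=1e-6)
    sol = solver.solve()
    print(sol["x"])  # approximately [0.5, 0.5] by symmetry
except ImportError:
    pass
```

Note that the rows of A appear in the same order as the cones listed in the cone dict.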

Cone dict

The cone dict supports the following keys (see Cones for mathematical definitions):

Key      Value        Description
z        int          Zero cone length.
l        int          Non-negative cone length.
bu, bl   array        Box cone upper/lower bounds (length \(\text{bsize}-1\)).
q        list[int]    Second-order cone lengths.
s        list[int]    PSD cone matrix dimensions.
cs       list[int]    Complex PSD cone matrix dimensions.
ep       int          Number of primal exponential cone triples.
ed       int          Number of dual exponential cone triples.
p        list[float]  Power cone parameters in \([-1, 1]\).
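
The rows of A (and entries of b) must match the cones, so their total count equals the total cone dimension. As a sanity check this dimension can be computed from the cone dict; the helper below is an illustrative sketch (not part of the scs API) and omits the complex PSD and spectral keys:

```python
def total_cone_dim(cone):
    """Illustrative helper: total length of the dual variable y implied by a
    cone dict, i.e. the required number of rows of A and length of b."""
    dim = cone.get("z", 0)                  # zero cone
    dim += cone.get("l", 0)                 # nonnegative cone
    if "bu" in cone:                        # box cone contributes bsize = len(bu) + 1 rows
        dim += len(cone["bu"]) + 1
    dim += sum(cone.get("q", []))           # second-order cones
    # PSD cones are passed as packed triangles: n*(n+1)/2 entries each
    dim += sum(n * (n + 1) // 2 for n in cone.get("s", []))
    dim += 3 * cone.get("ep", 0)            # each exponential cone is a triple
    dim += 3 * cone.get("ed", 0)
    dim += 3 * len(cone.get("p", []))       # each power cone is a triple
    return dim

cone = {"z": 1, "l": 2, "q": [3], "s": [2], "ep": 1, "p": [0.5]}
print(total_cone_dim(cone))  # 1 + 2 + 3 + 3 + 3 + 3 = 15
```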

Spectral cone keys (require spectral cone build):

Key           Value      Description
d             list[int]  Log-determinant cone matrix dimensions.
nuc_m, nuc_n  list[int]  Nuclear norm cone matrix row/column dimensions (must be equal length, \(m_i \geq n_i\)).
ell1          list[int]  \(\ell_1\) norm cone vector dimensions.
sl_n, sl_k    list[int]  Sum-of-largest-eigenvalues cone matrix dimensions and \(k\) values (must be equal length, \(0 < k_i < n_i\)).

Then to solve the problem call:

sol = solver.solve(warm_start=True, x=None, y=None, s=None)

where warm_start indicates whether this solve should reuse the previous solution as a warm start (on the first solve the iterates are initialized at zero). A good warm start can reduce the overall number of iterations required to solve a problem. The 1d numpy arrays x, y, s are optional warm-start overrides for when you wish to set the starting point manually rather than use the solution of the previous solve.

At termination sol is a dict with fields x, y, s, info, where x, y, s contain the primal-dual solution or the certificate of infeasibility, and info is a dict containing the solve Return information. Detailed Anderson acceleration diagnostics are available in sol["info"]["aa_stats"].

To re-use the workspace and solve a similar problem with new b and/or c data, we can update the solver using:

solver.update(b=new_b, c=new_c)  # update b and c vectors (can be None)
solver.solve()  # solve new problem with updated b and c
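
For instance, a parametric family of problems can share one workspace. The toy LP below (our illustrative data, guarded so it only solves when scs is installed) minimizes c'x subject to x >= t for several values of t, updating only b between solves; each solve is warm-started from the previous one:

```python
import numpy as np
import scipy.sparse as sparse

# LP family: minimize c'x subject to x >= t, encoded as -x + s = -t, s >= 0.
A = sparse.csc_matrix(-np.eye(2))
c = np.array([1.0, 2.0])
cone = dict(l=2)

solutions = []
try:
    import scs
    solver = scs.SCS(dict(A=A, b=np.zeros(2), c=c), cone, verbose=False)
    for t in [0.0, 1.0, 2.0]:
        solver.update(b=-np.array([t, t]))  # only b changes; the factorization is reused
        sol = solver.solve()                # warm-started from the previous solve
        solutions.append(sol["x"])          # approximately [t, t]
except ImportError:
    solutions = None
```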