# Modelling CNECs and range actions

## Used input data

| Name | Symbol | Details |
|------|--------|---------|
| FlowCnecs | \(c \in \mathcal{C}\) | Set of FlowCnecs. Note that FlowCnecs are all the CBCOs for which the flow is computed in the MILP, either:<br>- because their flow is optimized (optimized FlowCnec = CNEC),<br>- because their flow is monitored, and must not exceed its threshold (monitored FlowCnec = MNEC),<br>- or both |
| RangeActions | \(r,s \in \mathcal{RA}\) | Set of RangeActions and the states on which they are applied; can be PSTs, HVDCs, or injection range actions |
| RangeActions | \(r \in \mathcal{RA(s)}\) | Set of RangeActions available on state \(s\); can be PSTs, HVDCs, or injection range actions |
| Iteration number | \(n\) | Number of the current iteration |
| ReferenceFlow | \(f_{n}(c)\) | Reference flow of FlowCnec \(c\): the flow at the beginning of the current iteration of the MILP, around which the sensitivities are computed |
| PrePerimeterSetpoints | \(\alpha _0(r)\) | Setpoint of RangeAction \(r\) at the beginning of the optimization |
| ReferenceSetpoints | \(\alpha _n(r)\) | Setpoint of RangeAction \(r\) at the beginning of the current iteration of the MILP, around which the sensitivities are computed |
| Sensitivities | \(\sigma _{n}(r,c,s)\) | Sensitivity of RangeAction \(r\) on FlowCnec \(c\) for state \(s\) |
| Previous RA setpoint | \(A_{n-1}(r,s)\) | Optimal setpoint of RangeAction \(r\) on state \(s\) in the previous iteration (\(n-1\)) |

## Used parameters

| Name | Symbol | Details | Source |
|------|--------|---------|--------|
| sensitivityThreshold | | Sensitivities of RangeActions below this threshold are set to zero, thus avoiding the activation of RangeActions which have too small an impact on the flows (this can also be achieved with penaltyCost). This simplifies and speeds up the resolution of the optimization problem (which can be necessary when the problem contains integer variables). However, it also adds an approximation to the computation of the flows within the MILP, which can be tricky to handle when the MILP contains hard constraints on loop-flows or monitored FlowCnecs. | Equal to `pst-sensitivity-threshold` for PSTs, `hvdc-sensitivity-threshold` for HVDCs, and `injection-ra-sensitivity-threshold` for injection range actions |
| penaltyCost | \(c^{penalty}_{ra}\) | A small penalization of the use of the RangeActions. When several solutions are equivalent, it favours the one with the smallest change in the RangeActions' setpoints (compared to the initial situation). It also avoids the activation of RangeActions which have too small an impact on the objective function. | Equal to `pst-penalty-cost` for PSTs, `hvdc-penalty-cost` for HVDCs, and `injection-ra-penalty-cost` for injection range actions |
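The effect of the sensitivityThreshold parameter can be sketched numerically. This is a hypothetical illustration, not the actual implementation; the function and dictionary names are invented, and only the per-type thresholds correspond to the configuration keys above.

```python
# Hypothetical sketch: zeroing out sensitivities whose magnitude is below the
# threshold configured for their range-action type. Illustrative names only.

def filter_sensitivities(sensitivities, thresholds):
    """Return a copy where small sensitivities are set to zero."""
    filtered = {}
    for (ra_type, ra, cnec), sigma in sensitivities.items():
        threshold = thresholds[ra_type]  # e.g. pst-sensitivity-threshold
        filtered[(ra_type, ra, cnec)] = sigma if abs(sigma) >= threshold else 0.0
    return filtered

sensitivities = {
    ("pst", "pst_1", "cnec_1"): 0.32,    # MW per degree
    ("pst", "pst_1", "cnec_2"): 0.004,   # negligible impact, filtered out
    ("hvdc", "hvdc_1", "cnec_1"): 0.85,  # MW per MW
}
thresholds = {"pst": 0.01, "hvdc": 0.05}
print(filter_sensitivities(sensitivities, thresholds))
```

Zeroed sensitivities remove the corresponding variables' influence on the flow constraints, which is exactly the approximation mentioned in the table above.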

## Defined optimization variables

| Name | Symbol | Details | Type | Index | Unit | Lower bound | Upper bound |
|------|--------|---------|------|-------|------|-------------|-------------|
| Flow | \(F(c)\) | Flow of FlowCnec \(c\) | Real value | One variable for every element of (FlowCnecs) | MW | \(-\infty\) | \(+\infty\) |
| RA setpoint | \(A(r,s)\) | Setpoint of RangeAction \(r\) on state \(s\) | Real value | One variable for every element of (RangeActions) | Degrees for PST range actions; MW for other range actions | Range lower bound[1] | Range upper bound[1] |
| RA setpoint absolute variation | \(\Delta A(r,s)\) | Absolute setpoint variation of RangeAction \(r\) on state \(s\), from the setpoint on the previous state to "RA setpoint" | Real positive value | One variable for every element of (RangeActions) | Degrees for PST range actions; MW for other range actions | 0 | \(+\infty\) |

## Defined constraints

### Impact of RangeActions on FlowCnecs' flows

\[ \begin{equation} F(c) = f_{n}(c) + \sum_{r \in \mathcal{RA(s)}} \sigma_n (r,c,s) \cdot [A(r,s) - \alpha_{n}(r,s)] , \forall c \in \mathcal{C} \end{equation} \]

with \(s\) the state in which \(c\) is evaluated
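This flow constraint is a first-order linearization around the reference point; it can be checked numerically as below. All values and names are made up for illustration.

```python
# Minimal numeric sketch of the flow linearization:
# F(c) = f_n(c) + sum_r sigma_n(r,c,s) * (A(r,s) - alpha_n(r,s))

def linearized_flow(reference_flow, sensitivities, setpoints, reference_setpoints):
    """Flow of one FlowCnec as predicted by the first-order model of the MILP."""
    return reference_flow + sum(
        sensitivities[r] * (setpoints[r] - reference_setpoints[r])
        for r in sensitivities
    )

f_n = 120.0                                # MW, reference flow of the cnec
sigma = {"pst_1": 25.0, "hvdc_1": 0.9}     # MW/degree and MW/MW
alpha_n = {"pst_1": 0.0, "hvdc_1": 100.0}  # setpoints at the linearization point
A = {"pst_1": 2.0, "hvdc_1": 150.0}        # candidate setpoints in the MILP
print(linearized_flow(f_n, sigma, A, alpha_n))  # 120 + 25*2 + 0.9*50 = 215.0
```

Because the model is linear, it is only accurate near the reference setpoints, which is why the flows and sensitivities are recomputed at each iteration \(n\).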


### Definition of the absolute setpoint variations of the RangeActions

\[ \begin{equation} \Delta A(r,s) \geq A(r,s) - A(r,s') , \forall (r,s) \in \mathcal{RA} \end{equation} \]
\[ \begin{equation} \Delta A(r,s) \geq - A(r,s) + A(r,s') , \forall (r,s) \in \mathcal{RA} \end{equation} \]

with \(A(r,s')\) the setpoint of the last range action on the same element as \(r\) but on a state preceding \(s\). If no such range action exists, then \(A(r,s') = \alpha_{0}(r)\)
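The two inequalities above are the standard linearization of an absolute value: together with \(\Delta A(r,s) \geq 0\) and the penalization in the objective function, the smallest feasible \(\Delta A(r,s)\) is exactly \(|A(r,s) - A(r,s')|\). A small sketch of that lower bound:

```python
# With DeltaA >= A - A' and DeltaA >= -(A - A'), the smallest feasible value
# of the positive variable DeltaA (towards which the penalised objective
# drives it) is |A - A'|.

def min_feasible_abs_variation(setpoint, previous_setpoint):
    lower_bounds = (
        setpoint - previous_setpoint,     # DeltaA >= A - A'
        -(setpoint - previous_setpoint),  # DeltaA >= -(A - A')
        0.0,                              # DeltaA is a positive variable
    )
    return max(lower_bounds)

print(min_feasible_abs_variation(6.5, 4.0))   # 2.5
print(min_feasible_abs_variation(-3.0, 1.0))  # 4.0
```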


### Shrinking the allowed range

If the parameter `ra-range-shrinking` is enabled, the allowed range of the range actions is shrunk after each iteration according to the following constraints:

\[ \begin{equation} t = UB(A(r,s)) - LB(A(r,s)) \end{equation} \]
\[ \begin{equation} A(r,s) \geq A_{n-1}(r,s) - \left(\frac{2}{3}\right)^{n} \cdot t \end{equation} \]
\[ \begin{equation} A(r,s) \leq A_{n-1}(r,s) + \left(\frac{2}{3}\right)^{n} \cdot t \end{equation} \]

The value \(\frac{2}{3}\) has been chosen to force the convergence of the linear problem while still allowing the RA to go back to its initial solution if needed.
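The shrinking can be illustrated with a small sketch. The clipping to the original range is my own addition for readability: the variable bounds of "RA setpoint" already enforce it in the MILP. The PST values are invented.

```python
# Sketch of the range-shrinking constraints: at iteration n the setpoint is
# kept within +/- (2/3)^n * t of the previous optimum, where t is the width
# of the full allowed range.

def shrunk_range(previous_optimum, range_lb, range_ub, iteration):
    t = range_ub - range_lb
    radius = (2.0 / 3.0) ** iteration * t
    # Clip to the original allowed range (already enforced by variable bounds).
    return (max(range_lb, previous_optimum - radius),
            min(range_ub, previous_optimum + radius))

# A PST with range [-10, 10] degrees whose previous optimum was 4 degrees:
for n in (1, 2, 3):
    print(n, shrunk_range(4.0, -10.0, 10.0, n))
```

The interval width decays geometrically, so successive MILP solutions are forced to converge, while the early radii remain wide enough to reach back across the range.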

## Contribution to the objective function

Small penalisation for the use of RangeActions:

\[ \begin{equation} \min \sum_{r,s \in \mathcal{RA}} (c^{penalty}_{ra}(r) \Delta A(r,s)) \end{equation} \]
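A small numeric sketch of this term, with invented penalty costs and variations (the keys only echo the `pst-penalty-cost` / `hvdc-penalty-cost` parameters above):

```python
# Sketch of the penalisation term: each range action contributes its
# penalty cost times its absolute setpoint variation.

def penalty_term(penalty_costs, abs_variations):
    return sum(penalty_costs[r] * abs_variations[(r, s)]
               for (r, s) in abs_variations)

penalty_costs = {"pst_1": 0.01, "hvdc_1": 0.001}    # per-type penalty costs
abs_variations = {("pst_1", "preventive"): 2.5,     # degrees
                  ("hvdc_1", "preventive"): 50.0}   # MW
print(penalty_term(penalty_costs, abs_variations))  # 0.01*2.5 + 0.001*50 = 0.075
```

Keeping the penalty costs small ensures they only break ties between otherwise equivalent solutions, rather than competing with the main objective.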