How to model vAMM from scratch with zero users?

Perpify
9 min readJan 30, 2023

This is the second article in a series describing the technical features of the Perpify trading engine. Feel free to check the first one as well 🤝

Modeling vAMM from scratch with zero users can seem like a daunting task, and at Perpify, we’re faced with the challenging multi-parameter problem of optimizing our model. In our previous article, we discussed the concept of ADvAMM, but now the question is, how do we optimize this multi-parameter model when we have no users so far? Some may suggest using guess values for coefficients or simple functions for the parameters and testing the model in the wild.

We don’t believe in cutting corners. We want to provide our users with a high-quality, smooth, and predictable trading experience from the very beginning. That’s why we’ve built a research process and are actively working on optimizing our system.

Optimizing ADvAMM is crucial for several reasons:

  • It needs to be solvent at all times.
  • It needs to be profitable, with a profit expectation rate that can be regulated in the future (and simply >0 for now).
  • It needs to be predictable and comfortable for traders to use. This includes:
      • Adequate slippage and price impact, so traders aren’t liquidated on price jumps that don’t actually occur on the external market.
      • Adequate funding rate and swap fee values, which create incentives for self-regulation and allow successful traders to earn.

It’s important to note that simply creating a “greedy protocol” with forced profit delay and high exchange fees is not enough to ensure stability and satisfaction for traders.

In mathematical terms, the problem of optimizing ADvAMM can be formulated as follows:

b(t) — the protocol’s balance over time (the accumulated difference between what it earns and spends).

d_avg — a function computing average growth over a period of time (for example, a day).

fee_i — protocol fees: swap fees and funding payments. It is important to keep both to a minimum and to cap them at maximum allowed values, so that the system cannot optimize itself into an inappropriate state, such as fees in the dozens of percent.
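The formulation itself was published as an image in the original article; a plausible reconstruction from the definitions above (the exact objective and the caps fee_i^max are assumptions) reads:

```latex
\max_{\mathrm{fee}_i(\cdot)} \; d_{avg}\big(b(t)\big)
\quad \text{s.t.} \quad
b(t) > 0 \;\; \forall t,
\qquad 0 \le \mathrm{fee}_i \le \mathrm{fee}_i^{max}.
```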

When optimizing ADvAMM, there are several key parameters to consider:

  • Swap fee: This is a formulaic fee that takes the bid/ask price simulation into account. At Perpify, we implement two different fee functions (buy/sell), impose a limit on their difference to keep them close to each other, and reformulate them as bid & ask prices, which are familiar to traders and typical of the spot market. The problem is formulated as follows:

L(t) — Time series of the total long position value

S(t) — Time series of the total short position value.

in/out — buy or sell direction indexes (increase or decrease of L(t) and S(t))
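The accompanying formulas were also images; a hedged sketch consistent with the definitions above (the concrete fee functions and the bound ε are assumptions) might look like:

```latex
\mathrm{fee}_{in} = f_{in}\big(L(t), S(t)\big), \qquad
\mathrm{fee}_{out} = f_{out}\big(L(t), S(t)\big), \qquad
\big|\mathrm{fee}_{in} - \mathrm{fee}_{out}\big| \le \varepsilon,

p_{ask}(t) = p_{mark}(t)\,\big(1 + \mathrm{fee}_{in}\big), \qquad
p_{bid}(t) = p_{mark}(t)\,\big(1 - \mathrm{fee}_{out}\big).
```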

  • Funding fee: This is a formulaic fee that acts as a market→index magnet and creates an arbitrage opportunity while synchronizing ADvAMM with the external market. The model can be extended with the following formulations:

T_twap — a time-weighted averaging function

p_mark / p_index — mark and index price time series (regressed to functions)

g_l(t) / g_s(t) — funding payment functions (for longs and shorts)
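The original equations here were images as well; one common formulation consistent with these definitions (the normalization and signs are assumptions) is:

```latex
f(t) = \frac{T_{twap}\big(p_{mark}(t)\big) - T_{twap}\big(p_{index}(t)\big)}{T_{twap}\big(p_{index}(t)\big)}, \qquad
g_l(t) = -\,f(t)\,L(t), \qquad
g_s(t) = f(t)\,S(t).
```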

  • Sync and async pool adjustment functions: These functions change the peg price (async) or the constant k (sync), and thereby affect the mark price behavior and the pool’s sensitivity p_mark(t). These regulations create direct costs for the protocol on already opened longs and shorts, but on the other hand they can minimize subsequent losses as the pool state changes further, so optimizing these operations is definitely needed. These functional constraints are the subject of a separate article, and we will certainly cover them there.
  • Max stake / leverage / liquidation restrictions: This is a very important group of quite simple restrictions on tangible system parameters that matter to the user. For example, there must be a restriction on redeeming 100% of the pool’s reserves through the “max stake allowed” parameter, and this parameter must be sensitive to changes in the pool’s state — (k, peg, S(t), L(t)) — and adjust to current market demand. Liquidations should also be parameterized to protect the protocol from harmful liquidations, by temporarily raising the threshold of the maximum allowable effective leverage when it is in the protocol’s interest and does not incur costs.
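As a hedged illustration of a state-sensitive “max stake allowed” check, here is a minimal sketch; the function name, the cap formula, and all constants are hypothetical, not Perpify’s actual parameterization:

```python
def max_stake_allowed(k, peg, L_total, S_total, cap_ratio=0.25):
    """Cap a new position at a fraction of the pool's quote reserves,
    tightening the cap as the pool becomes imbalanced.
    Hypothetical sketch: formula and constants are illustrative only."""
    # for an x*y = k pool trading at price `peg`, quote reserves are sqrt(k * peg)
    quote_reserves = (k * peg) ** 0.5
    # imbalance in [0, 1): 0 when L(t) == S(t), approaching 1 under full dominance
    imbalance = abs(L_total - S_total) / max(L_total + S_total, 1)
    return cap_ratio * quote_reserves * (1 - imbalance)
```

With a balanced pool the cap is simply cap_ratio of quote reserves; as long or short dominance grows, the allowed stake shrinks, which is the “sensitive to (k, peg, S(t), L(t))” behavior described above.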

Optimization Environment:

To solve the optimization problem, we need to make some assumptions about parameters that are external to the protocol — such as the state of the market and the actions of Perpify users.

  • To model user stakes, i.e. L(t), S(t), we turn to market leaders for guidance. Here are GMX statistics on the size of user positions for the week:

The graph shows a distribution that can be considered close to a Poisson distribution (a variation of which we use when creating our test scenarios). In terms of the long vs short ratio, it is interesting to model three different types of situations:

  • L(t) is close to S(t) — the basic calm scenario, where the optimization task solution should give the target functions of ADvAMM parameters that we would like to strive for.
  • L(t) dominates — the stressful scenario, where the degree of L over S dominance should be chosen based on historical data and expected future real-world situations. Solving the optimization task for unlikely stressful scenarios would likely require weakening the constraints on fee values and could produce an overly “greedy” system of rules that users would not find attractive to trade with. Therefore, the solution for such a scenario should be used as support for fine-tuning the solution in the basic scenario, and then compared against it.
  • S(t) dominates — similarly to the previous point.
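A minimal sketch of such a scenario generator, assuming Poisson-distributed position sizes as suggested by the GMX statistics; the mean stake, bet count, and long share here are illustrative constants, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(42)

MEAN_SIZE_USD = 500   # assumed mean stake size
N_BETS = 200          # bets in the simulated window
LONG_SHARE = 0.5      # basic calm scenario: L(t) close to S(t)

# position sizes drawn from a Poisson distribution scaled to the mean stake
sizes = rng.poisson(lam=MEAN_SIZE_USD, size=N_BETS).astype(float)
# each bet is long with probability LONG_SHARE (raise it to model L dominance)
is_long = rng.random(N_BETS) < LONG_SHARE

L_total = sizes[is_long].sum()
S_total = sizes[~is_long].sum()
```

Changing `LONG_SHARE` to, say, 0.8 or 0.2 produces the two dominance scenarios from the list above.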

We also focus on modeling the external market by examining the index price drift. To do this, we use the Geometric Brownian Motion (GBM) model, which is a widely accepted approach to modeling market behavior. We propose considering three different scenarios:

  • A scenario where the volatility of the index price is relatively stable and there is no significant drift over the period of analysis.
  • A scenario where there is a significant upward drift in the index price over the period of analysis.
  • A scenario where there is a significant downward drift in the index price over the period of analysis.
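The three index-price scenarios can be sketched with a standard GBM simulator; the drift and volatility values below are illustrative assumptions, not fitted parameters:

```python
import numpy as np

def gbm_path(s0, mu, sigma, days, steps_per_day, seed=None):
    """Simulate an index price path under Geometric Brownian Motion:
    dS = mu * S * dt + sigma * S * dW, with time measured in days."""
    rng = np.random.default_rng(seed)
    n = days * steps_per_day
    dt = 1.0 / steps_per_day
    # under GBM the log-returns are i.i.d. normal
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return s0 * np.exp(np.concatenate([[0.0], np.cumsum(increments)]))

# the three scenarios: stable, upward drift, downward drift (per-day drift assumed)
flat = gbm_path(100, mu=0.0,   sigma=0.02, days=7, steps_per_day=1440, seed=7)
up   = gbm_path(100, mu=0.05,  sigma=0.02, days=7, steps_per_day=1440, seed=7)
down = gbm_path(100, mu=-0.05, sigma=0.02, days=7, steps_per_day=1440, seed=7)
```

Using the same seed isolates the effect of the drift term, so the three paths differ only in their trend.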

In our search for optimal market parameter values, we use a combination of approaches: numerical modeling and analytical methods. Clearly, we are not the first to tackle vAMM optimization, so the quickest and most accessible approach for us is numerical modeling of the system. Building on the experience of Drift, Perp and GMX, we have already created a Python model of ADvAMM, which simulates trading over a given time interval by playing out one of the customizable scenarios built from two groups of inputs:

  • the behavior of the index price
  • the behavior of ADvAMM users (opening/closing positions)

For example, here is how the system’s results look when simulating a weekly period of high volatility with a small upward price drift and a significant dominance of long vs short positions (which is a relatively uncomfortable scenario for the system):

from datetime import date, datetime, timedelta

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm

from model import VAMM, BetStatus, Oracle

pd.set_option('display.width', 1000)
pd.set_option("display.precision", 4)

SIMULATION_PERIOD = 7  # days
BETS_PER_DAY = 200

MS_IN_MINUTE = 60 * 1000
BET_LIFETIME_MU = 48 * 60 * MS_IN_MINUTE     # milliseconds
BET_LIFETIME_SIGMA = 12 * 60 * MS_IN_MINUTE  # milliseconds
BETS_LS_RATIO = [0.2, 0.8]  # longs vs shorts ratio

PEG = 100
K = 1000000
date_start = datetime.now() - timedelta(days=SIMULATION_PERIOD)  # start of the simulated window

vamm = VAMM(K**2, PEG, date_start)
results_df = pd.DataFrame(columns=["event", "vault", "fee_minus_distributions", "distributions"])
prices_df = pd.DataFrame(columns=["mark", "oracle", "mark_twap", "oracle_twap"])
bets_df = pd.DataFrame(columns=["long_short_diff"])
funding_df = pd.DataFrame(columns=["fr", "cum_fr_longs", "cum_fr_shorts"])

def run_simulation():
    # the body of the simulation loop is omitted in this article
    ...

In parallel, the analytical approach is quite complex but incredibly productive for us. The system’s balance, the average daily drift, and all the other constraints shown above are expressed mathematically, which makes gradient descent and genetic algorithms applicable — or, since we have specific constraints on the functions, Lagrange multipliers or Sequential Quadratic Programming (SQP). We use scipy.optimize (minimize, minimize_scalar, etc.) as well as the PYOMO and CVXPY packages.
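As a toy illustration of SQP-style constrained optimization with scipy.optimize, here is a minimal sketch; the objective, constants, and fee caps are stand-ins, not the real ADvAMM formulation:

```python
from scipy.optimize import minimize

# Toy stand-in objective: choose x = (swap_fee, funding_coeff) to maximize
# expected daily protocol revenue, with a quadratic "churn" penalty modeling
# traders leaving when fees grow. All constants are illustrative assumptions.
def neg_daily_growth(x):
    swap_fee, funding_coeff = x
    volume, imbalance = 1e6, 2e5          # assumed scenario constants
    revenue = swap_fee * volume + funding_coeff * imbalance
    churn = 50 * swap_fee**2 * volume     # high fees drive traders away
    return -(revenue - churn)             # minimize the negative -> maximize growth

res = minimize(
    neg_daily_growth,
    x0=[0.001, 0.0001],
    method='SLSQP',                       # Sequential Quadratic Programming
    bounds=[(0, 0.003), (0, 0.001)],      # fee caps keep the optimizer from "greedy" fees
)
```

Note how the bounds play exactly the role described earlier: without them, the optimizer would happily push fees far beyond what traders would accept.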

It’s all quite simple if we move step by step, adding new constraints and parameters, rather than immediately complicating the system to the maximum:

  • Represent the protocol’s balance over time as the sum of incoming and outgoing payments.
  • Incoming payments can be represented as the sum of swap fees and incoming funding payments, while outgoing payments are the sum of outgoing funding payments, regulation costs (sync, async) and, at a certain level of refinement, the protocol’s compensation costs for liquidations that were not carried out in time and where the rate “went negative.”
  • As we have already shown above, all of these components emerge from the simulation as time series and, after regression, as functions of the system state: k, peg, L(t), S(t), p_index.
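Put together, a sketch of the balance decomposition described in these steps (the cost terms c_sync, c_async, c_liq are shorthand introduced here, not the article’s notation) could be:

```latex
b(t) = \int_0^t \Big(\mathrm{fee}_{swap}(\tau) + \mathrm{funding}_{in}(\tau)\Big)\,d\tau
     \;-\; \int_0^t \Big(\mathrm{funding}_{out}(\tau) + c_{sync}(\tau) + c_{async}(\tau) + c_{liq}(\tau)\Big)\,d\tau.
```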

It is important to note that this approach to modeling implies smooth and differentiable L, S, and smooth functional fees: funding and swap respectively. This assumption is permissible: the L and S generated by a given scenario are expressed as functions of time, and a polynomial decomposition for such extrapolation (for example, we use sklearn.preprocessing.PolynomialFeatures and numpy.polyfit) allows for easy differentiation and integration.
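A minimal sketch of that smoothing step with numpy.polyfit, using a made-up noisy L(t) series in place of real scenario output:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 7, 200)   # days

# hypothetical noisy scenario series for the total long value L(t)
L = 1e6 + 5e4 * t + 1e4 * np.sin(t) + rng.normal(0, 2e3, t.size)

# regress to a smooth polynomial so L(t) becomes differentiable and integrable
coeffs = np.polyfit(t, L, deg=5)
L_poly = np.poly1d(coeffs)   # smooth approximation of L(t)
dL_dt = L_poly.deriv()       # dL/dt, now trivially available
L_int = L_poly.integ()       # antiderivative of L(t)
```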

The transition from smooth, continuous functional fee parameters to discrete swap and funding fee values is also a solvable task. In the case of funding, for example, the discrete payment is the integral of the continuous funding function over one funding period.

Finally, it should be noted that solving the optimization problem for values of the form f(p(t)), where p is a function of time, is itself a simplification: ultimately we are interested in pool parameters expressed as functions of the system state, not of time. As already mentioned, we generate states from scenario to scenario as sets of values over time, and reducing the pair p(t), q(t) to p(q) is generally a solvable task, for which we use the SymPy package, approximately like this:

from sympy import symbols, solve, Eq

# Define the functions
t, q = symbols('t q')
f = t**2  # fee function
p = t**3  # state function (regressed approximation of the dataset)

# Invert the state function: express t in terms of the state value q
t_of_q = solve(Eq(p, q), t)

# Substitute t from the inverse into f(t) to get the fee as a function of state
z = f.subs(t, t_of_q[0])

Where are we now?

At this stage, we cannot fully reveal the code and data: on the one hand, the work is not yet finished; on the other, doing so could harm the protocol at such an early stage of its development.

However, building a decentralized protocol and being devoted to the core concepts of web3, we will publish the obtained values of the ADvAMM functional parameters, as well as the entire preceding code of the modeling, and the vAMM itself after passing community tests and a public testnet.

Perpify

Decentralized NFT Perpetual Exchange that supports deep liquidity and low fees for leveraged trading a wide range of NFT collections.