The real and paper bots produce radically different results

I am writing here because the issue seems bigger than a simple bug.

  1. The same strategy was started at the same time (40 seconds apart) on Bybit Spot and Bybit Paper Spot.
  2. The same number of trades was executed at the same times.
  3. Differences in entry and exit prices mean that a transaction that is profitable on paper results in a loss in the real world.
  4. Backtesting results are identical in both cases, matching the paper version.

I can understand backtesting producing different results than the real world. However, if the bot produces different results in paper and real mode, I lose faith in the underlying assumptions.

The cryptocurrency used (Vega) is in a sustained downtrend, but this should in no way affect the consistency of the data between paper and real mode.

The test amount is $1, as the previous bot burned through a real $30 in less than 12 hours.

Paper bot (14% gain):
https://app.gainium.io/bot/66f3a5653433d0f1a2d4dc91

Real bot (24% loss):
https://app.gainium.io/bot/66f3a53e3433d0f1a2d4dbd1

BTW, in both cases I get a ‘bot not found’ message; reloading helps.

Backtesting and paper trading do not take real market conditions into account, so differences are very likely. For example, volatility, trading volume, and delays can cause slippage in the real world that isn’t present in the simulation.

Vega looks like a token with low liquidity. In real trading there is slippage; in paper trading there is not. Orders on low-liquidity tokens are executed in paper trading at whatever price is triggered, which can differ greatly from a real fill. We have no way to know liquidity and slippage in advance; this is a known limitation of paper trading. If you want more accurate results, I suggest trading tokens with higher liquidity.
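To illustrate the point above, here is a minimal sketch (not Gainium code; the order book levels are made-up numbers for a hypothetical thin market) of how a market buy “walks” a sparse order book, so the volume-weighted fill price ends up worse than the quoted best ask that a paper bot would use:

```python
def average_fill_price(order_book_asks, qty):
    """Walk ask levels (price, size) and return the volume-weighted fill price."""
    filled = 0.0
    cost = 0.0
    for price, size in order_book_asks:
        take = min(size, qty - filled)  # take as much as this level offers
        filled += take
        cost += take * price
        if filled >= qty:
            break
    if filled < qty:
        raise ValueError("not enough liquidity to fill the order")
    return cost / filled

# Hypothetical thin book: best ask 0.0100, sparse depth behind it.
asks = [(0.0100, 500), (0.0102, 300), (0.0105, 1200)]

paper_price = asks[0][0]                    # paper trading fills at the trigger price
real_price = average_fill_price(asks, 1500)  # real fill walks deeper levels
slippage_pct = (real_price - paper_price) / paper_price * 100
print(f"paper: {paper_price:.4f}  real: {real_price:.4f}  slippage: {slippage_pct:.2f}%")
```

With these made-up levels the real fill averages about 2.7% above the paper price, which on a round trip is already enough to turn a small paper gain into a real loss.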

For this reason, I have no complaint about backtesting. However, if a paper bot does not simulate reality, why use it? What is the use of NCAP stars that say nothing about actual safety?

I understand the possible limitations, for example low reliability for trades inside one candle (BTW, that’s not the case here), or a trailing stop-loss that should not be set too small.

So if a paper bot (not backtesting) has limitations, they should be clearly stated. Otherwise it provides false information that results in real losses.

I am not complaining; I just want an answer as to what errors are possible and when we can trust paper bots.

This is because I assume that paper bots only give misleading results in certain situations. Which ones?

Thank you, this is the kind of answer I was looking for. Admittedly, $50,000 of volume on Bybit doesn’t seem very small to me, but perhaps it’s not enough to avoid slippage.

The main difference with paper bots is that they run in real time and therefore may have access to intrabar values of the current candle. Otherwise they perform essentially the same steps as backtests and should therefore give almost identical results for the same time frame.
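A small sketch of that intrabar difference (again not Gainium code; the tick prices and trigger level are invented for illustration): a real-time paper bot reacts to the first tick that crosses a trigger, while a candle-close backtest only knows the finished candle’s OHLC and typically assumes a fill at the trigger price itself.

```python
# Made-up intrabar tick prices within one candle of a hypothetical token.
ticks = [0.0100, 0.0097, 0.0094, 0.0099, 0.0101]
candle = {"open": ticks[0], "high": max(ticks), "low": min(ticks), "close": ticks[-1]}

buy_trigger = 0.0095  # hypothetical entry rule: buy when price drops to this level

# Real-time paper bot: fills at the first tick at or below the trigger.
paper_fill = next((p for p in ticks if p <= buy_trigger), None)

# Candle-based backtest: only sees that the candle's low touched the trigger,
# so it assumes a fill exactly at the trigger price.
backtest_fill = buy_trigger if candle["low"] <= buy_trigger else None

print(paper_fill, backtest_fill)  # 0.0094 vs 0.0095 — same trade, different price
```

Both engines open the same trade, but at slightly different prices; on a tight trailing stop or a low-priced token, that small gap can compound into visibly different results.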