An analysis of Curve's 3pool data with the Covalent dataset

Mitu Tan
4 min read · Dec 18, 2020


I recently took part in Covalent's Dungeons & Data competition. The competition is about Curve data analysis, and it provides Curve's 3pool and ypool datasets. I'd like to share how I used the datasets, but first, let's cover some background on Curve and Covalent.

If you're a DeFi user, you have probably heard of Curve. Curve is a stablecoin swap protocol with low slippage and low trading fees. It currently provides 20 liquidity pools and covers all the major stablecoins. In fact, any relatively stable coin can be integrated into Curve, not only dollar-pegged coins. 3pool is a liquidity pool composed of 3 stablecoins (DAI/USDC/USDT).

Covalent provides a unified API that brings full transparency and visibility to assets across all blockchain networks. Put simply, it makes on-chain data easily accessible to programmers without a blockchain background.

Covalent listed 3 tasks for this competition, and the first one is about the total volume traded. I used the 3pool dataset for this task. After downloading the dataset, I tried to figure out the meaning of the different columns. Many columns are self-explanatory, such as the address, the time, etc. Four columns (logged_sold_id, logged_tokens_sold, logged_bought_id, logged_tokens_bought) need some conversion. logged_sold_id identifies the sold token and logged_tokens_sold is the raw amount of the sold token. The sold token id takes 3 values, 0, 1 and 2, which map to the 3 tokens DAI, USDC and USDT. There's an easy way to figure out this mapping: just go to the 3pool trading page, where the token order matches the id order. As for the value of logged_tokens_sold, we have to divide it by 10 to the power of the token's decimals, which can be found on Etherscan. Since we only need the time column at daily granularity, we can take substring(block_signed_at, 0, 10) to get the date.

DAI's decimals on Etherscan
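To make this concrete, here is a minimal pandas sketch of the conversion. The column names come from the dataset described above; the file name and dtypes are assumptions:

```python
import pandas as pd

# Token ids follow the token order on the 3pool trading page.
TOKENS = {0: "DAI", 1: "USDC", 2: "USDT"}
# Decimals as listed on Etherscan: DAI has 18, USDC and USDT have 6.
DECIMALS = {"DAI": 18, "USDC": 6, "USDT": 6}

# "3pool.csv" is a placeholder for the downloaded dataset file.
df = pd.read_csv("3pool.csv")

# Map the numeric ids to token symbols.
df["sold_token"] = df["logged_sold_id"].map(TOKENS)
df["bought_token"] = df["logged_bought_id"].map(TOKENS)

# Divide the raw integer amounts by 10 ** decimals to get human-readable amounts.
df["sold_amount"] = df["logged_tokens_sold"].astype(float) / 10 ** df["sold_token"].map(DECIMALS)
df["bought_amount"] = df["logged_tokens_bought"].astype(float) / 10 ** df["bought_token"].map(DECIMALS)

# Keep only the date part of the timestamp, e.g. "2020-11-14".
df["day"] = df["block_signed_at"].astype(str).str[:10]
```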

After finishing the conversion we can move on to the data visualization. I used Google's Data Studio for this. If you have used BI tools before, Data Studio is easy to pick up: all you need to do is define the data source (I used a Google Sheet), generate the report and share it with others.

Here I also want to explain how I define the volume. I used the sold token amount as the unit, so the total trading volume equals sum(sold_amount). From the report we can see that the day with the maximum trading volume is 14 Nov. Some of you may recall that the Value DeFi hack happened that day: the attacker manipulated Curve's on-chain prices, which contributed to the huge trading volume. The daily average trade size is also an interesting metric; it shows that there are many large trades.
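Continuing from the converted dataframe above, each metric is a short aggregation (a sketch, assuming the columns created earlier):

```python
# Total trading volume, using the sold token amount as the unit.
total_volume = df["sold_amount"].sum()

# Daily trading volume; its maximum should fall on 2020-11-14.
daily_volume = df.groupby("day")["sold_amount"].sum()
print(daily_volume.idxmax(), daily_volume.max())

# Daily average trade size, which highlights how large the typical trades are.
daily_avg_size = df.groupby("day")["sold_amount"].mean()
```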

Now it's time for the next task: calculating slippage rates. First I had to figure out what the slippage rate is. I asked Curve community member @ne1k0 for help. He told me that the slippage rate = (value of token received / value of token sold) - 1, so in Curve's 3pool it equals bought_token_amount / sold_token_amount - 1. The task then requires grouping the trades into sizes of 1–50, 50–500, 500–5,000, 5,000–50,000 and above 50,000. I think this is not complete, so I added a 0–1 size group. I decided to use the average slippage rate for each group, and since the slippage rate from the formula above can be positive or negative, I used its absolute value. @ne1k0 also told me I should subtract 0.04%, because the trading fee (which is 0.04%) is included in the slippage. After subtracting 0.04% I got the result and uploaded it to the Google Sheet again. The next step is the same as in the first task: creating a new data source and generating a new report. But I thought it would be better to combine it with the first report, so I just added a new page to it.
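Here is a sketch of that computation on the same dataframe. The bucket edges follow the task description plus my extra 0–1 group, and the order of operations (absolute value, then average, then fee subtraction) follows @ne1k0's advice:

```python
import numpy as np

FEE = 0.0004  # the 0.04% trading fee that is baked into the raw slippage

# Raw slippage rate per trade: bought amount over sold amount, minus one.
df["slippage"] = df["bought_amount"] / df["sold_amount"] - 1

# Bucket trades by sold size, including the extra 0-1 group.
bins = [0, 1, 50, 500, 5_000, 50_000, np.inf]
labels = ["0-1", "1-50", "50-500", "500-5,000", "5,000-50,000", ">50,000"]
df["size_group"] = pd.cut(df["sold_amount"], bins=bins, labels=labels)

# Average absolute slippage per group, with the trading fee subtracted.
avg_slippage = df.groupby("size_group")["slippage"].apply(lambda s: s.abs().mean() - FEE)
print(avg_slippage)
```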

Let's analyse the second page now. Many people might expect large trade sizes to have large slippage rates, but the report shows that the different trade sizes have roughly similar slippage rates, which proves again that Curve has low slippage. There is one very big slippage rate in the above-50,000 group, which comes from this transaction. It also happened on 14 Nov, when someone swapped 31M USDT for 17.33M USDC. The cause is the same as in the first report (the Value DeFi hack).

But I ran into some challenges with the third task: calculating whether the Curve rewards, as determined by the gauges, are consistent with the amount of liquidity. Since the Curve rewards follow the gauges' relative weights, the task can be restated as: is the gauge relative weight consistent with the amount of liquidity? The question is how to use Covalent's API to calculate the relative weight of the different pools. Taking 3pool as an example again, both @jiecut and @ne1k0 told me I could get the relative weight by calling the gauge controller contract's gauge_relative_weight function. The problem is that I can only get event log data from Covalent's API, so I would have to reconstruct every pool's weight myself and derive 3pool's relative weight from that. The gauge controller's events do show which addresses voted for which gauge at which time, but I found this too challenging and gave up. That's a pity.
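For completeness, here is what the direct contract call suggested by @jiecut and @ne1k0 would look like with web3.py. This is only a sketch: the RPC endpoint is a placeholder, and both addresses are assumptions (meant to be Curve's GaugeController and the 3pool gauge) that should be verified against Curve's documentation:

```python
from web3 import Web3

# Any Ethereum mainnet RPC endpoint works here; this URL is a placeholder.
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/YOUR_KEY"))

# Minimal ABI covering only the one function we need.
ABI = [{
    "name": "gauge_relative_weight",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "addr", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

# Assumed addresses: Curve's GaugeController and the 3pool gauge (verify before use).
GAUGE_CONTROLLER = Web3.to_checksum_address("0x2F50D538606Fa9EDD2B11E2446BEb18C9D5846bB")
THREE_POOL_GAUGE = Web3.to_checksum_address("0xbFcF63294aD7105dEa65aA58F8AE5BE2D9d0952A")

controller = w3.eth.contract(address=GAUGE_CONTROLLER, abi=ABI)
weight = controller.functions.gauge_relative_weight(THREE_POOL_GAUGE).call()
print(weight / 1e18)  # the relative weight is scaled by 1e18
```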

From this competition I learned more about Curve and Covalent. Thanks Ben: if you hadn't posted the competition news in the Curve ecosystem group, I would never have known about Covalent. Thanks Angel: you gave lots of feedback on my reports and encouraged me to apply for the ecosystem grant again. Thanks jie and ne1k0: you both always help clear up my confusion about Curve. Data analysis is still a small part of the blockchain space, but I do think it will grow rapidly in the future!
