Update on CovrPrice relative values

Hi. So as we all know, one of the quirks of CovrPrice is that you can sometimes have books show as worth more in lower grades. This is because each grade range is considered a separate thing in terms of how sales are applied.

I know there was some conversation about how to maybe apply some additional logic to the values that would account for this, but it’s been a while since I’ve heard any updates.

It came to mind because I’ve been thinking about how during the comics “bubble” of 2020-2021, people were trading comics in many grades. Now that we’re out of that, prices are going down, and with that probably comes a reduction in the number of individual-comic sales (which is what CovrPrice can track) in lower grades. If I have a bunch of VF raw books, I might be more likely to sell them in lots now, for example. Which means those higher values for VF from 2020-2021 are less likely to get updated.

Anyway, just wondering if any progress had been made in that area. Thanks!

Providing more of a predictive view of values, as opposed to an actual view of sales - where there’s more of a logical price curve that attempts to incorporate not only grade, but age of sale, volume of sales, prices of similarly conditioned unsold copies, etc. - is definitely near the top of our list of features to deliver. The problem is no one has figured out a near real-time solution that takes into account myriad data sources and literally 30 different grades across raw and slabbed. We have quite a few ideas we’ve been cultivating for years on how to accomplish this, but it comes down to time, money, resources and priorities.

Right now - on top of keeping the site humming along and filled with new content and sales data - our top priority is finishing a major database overhaul that will enable some next-level data analysis, filtering and sorting around creator and key info, as well as various other comic-related metadata. The new database touches every corner of our solution, so we must get that right. That will open the door to quite a few other big features - including the predictive pricing model.

I know you’d much rather hear that predictive pricing is coming in a month (or less), but rest assured - we talk about it constantly and keep adding to our plans for how to build it (but also not confuse people with actual sales data vs the more predictive value information).

Just to throw it out there, here’s a perspective.

So as an example we’ve got Cosmic Powers #1.
Cosmic Powers Vol.1 #1 - CovrPrice

For those who don’t want to bounce away it’s currently sitting at:
NM+ $4 (03/01/21)
NM $4 (01/19/25)
VF $10 (03/06/25)
FN $8 (01/01/25)
VG $1 (02/09/25)
NG $3 (04/25/24)

So this is a typical example, although I will admit I thought it was usually prices coming down for more recent sales, but this should still work.

So somewhere “in memory” you could keep these numbers. But before presenting them*, there would be one last check to confirm that the numbers descend from top to bottom (ignoring No Grade). If they don’t, the numbers need to be adjusted.
That most recent $10 sale for VF would therefore mean that VF, NM, and NM+ all need to be updated to $10, since anybody who would pay $10 for a VF would pay $10 for a NM or NM+ too, if given the opportunity. So then you’d actually move forward with:

NM+ $10
NM $10
VF $10
FN $8
VG $1
NG $3
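In code terms, the adjustment described above amounts to a single running-max pass from the lowest grade up. Here’s a minimal sketch (hypothetical Python, not anything CovrPrice actually runs; the grade order and values are from the example above):

```python
# Hypothetical sketch: enforce that no higher grade is valued below a
# lower grade. Values are ordered highest grade first (NM+ ... VG),
# with No Grade excluded as suggested above.
def enforce_descending(values):
    """Raise each grade to at least the max of every lower-grade value."""
    adjusted = list(values)
    running_max = 0
    # Walk from the lowest grade upward, carrying the running max.
    for i in range(len(adjusted) - 1, -1, -1):
        running_max = max(running_max, adjusted[i])
        adjusted[i] = running_max
    return adjusted

# NM+ $4, NM $4, VF $10, FN $8, VG $1 from the Cosmic Powers #1 example
print(enforce_descending([4, 4, 10, 8, 1]))  # [10, 10, 10, 8, 1]
```

One pass reproduces the adjusted list: the $10 VF sale pulls NM and NM+ up to $10, while FN and VG are untouched.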

Not sure if I’m over-simplifying - feel free to “stump me” by providing a more difficult example.

Appreciate your listening.

  • I say “presented” to mean on both the CovrPrice site and CLZ, OR arguably this could just apply to what goes to CLZ. I’m a CLZ user so that’s technically more my priority. And since CLZ only gets the one value $ rather than a more comprehensive view, this sort of “fix” might arguably be more necessary there.

There are various challenges with programmatically setting values. I can speak to your example but then list myriad others.

In the example you provided, the most recent recorded sale was from 3/6/2025, in VF, and it was for $10. Based on your logic I can see how you’d think setting $10 for the NM and NM+ buckets would make a lot of sense. In a vacuum, it would. But there are other factors you’d need to consider, or you’re really just shaping a price curve to make it look logical when in fact that NM $4 value is probably a lot closer to the actual value. If you look on eBay right now, you can get a solid copy - better than VF - for less than $2 (plus shipping, but that’s another factor to potentially consider). So while the actual value for a NM is more like $2-4, relying on just one most-recent sale to reset the entire curve would throw the whole curve off.

So how many sales do you need at a given grade before you reset the price curve? What if a $5 FN sale came in this week and a $3 NM sale? Which one dictates how the curve should look?

Does that mean CP data is crap? I’d argue - no. These are all confirmed sales, sales that didn’t trigger any of our red flags; but it shows the actual variability for low dollar books, as well as the variability you can find across platforms. That $10 sale was a VF grade raw from MCS, and maybe the buyer just didn’t care to look at copies on eBay available for $2 because:

  • they were already making a purchase on MCS and wanted to grab a few more items off their wish list,
  • they don’t really care if a book is $2 or $10 - maybe the convenience drove them more than the cost,
  • maybe they have a free shipping deal if they buy $x in books,
  • maybe they trust MCS grading a lot more than other places and have historically bought VF’s there and got NM+’s, so they were willing to pay the slightly heightened price,
  • maybe they trust MCS packing/shipping more than a random eBay seller, etc.

As for the various challenges - raised by this relatively simple example, but also by more complex ones:

  • How many sales at a given grade are enough to cause the price curve to change?
  • How old can a sale be and still influence a price curve?
  • Can a raw sale influence the slabbed price curve and vice versa?
  • What if you have very little data, maybe a sale from 6 months ago at VG for $100 and a sale from 5 days ago at VF for $40 and a GD sale from today at $65 - how does that impact the price curve?
  • Should you factor in asking prices on unsold books?
  • What if the asking price is very high?
  • What if the asking price is very low compared to recently recorded sales?
  • Does the fact that one sale was a BIN and another sale an auction play a role in shifting a price curve?
  • Does the source of the sale matter (MCS/Heritage vs random 92% feedback seller on eBay)?
  • Does a Golden age filler book price curve look different than a Modern filler book? What about a Bronze age Key vs a Silver age filler?
  • Does rarity play a role?

So while I agree there are a lot of examples where fitting values to a logical price curve makes a lot of sense (maybe you have recent data at $10 for a NM and $5 for a FN, but no VF data, so you set that VF at something like $7 and call it a day), as soon as new data starts coming in, or there’s only data at one end of the grading scale, or the data is older, or limited, or conflicting data comes in from various sources - now do that for over 700K books across 30 grades many times a day - it gets REALLY hard to programmatically set values.
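The “set that VF at something like $7” idea can be sketched as linear interpolation between the nearest grades that do have data. This is a hypothetical illustration (9.4/8.0/6.0 are the standard NM/VF/FN numeric grades; nothing here reflects CovrPrice’s actual model):

```python
# Hypothetical sketch: estimate a missing grade's value by linear
# interpolation between the nearest graded neighbors with sales data.
def interpolate_grade(known, target):
    """known: {numeric_grade: price}; target: numeric grade to estimate."""
    lower = max((g for g in known if g < target), default=None)
    upper = min((g for g in known if g > target), default=None)
    if lower is None or upper is None:
        return None  # no neighbor on one side - can't interpolate
    frac = (target - lower) / (upper - lower)
    return known[lower] + frac * (known[upper] - known[lower])

# NM (9.4) at $10 and FN (6.0) at $5, estimating VF (8.0)
est = interpolate_grade({9.4: 10.0, 6.0: 5.0}, 8.0)
print(round(est, 2))  # 7.94
```

That lands right around the “$7 and call it a day” ballpark - and it falls apart the moment the only data sits at one end of the grading scale, which is exactly one of the challenges listed above.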

Hope that helps in just brushing the surface on the challenge of predictive price models.


Just another couple cents on top of my 2 cents:

  • How many sales at a given grade are enough to cause the price curve to change? - however many sales are enough to change the value for that grade. The $10 was enough to change the VF so why not the rest? If we are concerned the $10 was not valid then it should not be considered for the VF.

  • How old can a sale be and still influence a price curve? - same age as it can be to influence a single grade level.

  • Can a raw sale influence the slabbed price curve and vice versa? Only if you’re doing it for individual grade levels.

  • What if you have very little data, maybe a sale from 6 months ago at VG for $100 and a sale from 5 days ago at VF for $40 and a GD sale from today at $65 - how does that impact the price curve? - everything Good and up goes to $65.

  • Should you factor in asking prices on unsold books? No.

  • What if the asking price is very high? N/A

  • What if the asking price is very low compared to recently recorded sales? N/A

  • Does the fact that one sale was a BIN and another sale an auction play a role in shifting a price curve? Does it currently when figuring an individual grade?

  • Does the source of the sale matter (MCS/Heritage vs random 92% feedback seller on eBay)? Does it currently when figuring an individual grade?

  • Does a Golden age filler book price curve look different than a Modern filler book? What about a Bronze age Key vs a Silver age filler? No.

  • Does rarity play a role? No.

I think you’re making a broader distinction than I am between setting a value for an individual grade and using that same data to influence the other grades.

I actually don’t support the “$7 and call it a day” model - I agree that values should be based on sales - but I just don’t want to ignore the notion that people, when presented with the same price for a VF and a NM, would take the NM.


We make a notable distinction between reporting the sales and predictive pricing. When you’re just reporting sales, that’s all you’re doing - so a single sale is just another data point.

When you’re doing predictive pricing and potentially influencing every grade that’s a much bigger deal - you don’t want to be shifting every grade’s value based on just one data point that could be an outlier.

It’s a similar situation with age of sale - when it’s just reporting the sales, that’s one thing. But when a sale could potentially influence an entire price curve, you want more data before making those types of programmatic decisions.

I get the logic - if a lower grade book goes for X, then every grade above that should at least go for X. But what if instead you have 100 very recent data points that say a NM copy is $40, and then a GD comes in at $100. Do you completely disregard that very strong signal that a NM copy is $40 for one single data point? I’d say definitely not. And that’s where it gets tricky. What if it’s 50 data points at NM? 25? 10? What if it’s 10 over the course of the past 6 months but one data point showing $40 from this past week? It might be a lot simpler to just shift the entire curve - it might make you feel better from a surface-level standpoint - but it’s really not representative of the actual market for the book.
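One simple way to frame that question in code: only let a single lower-grade sale pull a higher grade up when the higher grade lacks enough recent evidence of its own. The threshold below is a made-up illustration of the tradeoff, not a recommendation:

```python
# Hypothetical sketch: gate curve-shifting on how much recent evidence
# the higher grade already has. `min_support` is an arbitrary knob.
def outlier_can_shift(recent_higher_grade_sales, min_support=5):
    """Return True if a single lower-grade sale may pull this grade up."""
    return len(recent_higher_grade_sales) < min_support

# 100 recent NM sales around $40: one $100 GD sale should not move NM
print(outlier_can_shift([40] * 100))  # False
# Almost no NM data: the lower-grade sale carries relatively more weight
print(outlier_can_shift([40]))        # True
```

Of course this just pushes the hard question into `min_support` - which is exactly the 50/25/10 data-points problem described above.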

Writing off asking prices entirely is problematic as well. If you can readily buy that book online at that same grade for $2 when the last recorded sale is $50, it’s HIGHLY likely the value of that book has gone way down and just no one is interested enough to even buy it at $2. While it shouldn’t necessarily influence all values, it’s something to be considered in predictive pricing.

The price curves of different age books, whether they are key, etc. - are definitely different. This type of thing is crucial in providing predictive values for books that just don’t have a lot of data. If you only have a few data points, filling in those blanks in a realistic way is heavily dependent on an age/rarity/key adjusted curve.

So I get what you’re saying about the general concept of, if a lower grade sold for x, every higher grade should at least be x. But I’d argue that while that makes you feel good about the curve - that it seems logical - it would actually be doing a great disservice to folks trying to rely on that predictive valuation.

Appreciate you engaging on the topic, as it’s definitely a tricky one and one we’ve been thinking a lot about over the years. Thanks again!