RE: How many players did Splinterlands have in Season 162?

You are viewing a single comment's thread:

Hello @beaker007, nice to greet you. Yes, data mining can be highly addictive! If you're very curious, have lots of questions, and have data available, the effort-to-reward ratio is truly gratifying. I confess that when I decided to set up the server, I did so with many doubts, well aware of what I was getting into, but even so, my enthusiasm always far outweighed those fears.

I'm not sure why the dynamics of players in different parts of the world appeal to me so much. Questions arise like: "Are Splinterlands players global, or do they form isolated groups because of time zone differences? That is, how often do I face Asian or European players? Are we always the same players within a given time range?" And the curiosity only grows as more data accumulates. It's a shame I can't dedicate more than a few hours a week to this activity.

I wasn't familiar with your beebalanced tool; there are many things in the Hive world that escape us because there's no easy way to keep them visible to users. We'll have to do something about that!

Past data doesn't interest me as much. Even though it has a lot to say, and much can be learned from the enormous boom Splinterlands once had, I'm more interested in the current and future state of the game. As you well know, the amount of data is overwhelming, and I'm not sure the old data is worth the effort and cost. At some point I might dedicate the resources to obtain all of it, but that would be more about building a historical, foundational record of the game than about a genuine interest in analyzing that data.

What do I use to obtain the data? Basically, I have a server running Ubuntu 24.04.2 LTS on an Intel Core i3-2100 processor (4 threads), with 16 GiB of RAM, a 120 GB SSD dedicated to Splinterlands data, and a 1 Gbps internet connection.

I always work in modules and split the entire workflow into separate microservices. A Python program runs as a systemd service and is responsible for connecting to the API and extracting battles in raw format; all of that data is stored in a temporary database. A second Python script, also running as a systemd service, handles the processing and classification of all battles by season, match type, battle format, etc. This script is the system's orchestrator, creating one database per season_matchtype_format combination. For simplicity, all databases are currently SQLite. Finally, what I'm working on now is the service itself: a script that will monitor queries on my Hive account and reply to those queries with the data users request.

That's the current structure I have. The most serious problem this service faces is the frequent power outages where the server is located, the consequence of a sad reality in which we went from being a net energy-exporting country to one without enough infrastructure to sustain current consumption.
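The classification step described above can be sketched roughly like this. This is a minimal illustration, not the actual orchestrator: the field names (`id`, `season`, `match_type`, `format`) and the sample battles are assumptions for the example, not the real Splinterlands API schema.

```python
import json
import os
import sqlite3
import tempfile

def db_for(battle, base_dir):
    """Open (or create) the SQLite database named season_matchtype_format
    that this battle belongs to. Field names are illustrative assumptions."""
    name = f"{battle['season']}_{battle['match_type']}_{battle['format']}.db"
    conn = sqlite3.connect(os.path.join(base_dir, name))
    conn.execute(
        "CREATE TABLE IF NOT EXISTS battles (id TEXT PRIMARY KEY, raw TEXT)"
    )
    return conn

def classify(raw_battles, base_dir):
    """Route each raw battle into its per-season/match-type/format database."""
    for battle in raw_battles:
        conn = db_for(battle, base_dir)
        # INSERT OR IGNORE keeps re-runs idempotent if a battle is seen twice
        conn.execute(
            "INSERT OR IGNORE INTO battles VALUES (?, ?)",
            (battle["id"], json.dumps(battle)),
        )
        conn.commit()
        conn.close()

# Demo with two made-up battles; in the real pipeline these would come
# from the temporary database filled by the extractor service.
base = tempfile.mkdtemp()
classify(
    [
        {"id": "b1", "season": 162, "match_type": "ranked", "format": "modern"},
        {"id": "b2", "season": 162, "match_type": "ranked", "format": "wild"},
    ],
    base,
)
```

One nice property of this one-database-per-bucket layout is that each season's file can be archived or copied independently, which matters on a small SSD.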




Thank you for such a detailed and insightful response! 🙏

I truly appreciate the time you took to share your setup, process, and thoughts. It’s always inspiring to hear from others who are equally passionate (and slightly addicted 😅) to digging into Splinterlands data.

I agree: while past data can tell interesting stories, it's the present trends and future potential that really spark curiosity. The questions you're asking about global player behavior and time zone dynamics are fascinating.

It’s also great to see that you’re using Python for this—still one of my favorite tools for this kind of analysis.
