I tried to withdraw USD from Raydium into my wallet and it just vanished. Is it normal for a withdrawal to take longer than 1 hour?
There is no sign of the withdrawal in Raydium’s withdrawal history, nor on Solscan.
(I’m fucking scared I lost it)
I’ve never seen anything like it. I mean, I know scamming is an integral part of America and an integral part of the world, but I feel like the white hot center of scamming is the PumpFun Telegram Solana confluence.
It is fun to make a token on PumpFun, but just the amount of sheer scammery that happens once you create a Telegram group, I’ve never seen anything like it.
In a way, it scares me that it’s a preview of things to come. That the 21st century will just be one long, unending scam. That any technology will be co-opted to just scam, and that the technologies that are being created facilitate scams better than they used to.
Chwowww, all I gotta say is the shit sucks and is terrible for you. Like junk food dipped in pesticides.
I am pleased to introduce [`treemind`](https://github.com/sametcopur/treemind/), a high-performance Python library for interpreting tree-based models.
Whether you’re auditing models, debugging feature behavior, or exploring feature interactions, `treemind` provides a robust and scalable solution with meaningful visual explanations.
* **Feature Analysis:** Understand how individual features influence model predictions across different split intervals.
* **Interaction Detection:** Automatically detect and rank pairwise or higher-order feature interactions.
* **Model Support:** Works seamlessly with LightGBM, XGBoost, CatBoost, scikit-learn, and perpetual.
* **Performance Optimized:** Fast even on deep and wide ensembles via Cython-backed internals.
* **Visualizations:** Includes a plotting module for interaction maps, importance heatmaps, feature influence charts, and more.
**Installation**

```shell
pip install treemind
```
**One-Dimensional Feature Explanation**
Each row in the table shows how the model behaves within a specific range of the selected feature.
The `value` column represents the average prediction in that interval, making it easier to identify which value ranges influence the model most.
| worst_texture_lb | worst_texture_ub | value | std | count |
|------------------|------------------|----------|----------|---------|
| -inf | 18.460 | 3.185128 | 8.479232 | 402.24 |
| 18.460 | 19.300 | 3.160656 | 8.519873 | 402.39 |
| 19.300 | 19.415 | 3.119814 | 8.489262 | 401.85 |
| 19.415 | 20.225 | 3.101601 | 8.490439 | 402.55 |
| 20.225 | 20.360 | 2.772929 | 8.711773 | 433.16 |
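The idea behind such a table can be sketched independently of treemind's internals: bin a feature into intervals and summarize the model's predictions per bin. This is a generic illustration with made-up names, not treemind's API; treemind derives its intervals from the trees' actual split points rather than fixed edges.

```python
import numpy as np

def interval_summary(feature, predictions, edges):
    """For each interval [edges[i], edges[i+1]), report the mean, std,
    and count of predictions whose feature value falls inside it."""
    rows = []
    for lb, ub in zip(edges[:-1], edges[1:]):
        preds = predictions[(feature >= lb) & (feature < ub)]
        rows.append({
            "lb": lb,
            "ub": ub,
            "value": preds.mean() if preds.size else float("nan"),
            "std": preds.std() if preds.size else float("nan"),
            "count": preds.size,
        })
    return rows

# Toy example: predictions rise with the feature value, so the upper
# bin should show a higher average "value" than the lower bin.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 1000)
y = 0.5 * x + rng.normal(0, 0.1, 1000)
table = interval_summary(x, y, edges=np.array([-np.inf, 5.0, np.inf]))
```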
**Feature Plot**
**Two-Dimensional Interaction Plot**
The plot shows how the model’s prediction varies across value combinations of two features. It highlights regions where their joint influence is strongest, revealing important interactions.
**Learn More**
* Documentation: [https://treemind.readthedocs.io](https://treemind.readthedocs.io)
* GitHub: [https://github.com/sametcopur/treemind/](https://github.com/sametcopur/treemind/)
* Algorithm Details: [How It Works](https://treemind.readthedocs.io/en/latest/algorithm.html)
* Benchmarks: [Performance Evaluation](https://treemind.readthedocs.io/en/latest/experiments/experiment_main.html)
Feedback and contributions are welcome. If you’re working on model interpretability, we’d love to hear your thoughts.
Hey, I had some code running that automatically bought and sold PumpFun tokens. I reverse-engineered the PumpFun website to find the WebSockets it used and pull information about the tokens.
Now everything has changed and most memecoins are on Bonkfun. How do you guys obtain a source for receiving token transactions, market cap, etc.?
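For what it's worth, one route that doesn't depend on a site's private WebSockets is Solana's public JSON-RPC WebSocket API: the `logsSubscribe` method streams logs for every transaction that mentions a given program address. A minimal sketch of building the subscription message (the program address below is a placeholder, not Bonkfun's actual program ID):

```python
import json

def logs_subscribe_msg(program_id, request_id=1, commitment="confirmed"):
    """Build a Solana JSON-RPC `logsSubscribe` request that streams
    logs for every transaction mentioning the given program address."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "logsSubscribe",
        "params": [
            {"mentions": [program_id]},
            {"commitment": commitment},
        ],
    })

# Placeholder address -- substitute the launchpad program you care about.
msg = logs_subscribe_msg("LaunchpadProgramPlaceholder1111111111111111")
# Send `msg` over a WebSocket to wss://api.mainnet-beta.solana.com
# (e.g. with the `websockets` package), parse the log notifications
# for trades, and fetch supply / market-cap data via regular RPC calls.
```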
I’ve been diving into ZK tech lately—not from a trading angle, but just out of curiosity around how scalable, private on-chain computation is evolving. Most zkVMs I come across either feel too academic or locked into Ethereum’s EVM structure.
ZKWASM caught my attention because it takes a different route: it combines WebAssembly (WASM) with zero-knowledge proofs, so developers can write in familiar languages like Rust or C++ and still get privacy-preserving execution, without rewriting their code.
That part hit home for me. As someone with a Web2 background, the barrier to entry for most zk toolchains is steep. The idea of compiling to WASM and letting the system generate zk-proofs automatically feels like a more natural bridge between the two ecosystems.
The project’s been around since 2023 and has quietly shown up at major events (ZK Summit, ETHDenver, etc.) and even has IEEE paper validation. It’s open-source, ships with docs and tutorials, and even offers a cloud hub for proof aggregation. So it’s not just whitepaper promises, it feels like real infrastructure with actual usability.
There is a token aspect too (not the focus of this post), and I saw that it’s getting listed on Bitget soon. What interests me isn’t the listing itself, but the shift it implies, maybe a move from “research mode” to “go-to-market.” Whether that works or not is a different question, but the timing is something I’m watching.
In the broader Web3 context, I think ZKWASM poses a question:
Does zk-WASM offer a more dev-friendly path forward for privacy and scalability compared to zk-EVM or bespoke circuits?
Would love to hear from anyone working with zk tools or WASM stacks, especially those experimenting with ZK outside of Ethereum’s L2 rollup scene. Is there a real demand here, or is this too far ahead of its time?
As a math major, I was interested in seeing what different fields of mathematical research look like. I decided to just browse arXiv, but I can’t help but notice the difference between the stat.ML and cs.LG sections.
From my understanding, they are both supposed to be about machine learning research, but what I found was that many of the cs.LG articles applied ML to novel scenarios instead of actually researching new mathematical/statistical models. Why are these considered ML research if they are not researching ML but using it?
Does this reflect a bigger divide within the machine learning research field? Are there fields in ML that are better suited to people interested in math research? If so, are those generally hosted in the math/stats department, or still under the CS department?
A while back, I was working on localization with GPs and had a thought: could we encode vehicle dynamics directly into the GP kernel?
I know GPs are used to model parameters in physical models. But my idea was that a car’s trajectory resembles a smooth GP sample. A faster car takes smoother paths, just like longer length scales produce smoother GPs. Instead of modeling `y(x)` directly, I used cumulative distance `s` as the input, and trained two separate GPs:
* `x(s)`
* `y(s)`
Both use an RBF kernel. So we are basically maximizing the probability function:
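The equation image did not survive the post, but given the setup (independent GPs over cumulative distance `s` with an RBF kernel), the quantity being maximized is presumably the standard GP log marginal likelihood in the kernel hyperparameters θ:

```latex
\log p(\mathbf{y} \mid \mathbf{s}, \theta)
  = -\tfrac{1}{2}\,\mathbf{y}^{\top} K_\theta^{-1}\,\mathbf{y}
    \;-\; \tfrac{1}{2}\log\lvert K_\theta\rvert
    \;-\; \tfrac{n}{2}\log 2\pi
```

where `K_θ` is the RBF kernel matrix evaluated at the cumulative distances, and the same form applies to `x(s)`.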
Which translates to something like
*“Given a speed, how probable is it that these data points came from this vehicle?”*
**The algorithm goes like this:**
1. Collect data
2. Optimize the kernel
3. Construct the `l(v)` function
4. Optimize the lap
I fitted the kernel’s length scale `l` as a function of speed: `l(v)`. To do this, I recorded driving data in batches at different constant speeds, optimized the GP on each batch, then fit a simple `l(v)` relation, which turned out to be very linear.
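The per-batch fitting and the linear `l(v)` fit can be sketched like this, with synthetic data and scikit-learn standing in for the original pipeline (the repo linked below has the real thing):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fitted_length_scale(s, x):
    """Fit a GP x(s) with an RBF kernel on one constant-speed batch
    and return the marginal-likelihood-optimized length scale."""
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0), alpha=1e-4, n_restarts_optimizer=2
    )
    gp.fit(s.reshape(-1, 1), x)
    return gp.kernel_.length_scale

# Synthetic batches: higher speed -> smoother path -> longer length scale.
rng = np.random.default_rng(1)
speeds = [5.0, 10.0, 15.0, 20.0]
scales = []
for v in speeds:
    s = np.linspace(0, 100, 200)
    x = np.sin(s / v) + rng.normal(0, 0.01, s.size)  # wiggliness ~ 1/v
    scales.append(fitted_length_scale(s, x))

# Linear fit l(v) = a*v + b, which the post reports came out very linear.
a, b = np.polyfit(speeds, scales, 1)
```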
With the optimized kernel in hand, you can ask questions like:
*“Given this raceline and a speed, can my car follow it?”*
As the GP is a probabilistic model, it doesn’t give the binary answer we asked for. We could optimize for the most likely speed the same way we optimized the length scales, but that would be more like asking, “What is the most likely speed at which this raceline can be achieved?”, which is fine for keeping your Tesla on the road but not optimal for racing. My approach was instead to define an acceptable tolerance for deviation from the raceline. With these constraints in hand, I run a heuristic window-based optimization for a given raceline.
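The feasibility question can be scored with a minimal sketch under the same assumptions (independent coordinate GPs, linear `l(v)`; the `l_of_v` coefficients here are made up, not the fitted ones): evaluate the average per-point GP log-likelihood of a raceline under the speed-matched kernel and compare it to a tolerance.

```python
import numpy as np

def avg_loglik(s, x, v, l_of_v, noise=1e-4):
    """Average per-point GP log-likelihood of a raceline coordinate
    x(s) at speed v, using the fitted length-scale map l(v).
    Feasibility check: compare this score against a chosen tolerance."""
    d = s[:, None] - s[None, :]
    K = np.exp(-0.5 * (d / l_of_v(v)) ** 2) + noise * np.eye(s.size)
    _, logdet = np.linalg.slogdet(K)
    ll = (-0.5 * x @ np.linalg.solve(K, x)
          - 0.5 * logdet
          - 0.5 * s.size * np.log(2 * np.pi))
    return ll / s.size

# Toy comparison: at the same speed, a smooth raceline scores far
# better than a wiggly one under the speed-matched kernel.
s = np.linspace(0, 50, 100)
l_of_v = lambda v: 0.4 * v               # hypothetical linear l(v) fit
smooth = 0.1 * np.sin(s / 20)
wiggly = 0.1 * np.sin(s * 2.0)
ll_smooth = avg_loglik(s, smooth, 20.0, l_of_v)
ll_wiggly = avg_loglik(s, wiggly, 20.0, l_of_v)
```

A window-based optimizer would then raise the speed per window until this score drops below the tolerance.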
**Results?**
Simulator-executed lap-plan times were close to human-driven laps. The model didn’t account for acceleration limits, so actual performance fell slightly short of the predicted plan, but I think it proved the concept.
There are a lot of things that could be improved in the model. One of the biggest limitations is the independent models for x and y coordinates. Some of the things I also tried:
1. Absolute angle and cumulative distance model – This one considers the dynamics in terms of the absolute heading angle with respect to cumulative distance. This solves the problem of intercorrelation between X and Y coordinates, but introduces two more problems. First, to go back from the angle-domain, you need to integrate. This will lead to drifting errors. And even if you don’t want to go back to trajectory space, you still lose the direct link between the error definition of the two domains. And second, this function is not entirely smooth, so you need a fancier Kernel to capture the features. A Matérn at least.
2. “Unfolding the trajectory” – This was one of my favorites, since it is the closest to the analogy of modeling y relation to x directly, wiggly road style. In the original domain, you would face the multivalued problem, where for a single x-value, there can be multiple y-values. One can “unfold” the lap (loop) by reducing the corner angles until you have unfolded the points to a single-valued function. This, however, also destroys the link to the original domain error values.
Here is the code and the data if you want to make it better:
[https://github.com/Miikkasna/gpdynalgo](https://github.com/Miikkasna/gpdynalgo)