• I was really looking forward to finally earning some yield on my crypto stack, but now I’m stuck not knowing how much longer I’ll have to wait.

    Is there any legal or easy workaround for this? Maybe through a platform that still supports it or something like self-custodial staking?

    Also, has there been any update recently on whether these restrictions might be lifted? Would love to hear from anyone in the same boat or who found a good solution.

  • Jane Street is often seen as the gold standard in trading: top infra, top talent, massive volume. But they’ve been tied to questionable practices (e.g., alleged market manipulation in India, early SBF connections), and their business model is arguably just high-frequency rent-seeking.

    Yet in quant circles, they rarely face pushback. Why is that? Is it just respect for execution, or are we overlooking real ethical concerns in favor of performance? Curious what others here think.

  • Wrote a quick essay on ETH’s main value proposition: Programmable Money.

    There’s a little too much focus on ETH as a better Bitcoin, when IMO its value goes beyond that once you understand that it’s also programmable – something Bitcoin can never be. The essay expands on this idea and on why ETH gains value as more money moves on-chain.

  • * `y(s)`

    Both use an RBF kernel. So we are basically maximizing the probability function:

    https://preview.redd.it/ksoisiw9r9ef1.png?width=430&format=png&auto=webp&s=e01f1827f3c74550f596de2ee02fe4b7d2e93178

    Which translates to something like

    *“Given a speed, how probable is it that these data points came from this vehicle?”*
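    The linked image isn’t reproduced here, but for GPs with RBF kernels the quantity being maximized is typically the GP marginal likelihood of the data under a speed-dependent length scale. A hedged reconstruction (the symbols below are my assumption, not the original notation):

```latex
p(\mathbf{y} \mid v) \;=\; \mathcal{N}\!\left(\mathbf{y};\ \mathbf{0},\ K_{l(v)} + \sigma_n^2 I\right)
```

    where $K_{l(v)}$ is the RBF Gram matrix built with length scale $l(v)$ and $\sigma_n^2$ is the observation noise variance.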

    **The algorithm goes like this:**

    1. Collect data
    2. Optimize the kernel
    3. Construct the `l(v)` function
    4. Optimize the lap

    I fitted the kernel’s length scale `l` as a function of speed: `l(v)`. To do this, I recorded driving data in batches at different constant speeds, optimized the GP on each batch, then fit a simple `l(v)` relation, which turned out to be very linear.

    With the optimized kernel in hand, you can ask questions like:

    *“Given this raceline and a speed, can my car follow it?”*

    As the GP is a probabilistic model, it doesn’t give the binary answer we asked for. We could optimize for “the most likely speed” the same way we optimized the length scales. However, that would be more like asking, “What is the most likely speed at which this raceline can be achieved?”, which is fine for keeping your Tesla on the road, but not optimal for racing. My approach was instead to define an acceptable tolerance for deviation from the raceline. With these constraints in hand, I run a heuristic window-based optimization for a given raceline.
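    A minimal sketch of the window-based idea, assuming a per-window deviation model derived from the GP — the interface and the toy deviation curves here are hypothetical, not the repo’s actual code:

```python
import numpy as np

def max_speed_in_window(deviation, v_grid, tol):
    """Return the highest candidate speed whose predicted deviation from
    the raceline stays within tolerance. In the real pipeline, deviation(v)
    would come from the fitted GP's predictive error, not a toy curve."""
    feasible = [v for v in v_grid if deviation(v) <= tol]
    return max(feasible) if feasible else v_grid[0]

def plan_lap(windows, v_grid, tol):
    """Pick one speed per window; a real planner would also smooth
    speed transitions between adjacent windows."""
    return [max_speed_in_window(dev, v_grid, tol) for dev in windows]

# Toy stand-in: deviation grows quadratically with speed, scaled per corner.
windows = [lambda v, k=k: k * v**2 for k in (0.01, 0.04, 0.02)]
v_grid = np.linspace(1.0, 30.0, 60)
speeds = plan_lap(windows, v_grid, tol=4.0)
```

    Tighter corners (larger deviation coefficients) get lower planned speeds, which is the behavior the tolerance constraint is meant to produce.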

    **Results?**

    Lap times for plans executed in the simulator were close to human-driven laps. The model didn’t account for acceleration limits, so actual performance fell slightly short of the predicted plan, but I think it proved the concept.

    There are a lot of things that could be improved in the model. One of the biggest limitations is using independent models for the x and y coordinates. Other approaches I tried:

    1. Absolute angle and cumulative distance model – This one considers the dynamics in terms of the absolute heading angle as a function of cumulative distance. It solves the problem of intercorrelation between the X and Y coordinates, but introduces two new problems. First, to get back from the angle domain, you need to integrate, which leads to drift errors. And even if you don’t want to go back to trajectory space, you still lose the direct link between the error definitions of the two domains. Second, this function is not entirely smooth, so you need a fancier kernel to capture the features – a Matérn at least.
    2. “Unfolding the trajectory” – This was one of my favorites, since it is the closest to the analogy of modeling y relation to x directly, wiggly road style. In the original domain, you would face the multivalued problem, where for a single x-value, there can be multiple y-values. One can “unfold” the lap (loop) by reducing the corner angles until you have unfolded the points to a single-valued function. This, however, also destroys the link to the original domain error values.
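    The parametrization in item 1 can be sketched as a round trip between the two domains — a toy example with hypothetical helper names; note that the drift problem arises when you integrate a *modeled* (smoothed) heading, which this exact round trip does not exhibit:

```python
import numpy as np

def to_angle_domain(x, y):
    """Convert an (x, y) trajectory to absolute heading vs cumulative distance."""
    dx, dy = np.diff(x), np.diff(y)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(dx, dy))])  # cumulative distance
    theta = np.arctan2(dy, dx)                                # absolute heading per segment
    return s, theta

def from_angle_domain(s, theta, x0, y0):
    """Integrate heading back to trajectory space; errors in theta accumulate here."""
    ds = np.diff(s)
    x = x0 + np.concatenate([[0.0], np.cumsum(ds * np.cos(theta))])
    y = y0 + np.concatenate([[0.0], np.cumsum(ds * np.sin(theta))])
    return x, y

t = np.linspace(0.0, 2.0 * np.pi, 400)
x, y = np.cos(t), np.sin(t)            # a unit-circle "lap"
s, theta = to_angle_domain(x, y)
xr, yr = from_angle_domain(s, theta, x[0], y[0])
```

    With exact segment headings the round trip is lossless; any GP smoothing of `theta` would instead accumulate through the cumulative sums, which is the drift the post describes.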

    Here is the code and the data if you want to make it better:
    [https://github.com/Miikkasna/gpdynalgo](https://github.com/Miikkasna/gpdynalgo)

  • I am currently coordinating a group trip to Japan, March 12–25, 2024! It takes place during cherry blossom season, and the trip will include airfare and accommodations, with visits to several places in Japan: Tokyo, Hiroshima, Kyoto, Osaka, Mt Koya, and Hakone.

    – Breakfast included each day
    – 6 dinners included
    – Round-trip airfare and transfers
    – Expert tour directors
    – Entrances to shrines, temples, and other local attractions
    – Nara monastery experience for an additional fee

    Payment plans available, plus 200 off with a code (DM for the code).

    If you’d like to hear more, just PM me and I’d be happy to explain. It’s through an accredited travel company and is super fun and affordable!

  • Untitled Post
  • While I’ve handled other events before, this field is new to me. I’d appreciate any quick tips, resources, or key concepts to get up to speed.

    Also, I’d appreciate any advice on how to approach senior roles (MDs, Heads of Departments, Chiefs, Presidents) effectively in this space.

    Thanks

  • https://preview.redd.it/4cjz7qwcb2ef1.png?width=1192&format=png&auto=webp&s=959dd70368ec15d4f607486dc464cc339d691a9e

    - No central coordinator
    - Nodes train locally on custom data shards
    - Aggregation (e.g., FedAvg) happens across verifiable nodes
    - All results are hash-verified before acceptance
    - Decentralized, Docker-native FL infra
    - Ideal for research on non-IID data, private datasets, or public benchmark tasks
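    For readers unfamiliar with the pieces, here is a minimal sketch of hash-verified FedAvg (NumPy only; the function names and the `ledger` dict are illustrative, not the project’s actual API):

```python
import hashlib
import numpy as np

def weight_hash(weights):
    """Deterministic digest of a model update so nodes can verify
    what they received before aggregating."""
    h = hashlib.sha256()
    for w in weights:
        h.update(np.ascontiguousarray(w).tobytes())
    return h.hexdigest()

def fed_avg(updates, sizes):
    """FedAvg: average per-node updates, weighted by local dataset size."""
    total = sum(sizes)
    return [
        sum(n * u[i] for u, n in zip(updates, sizes)) / total
        for i in range(len(updates[0]))
    ]

# Toy round: two nodes, one weight matrix each, hashes checked on receipt.
node_a = [np.ones((2, 2))]
node_b = [3.0 * np.ones((2, 2))]
ledger = {0: weight_hash(node_a), 1: weight_hash(node_b)}  # announced hashes
assert weight_hash(node_a) == ledger[0]  # accept only verified updates
global_w = fed_avg([node_a, node_b], sizes=[100, 300])
```

    With sizes 100 and 300, the aggregated weight is (100·1 + 300·3)/400 = 2.5 in every entry — the size-weighted mean that FedAvg computes.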

    Project:
    GitHub – [https://github.com/theblitlabs](https://github.com/theblitlabs)
    Docs – [https://blitlabs.xyz/docs](https://blitlabs.xyz/docs)

    We’re college devs building a trustless alternative to AWS Lambda for container-based compute, federated learning, and LLM inference.

    Would love feedback or help. Everything is open source and permissionless.