• I am planning a semi-last-minute solo trip to Italy for the last week of August/beginning of September. There are so many incredible cities, but I have narrowed it down to the following stops:

    Fly into Rome, then to Florence, then to Cinque Terre, then I would love to end in the Bergamo/Dolomites area. Wine, good food, and great views are my goal.

    1. Public transportation seems to be great over there, but with this itinerary should I rent a car? The only spot that concerns me for parking, etc. is Cinque Terre, since I don’t plan to spend too much time in the larger cities.

    2. Any recommendations on spots that I am missing? Love the walkability, hiking aspect of many of the locations.

    3. Any recommendations on where to stay in Cinque Terre or the Dolomites area? Any amazing experiences at hotels? Any preference between hotels vs. Airbnbs?

    4. Has anyone used any app, program, watched any youtube videos that helped them learn words and phrases to help get around over there?

    Any and all advice/ recommendations are welcome! First time to Europe so incredibly excited. Thanks!!

  • In 2015, before ERC-20 existed, Ethereum’s Mist browser team quietly deployed a token called MistCoin, a prototype that was never meant to be traded.

    Sixteen days later, the ERC-20 standard was proposed. MistCoin became the spark. The prototype. The father of everything that followed.

    A Forgotten Token Reborn

    In 2022, MistCoin was wrapped for trading ($WMC), but never truly revived.

    Now, on Ethereum’s 10-year anniversary, MistCoin returns unwrapped, for the first time ever, tradeable as $MC, natively on ERC-20.

    $MC was the first ever ERC-20 token, launched as a collectible rather than a tradable asset. Until now, it could only be traded via Wrapped MistCoin.

    * The first token.
    * The first act.
    * The Adam of Ethereum.
    * The Father of ERC-20.

    **For the first time ever you can now trade MistCoin $MC unwrapped.**

    Check out the tokenomics and official links down below for more information about this gem of a project! **Don’t miss out on this!!!!!**

    **$TOKENOMICS**

    * Token Name: MistCoin
    * Token Symbol: MC
    * Network: Ethereum (ERC-20)
    * Contract Address: 0xD8f8e1bcb80373CEBDD6b6c8e0139CfFDeA9D3C6

    **Socials and Official Links:**

    * Tg: @mistcoinerc
    * X: https://x.com/mistcoinerc
    * Website: https://mistcoin.io/

    Always remember to DYOR and this is NFA. Everything written in the post is my personal opinion and not technical analysis or advice.

  • * **Interaction Detection** Automatically detect and rank pairwise or higher-order feature interactions.
    * **Model Support** Works seamlessly with LightGBM, XGBoost, CatBoost, scikit-learn, and perpetual.
    * **Performance Optimized** Fast even on deep and wide ensembles via Cython-backed internals.
    * **Visualizations** Includes a plotting module for interaction maps, importance heatmaps, feature influence charts, and more.

    **Installation**

    pip install treemind

    **One-Dimensional Feature Explanation**

    Each row in the table shows how the model behaves within a specific range of the selected feature.
    The `value` column represents the average prediction in that interval, making it easier to identify which value ranges influence the model most.

    | worst_texture_lb | worst_texture_ub | value | std | count |
    | --- | --- | --- | --- | --- |
    | -inf | 18.460 | 3.185128 | 8.479232 | 402.24 |
    | 18.460 | 19.300 | 3.160656 | 8.519873 | 402.39 |
    | 19.300 | 19.415 | 3.119814 | 8.489262 | 401.85 |
    | 19.415 | 20.225 | 3.101601 | 8.490439 | 402.55 |
    | 20.225 | 20.360 | 2.772929 | 8.711773 | 433.16 |
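
    To make the idea behind the table concrete (this is not treemind’s own API, which derives its intervals from tree split points — just a rough scikit-learn sketch of the interval-averaging concept on the same breast-cancer feature):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative only: approximate the per-interval "value" column by averaging
# the model's raw scores inside bins of one feature ("worst texture").
X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

feature = list(load_breast_cancer().feature_names).index("worst texture")
scores = model.decision_function(X)  # raw margin, analogous to a raw model output

# Bin the feature on quantile edges and summarize the score in each interval.
edges = np.quantile(X[:, feature], [0.0, 0.25, 0.5, 0.75, 1.0])
for lb, ub in zip(edges[:-1], edges[1:]):
    mask = (X[:, feature] >= lb) & (X[:, feature] <= ub)
    print(f"[{lb:7.3f}, {ub:7.3f}]  value={scores[mask].mean():+.3f}  count={mask.sum()}")
```

    Note that this marginal average conflates the feature’s effect with correlated features, which is exactly the kind of artifact a tree-decomposition approach like treemind’s is designed to avoid.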

    **Feature Plot**

    https://preview.redd.it/cbmyl38y7oef1.png?width=1189&format=png&auto=webp&s=5c7657a74bdebf5c51332ddc856f5de3d5583de9


    **Two Dimensional Interaction Plot**

    The plot shows how the model’s prediction varies across value combinations of two features. It highlights regions where their joint influence is strongest, revealing important interactions.

    https://preview.redd.it/2zb1ra5h8oef1.png?width=943&format=png&auto=webp&s=6b1149795ce202f50f47f0264013eb225e09de2c

    # Learn More

    * Documentation: [https://treemind.readthedocs.io](https://treemind.readthedocs.io)
    * Github: [https://github.com/sametcopur/treemind/](https://github.com/sametcopur/treemind/)
    * Algorithm Details: [How It Works](https://treemind.readthedocs.io/en/latest/algorithm.html)
    * Benchmarks: [Performance Evaluation](https://treemind.readthedocs.io/en/latest/experiments/experiment_main.html)

    Feedback and contributions are welcome. If you’re working on model interpretability, we’d love to hear your thoughts.

  • My understanding is that option liquidity comoves with the underlying stock’s liquidity, and such comovement should be more pronounced near expiration due to increased trading activity. How come, in the Indian options market, the expiry-day spike in option liquidity does not propagate to the underlying stock’s liquidity, which is what allowed Jane Street to manipulate it?

  • For anyone not working with deep-pocket clients — what’s been surprisingly effective for building links when you don’t have a budget for paid placements or large-scale outreach?

    I’m working on two sites where the content is solid, but authority is holding us back. Paid guest posts and link inserts are off the table, so I’m testing a few grassroots things — content repurposing, Reddit and Quora drops, niche directories, etc.

    Curious what’s been working for others who’ve been in the same boat — not looking for tools or freebies, just want to hear what creative stuff’s moving the needle lately.

  • # How much does online negativity really affect your mind? Research has found that even brief exposure to negative social media comments can trigger immediate anxiety and mood drops in adults, especially younger users

  • I know GPs are used to model parameters in physical models. But my idea was that a car’s trajectory resembles a smooth GP sample. A faster car takes smoother paths, just like longer length scales produce smoother GPs. Instead of modeling `y(x)` directly, I used cumulative distance `s` as the input, and trained two separate GPs:

    * `x(s)`
    * `y(s)`

    Both use an RBF kernel. So we are basically maximizing the probability function:

    https://preview.redd.it/ksoisiw9r9ef1.png?width=430&format=png&auto=webp&s=e01f1827f3c74550f596de2ee02fe4b7d2e93178
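
    For readers who can’t load the image, this is presumably the standard GP log marginal likelihood (my transcription; θ collects the kernel hyperparameters, here the RBF length scale, and K_θ is the kernel matrix over the cumulative distances s):

```latex
\log p(\mathbf{y} \mid \mathbf{s}, \theta)
  = -\tfrac{1}{2}\,\mathbf{y}^{\top} K_{\theta}^{-1} \mathbf{y}
    \;-\; \tfrac{1}{2}\log \lvert K_{\theta} \rvert
    \;-\; \tfrac{n}{2}\log 2\pi
```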

    Which translates to something like

    *“Given a speed, how probable is it that these data points came from this vehicle?”*

    **The algorithm goes like this:**

    1. Collect data
    2. Optimize the kernel
    3. Construct the `l(v)` function
    4. Optimize the lap

    I fitted the kernel’s length scale `l` as a function of speed: `l(v)`. To do this, I recorded driving data in batches at different constant speeds, optimized the GP on each batch, then fit a simple `l(v)` relation, which turned out to be very linear.
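
    A minimal sketch of that batch-fitting step, using scikit-learn as a stand-in for the repo’s own GP code (the synthetic data and the `fit_length_scale` helper are mine, purely for illustration — faster “driving” here just means a longer-wavelength path):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def fit_length_scale(s, x):
    """Fit a GP x(s) with an RBF kernel and return the optimized length scale."""
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0),
        alpha=1e-4,                 # small jitter for the measurement noise
        n_restarts_optimizer=3,
        normalize_y=True,
    )
    gp.fit(s.reshape(-1, 1), x)
    return gp.kernel_.length_scale

# Synthetic batches: higher speed -> smoother (longer-wavelength) path.
speeds = np.array([10.0, 20.0, 30.0])
length_scales = []
for v in speeds:
    s = np.linspace(0.0, 100.0, 200)
    x = np.sin(s / (0.5 * v)) + 0.01 * rng.standard_normal(s.size)
    length_scales.append(fit_length_scale(s, x))

# Fit the (empirically near-linear) l(v) relation.
a, b = np.polyfit(speeds, length_scales, deg=1)
```

    The slope `a` coming out positive is the “faster cars take smoother paths” observation in one number.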

    With the optimized kernel in hand, you can ask questions like:

    *“Given this raceline and a speed, can my car follow it?”*

    As the GP is a probabilistic model, it doesn’t give the binary answer we asked for. We could optimize for “the most likely speed” the same way we optimized the length scales. However, this would be more like asking, “What is the most likely speed at which this raceline can be achieved?”, which is okay for keeping your Tesla on the road, but not optimal for racing. My approach was instead to define an acceptable tolerance for the deviation from the raceline. With these constraints in hand, I run a heuristic window-based optimization for a given raceline.
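
    A sketch of that feasibility test (the function name and parameters are hypothetical, and scikit-learn again stands in for the repo’s GP code): fix the length scale implied by the speed, fit the GP to the candidate raceline, and check whether its posterior mean can stay within tolerance.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def raceline_feasible(s, x, length_scale, tol):
    """Can a GP with this length scale (i.e. this speed) track the line within tol?"""
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=length_scale),
        optimizer=None,   # keep the speed-implied length scale fixed, don't refit it
        alpha=1e-2,       # regularization so a too-smooth kernel can't just interpolate
        normalize_y=True,
    )
    gp.fit(s.reshape(-1, 1), x)
    deviation = np.abs(gp.predict(s.reshape(-1, 1)) - x)
    return bool(deviation.max() <= tol)

s = np.linspace(0.0, 100.0, 200)
tight_line = np.sin(s / 5.0)  # a wiggly raceline

slow_ok = raceline_feasible(s, tight_line, length_scale=2.0, tol=0.1)   # short l: can follow
fast_ok = raceline_feasible(s, tight_line, length_scale=50.0, tol=0.1)  # long l: too smooth
```

    The short length scale (slow car) tracks the wiggles; the long one (fast car) smooths through them and busts the tolerance, which is the binary answer the tolerance construction recovers from the probabilistic model.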

    **Results?**

    Simulator-executed lap-plan times were close to human-driven laps. The model didn’t account for acceleration limits, so actual performance fell slightly short of the predicted plan, but I think it proved the concept.

    There are a lot of things that could be improved in the model. One of the biggest limitations is modeling the x and y coordinates independently. Some other things I tried:

    1. Absolute angle and cumulative distance model – This one models the dynamics in terms of the absolute heading angle as a function of cumulative distance. It solves the problem of intercorrelation between the x and y coordinates, but introduces two new problems. First, to get back from the angle domain, you need to integrate, which leads to drifting errors. And even if you don’t want to go back to trajectory space, you still lose the direct link between the error definitions of the two domains. Second, this function is not entirely smooth, so you need a fancier kernel to capture the features. A Matérn at least.
    2. “Unfolding the trajectory” – This was one of my favorites, since it is the closest to the analogy of modeling y relation to x directly, wiggly road style. In the original domain, you would face the multivalued problem, where for a single x-value, there can be multiple y-values. One can “unfold” the lap (loop) by reducing the corner angles until you have unfolded the points to a single-valued function. This, however, also destroys the link to the original domain error values.

    Here is the code and the data if you want to make it better:
    [https://github.com/Miikkasna/gpdynalgo](https://github.com/Miikkasna/gpdynalgo)