• I’m not sure if anybody else has been completely let down by this feature. I asked it to copy the full documentation section of a website into a single HTML file. The agent browsed through every section of the documentation, which seemed very promising, as did the status updates it displayed while working. But in the end? It sent me a tiny “getting started” section, despite having browsed all of the documentation pages. I pointed out the mistake, and it got back to work — and sent me the same HTML file. I sent the file back to demonstrate the issue; it acknowledged the problem and then produced a “documentation” that was just a brief summary of each section.

    Seriously, I’ve been waiting for an agent that can do something like this. Once again, OpenAI has given me the bluest balls that ever blued. Their only worse product launch, in my view, was Sora.

  • Hi y’all, I left the scene for a while (was broke) about two years ago, but I have some extra cash on hand now and want to support the network again. Staking back then was a pain and required a lot of command-line-fu and tinkering.

    I was wondering if there are any full-node options for Ethereum the way Bitcoin has Umbrel. I tried Dappnode, but it’s glitchy as hell and there isn’t a whole lot of support for it. I used the install script to get everything going, and I remember that after a Docker upgrade I was all of a sudden unable to restart any of my node’s containers.

  • I’ve been wondering if it’s worth consistently posting on our Google Business Profile. I know it “keeps it fresh,” but does it actually move the needle for visibility or rankings?

    We started posting four times a week about two months ago. Mostly short updates, promos, seasonal tips. Weirdly, we did notice an uptick in views and map actions.

    I use a tool that lets me queue up a month’s worth of posts ahead of time, so it’s not a hassle. I’ve definitely noticed better local reach since doing it regularly.

    Anyone else seen similar results? Or is this just a coincidence?

  • A while back, I was working on localization with Gaussian processes (GPs) and had a thought: could we encode vehicle dynamics directly into the GP kernel?

    I know GPs are used to model parameters in physical models. But my idea was that a car’s trajectory resembles a smooth GP sample. A faster car takes smoother paths, just like longer length scales produce smoother GPs. Instead of modeling `y(x)` directly, I used cumulative distance `s` as the input, and trained two separate GPs:

    * `x(s)`
    * `y(s)`

    Both use an RBF kernel. So we are basically maximizing the probability function:

    https://preview.redd.it/ksoisiw9r9ef1.png?width=430&format=png&auto=webp&s=e01f1827f3c74550f596de2ee02fe4b7d2e93178

    Which translates to something like

    *“Given a speed, how probable is it that these data points came from this vehicle?”*
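
    For readers without the image: assuming the linked formula is the standard GP log marginal likelihood (an assumption on my part), with `K_l` the RBF kernel matrix over the inputs `s`, `σ_n²` the noise variance, and `y` one of the coordinate vectors, it would read:

    ```latex
    \log p(\mathbf{y} \mid \mathbf{s}, l)
      = -\tfrac{1}{2}\,\mathbf{y}^{\top}\left(K_l + \sigma_n^2 I\right)^{-1}\mathbf{y}
        - \tfrac{1}{2}\log\left|K_l + \sigma_n^2 I\right|
        - \tfrac{n}{2}\log 2\pi
    ```

    Maximizing this over the kernel hyperparameters (here, the length scale `l`) is exactly the “given a speed, how probable is this data” question.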

    **The algorithm goes like this:**

    1. Collect data
    2. Optimize the kernel
    3. Construct the `l(v)` function
    4. Optimize the lap

    I fitted the kernel’s length scale `l` as a function of speed: `l(v)`. To do this, I recorded driving data in batches at different constant speeds, optimized the GP on each batch, then fit a simple `l(v)` relation, which turned out to be very linear.
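
    A minimal sketch of steps 1–3 with scikit-learn, using synthetic sinusoidal batches in place of the recorded driving data — every name and number here is illustrative, not the repo’s actual code:

    ```python
    # Hypothetical sketch: optimize the RBF length scale on constant-speed
    # batches, then fit a linear l(v) through the results.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def fit_length_scale(s, coord):
        """Maximize the GP marginal likelihood on one batch and return
        the optimized RBF length scale (coord is x(s) or y(s))."""
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                      n_restarts_optimizer=2)
        gp.fit(s.reshape(-1, 1), coord)
        return gp.kernel_.k1.length_scale   # k1 is the RBF term

    rng = np.random.default_rng(0)
    speeds = np.array([5.0, 10.0, 15.0, 20.0])    # m/s, one batch each
    length_scales = []
    for v in speeds:
        s = np.linspace(0.0, 100.0, 200)          # cumulative distance
        # Stand-in data: smoother wiggles at higher speed.
        coord = np.sin(5.0 * s / v) + 0.01 * rng.standard_normal(s.size)
        length_scales.append(fit_length_scale(s, coord))

    # The post reports l(v) came out close to linear; recover that line.
    a, b = np.polyfit(speeds, length_scales, deg=1)
    l_of_v = lambda v: a * v + b
    ```

    With this synthetic data the fitted length scale grows with speed, which is the relationship the post describes.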

    With the optimized kernel in hand, you can ask questions like:

    *“Given this raceline and a speed, can my car follow it?”*

    As the GP is a probabilistic model, it doesn’t give the binary answer we asked for. We could optimize for “the most likely speed” the same way we optimized the length scales. However, that would be more like asking, “What is the most likely speed at which this raceline can be achieved?”, which is fine for keeping your Tesla on the road, but not optimal for racing. My approach was instead to define an acceptable tolerance for the deviation from the raceline. With that constraint in hand, I run a heuristic window-based optimization over the given raceline.
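
    One way the tolerance-based question could be sketched — this is my reading, not the repo’s implementation: smooth the raceline with a GP whose length scale is `l(v)` and treat the smoothed path’s deviation from the raw line as the feasibility metric. All names and the deviation metric are illustrative:

    ```python
    # For each candidate speed v, fit a GP with length scale l(v) to the
    # raceline and check whether the GP-mean path stays within `tol` of
    # the raw raceline. Larger v -> longer length scale -> more
    # corner-cutting by the smoothed path.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def max_feasible_speed(s, coord, l_of_v, tol, v_grid):
        """Largest v in v_grid (scanned in increasing order) whose
        l(v)-kernel GP mean deviates from the raceline by <= tol."""
        best = None
        for v in v_grid:
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=l_of_v(v)),
                                          optimizer=None,   # kernel fixed by l(v)
                                          alpha=1e-2,       # acts as noise level
                                          normalize_y=True)
            gp.fit(s.reshape(-1, 1), coord)
            deviation = np.max(np.abs(gp.predict(s.reshape(-1, 1)) - coord))
            if deviation <= tol:
                best = v                 # assumes deviation grows with v
        return best

    # Toy raceline coordinate: a wiggly x(s).
    s = np.linspace(0.0, 50.0, 120)
    coord = np.sin(s)
    speed = max_feasible_speed(s, coord, lambda v: 0.3 * v, tol=0.1,
                               v_grid=np.array([1.0, 5.0, 10.0, 20.0]))
    ```

    A per-window version of this check, rather than one over the whole lap, would match the “heuristic window-based optimization” described above.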

    **Results?**

    Lap times from executing the plan in the simulator were close to human-driven laps. The model didn’t account for acceleration limits, so actual performance fell slightly short of the predicted plan, but I think it proved the concept.

    There are a lot of things that could be improved in the model. One of the biggest limitations is modeling the x and y coordinates independently. Some other things I also tried:

    1. Absolute angle and cumulative distance model – this models the dynamics in terms of the absolute heading angle as a function of cumulative distance. It solves the problem of correlation between the X and Y coordinates, but introduces two new problems. First, to get back from the angle domain you need to integrate, which leads to drifting errors; and even if you never go back to trajectory space, you still lose the direct link between the error definitions in the two domains. Second, this function is not entirely smooth, so you need a fancier kernel to capture the features – a Matérn at least.
    2. “Unfolding the trajectory” – this was one of my favorites, since it is the closest to the analogy of modeling y as a function of x directly, wiggly-road style. In the original domain you face a multivalued problem: for a single x-value there can be multiple y-values. You can “unfold” the lap (loop) by reducing the corner angles until the points form a single-valued function. This, however, also destroys the link to the error values of the original domain.
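
    The drift concern in option 1 is easy to demonstrate: integrate a heading signal carrying a tiny constant bias and watch the position error grow with distance travelled (purely illustrative numbers):

    ```python
    # Why going back from the angle domain hurts: a constant 1 mrad error
    # in the predicted heading turns into position drift that keeps
    # growing with distance once you integrate.
    import numpy as np

    ds = 0.1                                  # integration step (m)
    s = np.arange(0.0, 100.0, ds)             # cumulative distance
    theta_true = 0.02 * s                     # some true heading profile
    theta_pred = theta_true + 0.001           # 1 mrad constant model error

    def integrate_heading(theta):
        """Recover (x, y) from heading via cumulative integration."""
        return np.cumsum(np.cos(theta)) * ds, np.cumsum(np.sin(theta)) * ds

    x_t, y_t = integrate_heading(theta_true)
    x_p, y_p = integrate_heading(theta_pred)
    drift = np.hypot(x_p - x_t, y_p - y_t)
    # drift ends up on the order of 0.1 m here, despite the 1 mrad error
    ```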

    Here is the code and the data if you want to make it better:
    [https://github.com/Miikkasna/gpdynalgo](https://github.com/Miikkasna/gpdynalgo)

  • Hello ML/AI folks,

    I’m working on an upcoming Machine Learning in Quantitative Finance conference; my role is outreach, engaging the relevant professionals.

    While I’ve handled other events before, this field is new to me. I’d appreciate any quick tips, resources, or key concepts to get up to speed.

    I’d also welcome advice on how to approach senior people (MDs, heads of departments, chiefs, presidents) effectively in this space.

    Thanks

  • I’m doing some local SEO for a firm and came across a business with two sites, two GBPs (!), using what looks like a massive doorway page setup across several sites.

    They’ve got thousands of pages like:

    /repair-and-service-centre-stockport.php

    /repair-and-service-centre-cheadle.php

    /repair-and-service-centre-manchester.php

    Same content every time. Just the town name swapped. No real local relevance. Thin copy, no useful information. Textbook doorway spam.

    But here’s what stands out. You can type anything you like in the URL and it still works. For example, you can insert…

    /repair-and-service-centre-big-boobs.php

    /repair-and-service-centre-wife-from-thailand.php

    …or anything you want in the URL. The site just dynamically inserts whatever you put in the URL. It’s all templated. Anyone with a script could generate thousands of these in a few minutes.

    This stuff is ranking well in local results. I’ve reported it through the Google report page a few times, but nothing ever seems to happen. I don’t even know if they read them?

    How is this still working in 2025? Has Google just given up on this type of spam?

    If anyone’s curious, I put together a short list of example URLs here: [https://pastebin.com/Wd4Bq6Yt](https://pastebin.com/Wd4Bq6Yt) – it includes working manipulated URLs and examples of their broader spam setup, including the manipulated page versions and lists of their pages doing it. How do you get a spam network like this actually noticed by Google?

    I’d be interested to know if anyone’s actually managed to get something like this taken down, or if we’re all just wasting our time with the report form. And more to the point, wasting our time trying to do local SEO properly? If rubbish like this still works, we may as well all just put up sites with a few thousand spammy doorway pages full of AI slop for every customer.

  • With technology ever evolving, the future of AI in the U.S. hinges on how lawmakers and technologists address these challenges. It’s a pivotal moment for stakeholders at all levels, from policymakers to industry leaders, to collaborate in crafting regulations that foster innovation while safeguarding the public interest.

    By closely monitoring the evolution of these federal policies, states and businesses can better prepare for the opportunities and challenges that lie ahead. As we move forward, the conversation about AI regulation will remain a critical discourse, shaping the trajectory of technological progress in the nation.

  • Hey everyone,

    I’ve been working as a computer vision engineer for about 2 years, mostly doing object detection, tracking, OCR, and similar projects. Lately though, I’ve gotten more interested in NLP and I’m thinking about switching fields.

    So far I’ve been learning on my own — I’ve built a few chatbots, trained custom NER models using spaCy, and played around with Hugging Face transformers like `bert-base-cased`. I’ve also made small apps using Streamlit and FastAPI for tasks like summarization, sentiment analysis, translation, etc.

    Now I’m planning to apply for NLP jobs, but I’m not exactly sure what kind of projects would make my profile stronger. Also wondering:

    * What kinds of NLP projects would be good to showcase in a portfolio?
    * How’s the NLP job market these days? Is it better to go for more general ML roles?
    * What should I focus on when preparing for interviews — what kind of technical questions usually come up?
    * Any advice or tips from folks who’ve made a similar switch?

    Would really appreciate any suggestions or experiences you’re willing to share. Thanks!