• Hey fellow travelers,

    I just wanted to share my recent experience booking with SparrowBid and let you know how amazed I am with their service. I had been searching for a unique and affordable hotel for my upcoming trip, and that’s when I stumbled upon SparrowBid.

    Their auction hotel model intrigued me, so I decided to give it a try. I placed a bid on a luxury hotel in my desired destination and crossed my fingers. To my delight, I won the bid and secured an incredible deal on a top-notch hotel!

    The entire process was smooth and straightforward. SparrowBid’s platform was user-friendly, and I could easily navigate through the available options. The bidding system was exciting, and I found it to be a fun way to get a great price on a high-quality hotel.

    But it doesn’t stop there. The customer service I received from SparrowBid was outstanding. From the moment I made my booking, I had a dedicated travel advisor who guided me through the entire journey. They were responsive, knowledgeable, and went above and beyond to ensure I had a seamless experience.

    Not only did SparrowBid help me save a significant amount of money, but they also provided excellent recommendations and tips for my destination. They truly personalized my trip and made me feel valued as a customer.

    If you’re looking for a unique and affordable way to book your next hotel, I highly recommend checking out SparrowBid. Their auction hotel model is a game-changer, and their customer service is top-notch. Trust me, you won’t be disappointed!

    Happy travels!

    sparrowbid.com

  • Hi folks,

    I run a (B2B/industrial) website that I built myself—call it “vibe-coded” with some programming chops + AI guidance.

    **What I’ve already done (AI-assisted):**

    * On-page basics: titles, metas, H1/H2 hierarchy, keyword targeting
    * Internal link structure & topical clusters (at least a first pass)
    * XML sitemap + robots.txt + Search Console setup
    * Basic schema (Organization/Product/FAQ)
    * Core Web Vitals fixes via Lighthouse/GSC suggestions
    * Content cadence: a couple of blog posts/month, intent-focused, AI-drafted then edited by me
    * Google Ads: set up campaign, negative keywords, conversion tracking, landing pages

    **Time & Results:**

    * \~2 hours/week on SEO/Ads combined
    * Rankings: slow upward trend for core keywords (from page 3–4 to 1–2 for some long tails)
    * Ads: acceptable CPC/CPA (for now), conversions trickling in

    **The Big Question:**
    What does a *good* SEO agency or consultant typically do that I’m *not* doing (or can’t replicate with AI + elbow grease)? At what point do I pull in a pro?

    Not trying to self-promote—just genuinely want to understand the delta between “DIY + AI” and “pro-grade” SEO.

    Thanks!

  • Thank you to the mods for continuing to support the bi-weekly job listings post.

    As a reminder, these are not my listings. All applications must be submitted through the proper channels listed in each job description below.

    [Sr. Technical SEO Manager \~ Gravity Global \~ Remote (UK)](https://www.seojobs.com/job/sr-technical-seo-manager-gravity-global/)

    [SEO Manager (Technical, AI, Content) \~ Helzberg \~ Hybrid (Kansas City, US)](https://www.seojobs.com/job/seo-manager-helzberg/)

    [SEO & GEO Growth Marketing Manager \~ Firecrawl \~ $70-100k \~ Remote (US)](https://www.seojobs.com/job/seo-geo-growth-marketing-manager-firecrawl/)

    [SEO Product Manager (ASO, AI, Content) \~ Pizza Hut \~ $122.1-160k \~ Remote (US)](https://www.seojobs.com/job/seo-product-manager-aso-ai-content-pizza-hut/)

    [Principal, SEO & Generative AI Search Strategy \~ Chime \~ $146-207k \~ Hybrid, San Francisco CA (US)](https://www.seojobs.com/job/principal-seo-generative-ai-search-strategy-chime/)

    [SEO Director (AEO, GEO, LLM) \~ Argano \~ $136-160k \~ Remote (US)](https://www.seojobs.com/job/seo-director-aeo-geo-llm-argano/)

    [SEO & AI Search Director \~ Animalz \~ $100-140k \~ Remote (WW)](https://www.seojobs.com/job/director-of-seo-ai-search-animalz/)

    [Search & Discovery Growth Manager (SEO, GEO, AI) \~ Johnson Outdoors \~ Remote (US)](https://www.seojobs.com/job/search-discovery-growth-manager-seo-geo-ai-johnson-outdoors/)

    [SEO Manager (AI, AEO) \~ Vanta \~ $119-140k \~ Remote (US)](https://www.seojobs.com/job/seo-manager-ai-aeo-vanta/)

    [Sr. Content Marketing Manager (SEO, AI) \~ Authentic8](https://www.seojobs.com/job/sr-content-marketing-manager-seo-ai-authentic8/)

    [Lead Web & Organic Growth Strategist (AI) \~ Upstart \~ $130.8-181.1k \~ Remote (US)](https://www.seojobs.com/job/lead-web-organic-growth-strategist-ai-upstart/)

  • AMD Radeon AI PRO R9700

    Hey y’all. The R9700 was supposedly launched yesterday, but I couldn’t find any reviews or listings online for it, outside of one company that had a “request a quote” button instead of an actual price. So I kept digging and found Velocity Micro’s blog post, which is from yesterday. I’ve never heard of them before, but they appear to be a well-established System Integrator/boutique PC builder.

    In their blog post, they compared the RTX 5080 and the R9700’s AI Inference performance using Phi 3.5 MoE Q4, Mistral Small 3.1 24B Instruct 2503 Q8, Qwen 3 32B Q6, and DeepSeek R1 Distill Qwen 32B Q6. The results are shown in the screenshot above.

    Now, I’ll freely admit I’ve been an AMD fan for a long time (RX590 with ROCm 6.3 says hi), but those performance figures are **heavily** biased towards the R9700. There are two big, glaring issues here:

    1. No concrete tokens per second performance figures were presented, only relative performance uplift in percentage.

    2. ALL of the models used in the benchmark don’t fit within the RTX 5080’s 16GB VRAM buffer.

    That completely defeats the point of the benchmark lol. None of those models fully fit within the 5080’s VRAM, so God knows how many layers were offloaded to the CPU.
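    To put numbers on point 2, here's a back-of-envelope check for the three dense models in the lineup. This counts weights only (the KV cache and activations add several more GB on top, so these are optimistic lower bounds), and treats QN as a flat N bits per weight, ignoring quantization-format overhead:

    ```python
    # Back-of-envelope VRAM check for the three dense models in the benchmark.
    # Weights only -- KV cache and activations add several more GB on top.

    def weights_gb(params_b: float, bits_per_weight: float) -> float:
        """Approximate weight footprint in GB: params (billions) * bits / 8."""
        return params_b * bits_per_weight / 8

    RTX_5080_VRAM_GB = 16

    models = {
        "Mistral Small 3.1 24B Q8": weights_gb(24, 8),        # ~24 GB
        "Qwen 3 32B Q6": weights_gb(32, 6),                   # ~24 GB
        "DeepSeek R1 Distill Qwen 32B Q6": weights_gb(32, 6), # ~24 GB
    }

    for name, gb in models.items():
        print(f"{name}: ~{gb:.0f} GB of weights alone "
              f"(fits in {RTX_5080_VRAM_GB} GB: {gb <= RTX_5080_VRAM_GB})")
    ```

    Every one of them blows past 16 GB on weights alone, before the KV cache even enters the picture.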

    They don’t mention the price in their blog post, but I checked the custom build configuration page of their ProMagix HD150 workstation, and the R9700 adds $1500 to the build cost, whereas the 5080 adds $1710. So I suppose there’s an argument to be made about comparing the two, considering how close in price they are, but… the models chosen reek of dishonesty.

    Oh, and as an aside, that’s not the only thing the post reeks of. It also reeks of LLM-isms, like this passage right beneath the benchmarks table: “The takeaway? For professionals running large prompts or full-sized models locally, the Radeon™ AI PRO R9700 isn’t just competitive—it’s transformative,” you know, the classic “It isn’t just X, it’s Y!” pattern. But maybe I’m just being overly critical in this era of AI slop. idk lol.

  • In the world of cryptocurrency, BTCPay Server stands tall, serving as a vital project that embodies the essence of Bitcoin’s decentralization and freedom. Its creation stems from a community-driven initiative seeking to enhance financial autonomy by providing a free and open-source Bitcoin payment processor.

    The story of BTCPay Server is one of resilience and inspiration. It all began with Nicolas Dorier, a key figure in Bitcoin development, who set out to solve a pressing problem. The need for a decentralized payment system became evident when traditional payment processors posed limitations on cryptocurrency transactions. With a determination to create a fairer financial ecosystem, Dorier embarked on a journey to develop BTCPay Server as a response.

    BTCPay Server is more than just a payment processor; it’s a movement. It allows businesses to accept transactions directly without involving intermediaries, thus reducing fees and enhancing privacy. This is a game-changer for many small businesses and entrepreneurs who can now manage their finances independently, without the constraints of centralized entities.

    The beauty of BTCPay Server lies in its open-source nature. Being open-source means anyone can contribute to its development, ensuring that it remains secure, up-to-date, and democratic in nature. The vibrant community surrounding BTCPay Server is testament to its transparency and the shared vision of financial decentralization.

    Moreover, BTCPay Server empowers users by offering a wide array of features. Its integration capabilities are vast, supporting not just Bitcoin but various other cryptocurrencies. It can be easily integrated into existing platforms, making it accessible for businesses of all sizes. User-friendly documentation and supportive forums make it possible for even the non-technically inclined to set up and run the server with ease.

    The importance of such projects cannot be overstated. In a world where data privacy and financial freedom are becoming more critical, BTCPay Server serves as a beacon of hope and empowerment. It illustrates the potential of Bitcoin’s underlying philosophy: that financial self-sovereignty and transparency are achievable.

    Whether you are an entrepreneur, a small business owner, or simply a cryptocurrency enthusiast, BTCPay Server offers a powerful toolset for embracing the future of finance. By embracing BTCPay Server, you aren’t just adopting a payment system; you are joining a community and furthering a cause that believes in the potential of decentralized financial solutions for everyone.

    In closing, the rise of BTCPay Server is a reminder of what collective passion and ideas can achieve. It is proof that with the right tools and a dedicated community, the vision of decentralized finance can remain vibrant and accessible. So, as the world continues to explore the landscape of cryptocurrencies, BTCPay Server will undoubtedly remain a cornerstone of this transformative era.

  • Untitled Post

    Creating an environment that values learning and growth is beneficial. Encouraging continual education through workshops, online courses, and mentorship opportunities can help cultivate a team that’s enthusiastic and committed. Engage in regular feedback sessions with new graduates, focusing on strengths and areas for growth. This not only helps them develop professionally but also strengthens your relationship as a manager, reinforcing their role as a valued team member.

    By implementing these strategies, you not only enhance your team’s function but also promote a positive, productive work environment. Remember, the goal is to guide new graduates, unleashing their potential while ensuring your team remains cohesive and effective. Whether it’s through structured guidance or fostering a culture of openness, managing new graduates can lead to rewarding team dynamics and success.

  • From manipulating markets in India to unleashing SBF on the world (he obviously learned something from them), why is Jane Street not looked at as a bottom-rung hack shop? When I see them do interviews, they act very high and mighty, when by all accounts they just nickel-and-dime people on a large scale, and do so in illegal ways.

  • I’ve been thinking about how the web used to be this chaotic but exciting place where you explored forums, blogs, weird little sites… and now, everything seems to funnel through a few controlled pipes: YouTube recommends, Reddit front page, TikTok FYP, Google top 3 links.

    It’s efficient, but it also feels like it’s training us to wait for content instead of seeking it. I miss getting *lost* online.

    Anyone else feel like the internet is starting to feel more like a curated feed than a rabbit hole? Is there any chance we course-correct from this?

  • # The Situation

    I’ve been wrestling with a messy freeform text dataset using BERTopic for the past few weeks, and I’m to the point of crowdsourcing solutions.

    The core issue is a pretty classic garbage-in, garbage-out situation: the input set consists of only 12.5k records of loosely structured, freeform comments, usually from internal company agents or reviewers. Around 40% of the records include copy/pasted questionnaires, which vary by department and are inconsistently pasted into the text field by the agent. The questionnaires are prevalent enough, however, to strongly dominate the embedding space due to repeated word structures and identical phrasing.

    This leads to severe collinearity, reinforcing patterns that aren’t semantically meaningful. BERTopic naturally treats these recurring forms as important features, which muddies topic resolution.
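    One cheap way to see (and later strip) this dominance is to flag lines that recur across a large fraction of records — in a dataset like this, those are almost certainly the pasted questionnaire prompts rather than genuine free text. A minimal stdlib sketch; the 20-character and 20% thresholds are arbitrary starting points, not tuned values:

    ```python
    from collections import Counter

    def boilerplate_lines(records, min_frac=0.2, min_len=20):
        """Flag lines that recur across many records (likely pasted
        questionnaire boilerplate). Thresholds are arbitrary assumptions."""
        counts = Counter()
        for rec in records:
            # count each distinct line at most once per record
            for line in {l.strip() for l in rec.splitlines()
                         if len(l.strip()) >= min_len}:
                counts[line] += 1
        threshold = min_frac * len(records)
        return {line for line, c in counts.items() if c >= threshold}

    def strip_boilerplate(record, bp):
        """Drop flagged boilerplate lines, keeping the free-form remainder."""
        return "\n".join(l for l in record.splitlines() if l.strip() not in bp)
    ```

    Running the survivors through the embedding model instead of the raw records gives the repeated phrasing far less weight.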

    ## Issues & Desired Outcomes

    ### Symptoms

    * Extremely mixed topic signals.
    * Number of topics per run ranges wildly (anywhere from 2 to 115).
    * Approx. 50–60% of records are consistently flagged as outliers.

    Topic signal coherence is issue #1; I feel like I’ll be able to explain the outliers if I can just get clearer, more consistent signals.

    There is categorical data available, but it is inconsistently correct. The only way I can think to include this information during topic analysis is through concatenation, which just introduces its own set of problems (ironically related to what I’m trying to fix). The result is that emergent topics are subdued and noise gets added due to the inconsistency of correct entries.

    ### Things I’ve Tried

    * Stopword tuning: Both manual and through vectorizer\_model. Minor improvements.
    * “Breadcrumbing” cleanup: Identified boilerplate/questionnaire language by comparing nonsensical topic keywords to source records, then removed entire boilerplate statements (statements only; no single words removed).
    * N-gram adjustment via CountVectorizer: No significant difference.
    * Text normalization: Lowercasing and converting to simple ASCII to clean up formatting inconsistencies. Helped enforce stopwords and improved model performance in conjunction with breadcrumbing.
    * Outlier reduction via BERTopic’s built-in method.
    * Multiple embedding models: “all-mpnet-base-v2”, “all-MiniLM-L6-v2”, and some custom GPT embeddings.
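    The normalization step above can be sketched in a few lines. This assumes plain lowercasing plus an NFKD fold to ASCII — one common recipe, not necessarily the exact transform used here:

    ```python
    import unicodedata

    def normalize(text: str) -> str:
        """Lowercase and fold to plain ASCII so smart quotes, accents,
        and stray Unicode don't slip past stopword matching."""
        text = unicodedata.normalize("NFKD", text.lower())
        return text.encode("ascii", "ignore").decode("ascii")
    ```

    NFKD splits accented characters into base letter + combining mark, so the ASCII encode keeps the letter and drops only the mark, rather than discarding the whole character.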

    ### HDBSCAN Tuning

    I attempted tuning HDBSCAN through two primary means.

    1. Manual tuning via Topic Tuner – Tried a range of min\_cluster\_size and min\_samples combinations, using sparse, dense, and random search patterns. No stable or interpretable pattern emerged; results were all over the place.
    2. Brute-force Monte Carlo – Ran simulations across a broad grid of HDBSCAN parameters and measured the number of topics and outlier counts. Confirmed that the distribution of topic outputs is highly multimodal. I was able to garner some expectation of topic and outlier counts from this method, which at least told me what to expect on any given run.
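    A minimal harness for that kind of random sweep might look like the following. Here `fit_fn` is a hypothetical stand-in for a function that fits BERTopic/HDBSCAN with the given parameters and returns `(n_topics, n_outliers)`; the parameter ranges are placeholder assumptions:

    ```python
    import random

    def random_sweep(fit_fn, n_trials=50, seed=0,
                     cluster_sizes=range(5, 200, 5), max_min_samples=49):
        """Random search over HDBSCAN-style parameters. `fit_fn` is a
        stand-in that fits the model and returns (n_topics, n_outliers)."""
        rng = random.Random(seed)  # fixed seed => reproducible sweep
        results = []
        for _ in range(n_trials):
            mcs = rng.choice(list(cluster_sizes))
            # keep min_samples <= min_cluster_size, a common constraint
            ms = rng.randint(1, min(mcs, max_min_samples))
            n_topics, n_outliers = fit_fn(min_cluster_size=mcs, min_samples=ms)
            results.append({"min_cluster_size": mcs, "min_samples": ms,
                            "n_topics": n_topics, "n_outliers": n_outliers})
        return results
    ```

    Plotting `n_topics` from the results as a histogram is a quick way to see the multimodality described above.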

    ### A Few Other Failures

    * Attempted to stratify the data by department and model the subsets, letting BERTopic omit the problem words based on their prevalence – the resultant sets were too small to model on.
    * Attempted to segment the data via department and scrub out the messy freeform text, with the intent of re-combining and then modeling – this was unsuccessful as well.

    ## Next Steps?

    At this point, I’m leaning toward preprocessing the entire dataset through an LLM before modeling, to summarize or at least normalize the input records and reduce variance. But I’m curious:

    Is there anything else I could try before handing the problem off to an LLM?

    EDIT – A SOLUTION:

    We eventually got approval to move forward with an LLM pre-processing step, which worked very well. We used 4o-mini and instructed the model to gather only the facts and intent of each record. My colleague suggested adding the instruction (paraphrasing) “If any question/answer pairs exist, include information from the answers to support your response,” which worked exceptionally well.
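    As a rough sketch of that pre-processing step, the chat messages for a call to a model like 4o-mini might be built as below. The wording is my paraphrase of the approach described above, not the team’s actual prompt:

    ```python
    def build_summary_messages(record: str) -> list:
        """Build chat messages for the pre-processing call.
        The system prompt wording here is a paraphrase/assumption."""
        system = (
            "Summarize this record, keeping only the facts and the intent. "
            "If any question/answer pairs exist, include information from "
            "the answers to support your response."
        )
        return [{"role": "system", "content": system},
                {"role": "user", "content": record}]
    ```

    The same structure works for the evaluation pass, swapping in a system prompt that asks the model to flag egregious factual errors against the source record.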

    We wrote an evaluation prompt to help assess if any egregious factual errors existed across a random sample of 1k records – none were indicated. We then went through these by hand to verify, and none were found.

    Of note: I believe this may be a strong case for using 4o-mini. We sampled the results in 4o with the same prompt and saw very little difference; given the nature of the prompt, I think this is expected. Speed and cost were much better with 4o-mini – an added bonus. We saw far more variation between 4o and 4o-mini in the evaluation prompt: 4o was more succinct and able to reason “no significant problems” more easily. This was helpful in the final evaluation, but for the full pipeline, 4o-mini is a great fit for this use case.