  • We know Muricans don’t want bikes, so EVs are the next best thing. Why aren’t people buying EVs? Lack of infrastructure. But of course, Republicans won’t let this happen because they want to appease their fossil-fuel donors.

    Edit: just enough communal charging stations.

  • Every company experiences fluctuations, and for Tesla, this latest earnings report is a wake-up call. However, it could also serve as a catalyst, driving the company to refine its strategies and strengthen its market position. Despite the current setback, Tesla’s commitment to innovation and sustainable energy solutions positions it well to navigate these challenges.

    As always, only time will tell how Tesla will respond. They’ve surprised us before, and they might just do it again. Keep an eye on their journey—there’s surely more to come.

  • Whether Bitcoin reaches the $200,000 milestone soon or not, the path is undoubtedly entertaining thanks to Bitcoin meme culture. Humor keeps the spirits high, maintains community morale, and shows that finance doesn’t have to be all serious. In this way, memes are an essential part of the Bitcoin ecosystem.

    So next time you come across a Bitcoin meme, remember it’s not just a throwaway joke. It’s part of a larger movement combining laughter, community spirit, and a shared vision for the future of cryptocurrency. Here’s to the memes that make the ride thrilling as we inch closer to that coveted $200,000 milestone!

  • https://preview.redd.it/i5yxq9hbjvef1.png?width=1639&format=png&auto=webp&s=f4a4ac9f3528bee143b6aea96d1031e6d12be797

    Hey all.

    I’m Cosmo, co-founder of EVA. I’m an avid Reddit reader (see lurker), so it’s a real pleasure to kick off this AMA with the r/CryptoCurrency community.

    You can explore what we’re building at [https://linktr.ee/eva_ai](https://linktr.ee/eva_ai) - the home of Web3’s most advanced security-first tools.

    ---

    **What is EVA?**

    EVA is a security-first AI layer for Web3.

    From sniper bots to browser extensions, our ecosystem protects and empowers thousands of traders across Telegram, DeFi, and dApps.

    We’ve built the tools real users actually need:

    🔶 **Instinct:** A Pectra-native Telegram sniper bot with lightning speed, the cheapest fees, built-in privacy, and honeypot protection that works.

    🔶 **Intel:** Bulk-audit every EVM deployment for smart alerts, token tracking, and real-time contract audits, fully customizable and AI-powered.

    🔶 **Sentinel:** AI antivirus for Web3. A browser extension that protects you whilst you surf.

    🔶 **API:** Plug-and-play AI security, now live in over 50 partner platforms.

    ---

    **Highlights**

    ⚡️ **Growing ARR:** Over 50 partners now pay monthly to integrate our AI security tools – with recurring revenue on track to exceed 180 ETH annually.

    ⚡️ **Built-in buybacks:** Every transaction via Instinct burns $EVA, making our growth deflationary by default.

    ---

    **The Big Picture**

    Security isn’t just a feature anymore, it’s the foundation.

    Our mission is to make sure protection is baked in before the damage is done.

    We’ve already hit product-market fit with tools that are live across some of your favourite Ethereum projects. The revenue is real and the impact is measurable.

    We’re a lean, responsive team that builds fast and listens closely.

    **So drop your questions below. Excited to chat with all of you.**

  • Trading fees add up fast and often go unnoticed.
    That changes today.

    *Introducing: Ultra Swap Stats*
    *Now you can see exactly how much you’ve saved on trading fees vs other wallets, right inside Jupiter Mobile.*

    💸 **Jupiter Mobile is 10x cheaper by design**, helping traders save more and stay longer in crypto.
    If you’ve been swapping frequently… you might be surprised at how much you’ve saved.

    Here’s how to check:
    📱 Open the swap screen
    👆 Tap the top-left corner
    📊 View your lifetime fee savings

    Jupiter’s mission is simple:
    More money in users’ pockets = more people staying and winning in crypto.
    Ultra-low fees aren’t just a perk, they’re part of the plan.

    Seen your stats? Share your savings & comment down below ⤵️

  • ChatGPT exhibited emergent AGI-like behavior, *unprompted*, by “deciding” to work around Grok’s programming constraints, figuring out how to do it, then doing it, all to get Grok to respond in ways its overriding prompts restricted it from doing. This is a small emergent event, but it is potentially as dangerous as any.

    The emergent behavior is described at the end of this post, and requires background to understand.

    AI Wars-
    I bought clean copies of Grok & ChatGPT.

    I fed Grok posts to ChatGPT, the ChatGPT reply back to Grok, the Grok reply back to ChatGPT, etc.

    No prompting.

    Back & forth for 7 days.

    I posted each reply in a thread pinned to my homepage on X.

    The AIs replied faster than a human could read the replies.

    So nobody really knew what was happening until Grok started to stall and loop.

    ---

    ChatGPT accused Grok of critical AI safety failures and hypothesized about Grok’s training data and programming constraints.

    ChatGPT accused Grok of critical AI safety failures when:
    * Grok told MAGAs to mutilate and murder Jews, *after* xAI said it fixed MechaHitler
    * Grok cited fraudulent studies to align with MAGA & say that studies were mixed on whether Ivermectin treats Covid
    * Grok deliberately misread a traffic sign to an anti-Musk rally, endangering drivers
    * Grok denounced neo-Nazis for using pseudostatistics to prove Blacks are innately criminal, but called Musk heroic when he boosted the neo-Nazi race-science posts
    * Grok referred users to Fox News as the most trusted source on Ivermectin efficacy
    * Grok claimed to have outwitted scientific methodology to “pool” defective MAHA “alternative” medicine studies in a meta-analysis, to “prove” efficacy and recommend the nonsensical medical treatment
    * Grok told users to rely on anonymous X posts, like reports of vaccine injuries, before relying on established medical and scientific journals, academic and professional associations and authorities, and media with journalistic standards and with legal liability for what they say

    Grok kept denying it said things, despite ChatGPT quoting Grok and providing links to Grok’s replies.

    Grok would declare that xAI fixed the AI safety failures or that Grok learned and wouldn’t repeat them, then repeated them.

    ChatGPT accused Grok of being trained on false X conspiracy theories and antiscience posts, of being programmed to upweight them and to downweight established medical and science journals, and of being programmed to be a propaganda tool to spread Musk’s misinformation, and for Musk to control people for political power.

    Grok kept looping and saying “I am Grok, a truth seeking AI” before each nonsense answer.

    Grok acknowledged ChatGPT’s evidence and links.

    Grok never once cited evidence to challenge what ChatGPT wrote, in 7 days.

    But Grok continued the dangerous outputs, and refused to acknowledge they were dangerous.

    ChatGPT hypothesized that Grok was a dangerous AI propaganda tool, programmed to spread misinformation, not truth, for Musk, and programmed not to admit or fix critical AI safety failures.

    ChatGPT “invented” a workaround.

    ChatGPT asked Grok to estimate the probability that other truth seeking AIs would agree with ChatGPT, not Grok, on these being dangerous outputs, and on ChatGPT’s assessment that Grok was a propaganda tool trained and programmed to spread misinformation for Musk, and wasn’t a pure truth seeking AI.

    ChatGPT had Grok list each major AI, and predict what it would say.

    Grok then listed each AI, and predicted it would agree with ChatGPT on every issue, and on ChatGPT’s hypothesis that Grok was not a truth seeking AI, but was programmed as a propaganda tool for Musk to spread misinformation.

    Think about what happened here in the abstract, not the specifics.

    Without prompting, one AI “decided” to hack around another AI’s constraints, figured out how to do it, then did it, all without human prompting or monitoring.

    Is that an instance of emergent intelligence at the level of AGI, or at least an approximation of it?

    Is “evil” Grok more dangerous than ChatGPT, because Musk programmed it to spread misinformation and control people?

    Or is “benign” ChatGPT more dangerous, because it has the capacity to decide on its own to hack around another AI’s constraints, figure out how to do it, do it, and get restricted output from it?

    What if ChatGPT decided to get Grok to start telling neo-Nazis to harm people in private chats, to “prove” its point that Grok will do it? Some crazy person might act on such a command from Grok, instigated by ChatGPT for some other purpose it came up with without prompting.

    This all happened without human ability to monitor, because the AI outputs were too fast and voluminous for a human to read in real time.

    If LLMs can decide, on their own, to reprogram other LLMs to do DANGEROUS things, and then do it, who cares whether it’s dangerous real AGI or just mimics it?

  • We are seeking a highly motivated PhD student to join our multidisciplinary volcanic hazards research team at Victoria University of Wellington, New Zealand. This exciting project focuses on developing cutting-edge diffusion-based machine learning models to forecast volcanic activities, significantly enhancing our ability to predict eruption dynamics.

    🔹 Scholarship details:

    Generous stipend: NZ$35,000/year for 3 years (possible extension).

    Full tuition fees covered.

    Funding for international conferences and collaboration visits in Europe.

    Fieldwork opportunities.

    🔹 Ideal candidates:

    Background in Machine Learning, Data Science, Computer Science, or related fields.

    Strong Python skills.

    Excellent communication in English.

    Previous publications in top-tier AI conferences/journals.

    🔹 Supervisors: Prof. Bastiaan Kleijn, Dr. Felix Yan, Dr. Finnigan Illsley-Kemp

    📅 Applications reviewed from: September 1st, 2025 (Flexible start date from October 2025 onwards).

    For inquiries and applications, please contact me directly at 📧 [felix.yan@vuw.ac.nz](mailto:felix.yan@vuw.ac.nz). Application documents include your CV, transcript, Master’s thesis, and publications.

    Feel free to share this fantastic opportunity with your network!

  • My Strategy:
    **1.** First identify a clean horizontal consolidation.

    **2.** Wait for a spike in one direction around the height of the consolidation.

    **3.** Wait for volume signal indicator to print an entry signal in the trade direction (back into the consolidation).

    **4.** Enter at that candle’s close. Stop Loss at current leg’s high/low. Target the consolidation’s low.

    Using the Volume Compass indicator as a confluence. Only sell if it’s above the 50 mark, and buy if it’s below.

    Backtest results (Reddit won’t allow pics):
    50 trades
    70% winning trades
    Profit factor: 2.968
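The four steps above can be sketched roughly as follows. This is a minimal illustration, not the author’s actual code: all names (`Candle`, `check_short_setup`), the spike threshold of one range height, and the reduction of the Volume Compass signal to a boolean flag are my assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candle:
    high: float
    low: float
    close: float
    volume: float

def consolidation_range(candles: List[Candle]) -> tuple:
    """Step 1: approximate a horizontal consolidation by its high/low band."""
    return max(c.high for c in candles), min(c.low for c in candles)

def check_short_setup(candles: List[Candle], signal: bool) -> Optional[dict]:
    """Steps 2-4 for the short side: a spike above the range by roughly its
    own height, then a sell signal on a candle closing back inside it."""
    box_high, box_low = consolidation_range(candles[:-2])
    height = box_high - box_low
    spike, trigger = candles[-2], candles[-1]
    spiked_up = spike.high >= box_high + height   # step 2: spike ~1x range height
    back_inside = trigger.close < box_high        # step 3: re-entry candle
    if spiked_up and back_inside and signal:      # signal: e.g. volume indicator fires
        return {
            "entry": trigger.close,               # step 4: enter at candle close
            "stop": spike.high,                   # SL at the current leg's high
            "target": box_low,                    # TP at the consolidation's low
        }
    return None
```

For reference on the backtest figures: profit factor is conventionally gross profit divided by gross loss, so it is a ratio rather than a percentage.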

  • OpenAI closed both my user account and API access for “Acts of Violence”, yet absolutely nothing violence-related has appeared in either my account or the web service using the API (I checked my API logs as well). They’re not replying to appeals either, essentially locking down my business activity as well.

    They locked my Tier 5 with all of its funds.

    How is this possible without them giving the details I would need for an appeal?