
Is Inclusive Governance through Generative AI Empowering the Public Sector to Serve a Broader Audience?


As the public sector evolves in tandem with technological advancements, its fundamental objective remains constant: ensuring equitable access to public services for all citizens, regardless of socioeconomic status, physical abilities, or geographical location, so that every individual can enjoy the same opportunities and benefits. The pursuit of inclusive governance has driven the sector to consistently leverage cutting-edge technologies, fostering increased citizen participation, optimizing processes, and informing data-driven decisions. In the 1990s, the emergence of the internet propelled the public sector into the digital age, enabling government agencies to make their services accessible online and fostering citizen engagement through interactive websites. Today, generative AI occupies a similarly groundbreaking position, revolutionizing how citizens interact with public services by delivering tailored experiences, enhancing accessibility, and simplifying workflows. As governments increasingly recognize the potential of generative AI, they are investing heavily in the technology, with substantial productivity gains expected by 2033, according to a report by BCG. Generative AI is reshaping the trajectory of public service delivery, propelling innovative solutions that advance the ambitious goals of inclusive governance.

Enhancing Accessibility

Generative AI is breaking down barriers, rendering public services more inclusive and accessible to historically marginalized and underserved communities. Traditional public services often struggle to serve these communities due to a lack of tailored support, linguistic barriers, and the difficulties faced by individuals with disabilities. Generative AI helps address these challenges in several ways.

Artificial intelligence-driven tools such as chatbots and digital assistants have simplified the process of navigating complex bureaucratic procedures by providing personalized support. The town of Heidelberg, Germany, for instance, has introduced a chatbot named Lumi, enabling residents and visitors alike to handle concerns such as address changes and requests for information about waste collection. Lumi draws on publicly available municipal knowledge bases and continually refines its understanding over time, driven primarily by user interactions.
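The retrieval step behind a civic chatbot like this can be sketched in a few lines. The snippet below is a deliberately minimal illustration of matching a resident's question against a knowledge base by word overlap; the entries and scoring are hypothetical and not Heidelberg's actual system, which uses far richer language models.

```python
# Toy retrieval step for a civic chatbot: match a resident's question
# against a small knowledge base by word overlap. The entries and the
# scoring rule are illustrative only.

KNOWLEDGE_BASE = {
    "address change": "Report an address change at the citizens' office within two weeks of moving.",
    "waste collection": "Residual waste is collected weekly; check the online calendar for your street.",
    "parking permit": "Residents can apply for a parking permit online or at the district office.",
}

def answer(question: str) -> str:
    """Return the entry whose topic shares the most words with the question."""
    q_words = set(question.lower().split())
    best_topic = max(KNOWLEDGE_BASE, key=lambda t: len(q_words & set(t.split())))
    if not q_words & set(best_topic.split()):
        # No overlap at all: fall back to a human contact point.
        return "Sorry, I couldn't find anything on that. Please contact the citizens' office."
    return KNOWLEDGE_BASE[best_topic]

print(answer("When is waste collection on my street?"))
```

A production system would replace the overlap score with embedding-based retrieval and a generative model, but the shape of the pipeline, question in, best-matching knowledge entry out, is the same.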

Artificial intelligence-driven translation tools are also transforming the way people connect across linguistic boundaries, enabling non-native speakers to access vital information and services in their native tongue. In multicultural societies where linguistic diversity is the norm, such tools are essential. The cities of Stockton and Fairfield in California, for example, have introduced a communication platform that lets residents engage with local government in 71 languages via both mobile and online channels, powered by Amazon Translate. In India, a pioneering initiative leverages generative AI-powered chatbots on WhatsApp and Telegram to help rural residents access government services in their native languages.

Currently, these chatbots are capable of responding to both written and spoken language inputs, supporting 10 languages and integrating with 171 government agencies, thereby simplifying access to relevant information for citizens.

AI-powered technologies are also significantly enhancing accessibility for individuals with disabilities, who made up approximately 20% of the U.S. population in 2023. The employment rate for people with a disability rose from 21.3% in 2022 to 22.5% in 2023. The Department of Justice has recently issued guidelines requiring state and local governments to ensure their websites and mobile applications are accessible to all users. Organizations are adapting by developing AI-powered screen readers, such as those available for iOS and Android, which let visually impaired individuals explore government websites and access information more independently.

Enhancing Citizen Engagement

Furthermore, a crucial aspect of inclusive governance is ensuring effective resident engagement and fostering a culture of participatory decision-making. Government agencies oversee a wide range of functions, from public health programs to tourism development. Residents seeking information often pose a challenge for customer-facing teams, which must provide rapid access to relevant data, a task that can be both time-consuming and resource-intensive. Meeting residents' expectations requires efficient and engaging communication channels.

Generative AI-powered digital assistants help address these challenges by providing tailored responses to citizen inquiries. Emma, for instance, is a virtual assistant developed by U.S. Citizenship and Immigration Services (USCIS), a bureau of the U.S. Department of Homeland Security. Emma assists users with diverse services, including immigration, green cards, and passports, offering bilingual support in English and Spanish. The English version also supports voice interactions, guiding users intuitively through the website. Emma handles over one million interactions each month, clearly demonstrating the value of enhanced citizen engagement.

Australia's tax authority uses a virtual assistant, Alex, to streamline interactions with individuals and businesses, providing guidance on complex tax matters such as property, income, deductions, and return submissions. Alex steers users toward relevant content, expediting their journey and enhancing overall satisfaction.

Making Inclusive Decisions

A fundamental component of inclusive governance is the ability to make transparent and impartial decisions that prioritize the well-being of all community members, unaffected by factors such as socioeconomic status, ethnic background, or personal relationships. Generative AI empowers the public sector to make informed, data-driven decisions that prioritize inclusivity and fairness. One notable application gaining traction is generative AI-powered automated recruitment. This approach screens résumés against job listings in a way that reduces subjective human judgment. By anonymizing personal information and concentrating exclusively on relevant qualifications and competencies, generative AI supports a fair evaluation process where applicants are assessed solely on their merits.
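The anonymization step described above can be illustrated concretely. The sketch below is a simplified, hypothetical example of redacting direct identifiers from a résumé record before it reaches an evaluator; the field names and the email-masking rule are assumptions for illustration, not any agency's actual pipeline.

```python
import re

# Illustrative sketch of the anonymization step in bias-aware screening:
# direct identifiers are dropped and email-like strings in free text are
# masked, so scoring can focus on qualifications. Field names are hypothetical.

REDACT_FIELDS = {"name", "email", "age", "address", "photo_url"}

def anonymize(resume: dict) -> dict:
    """Drop identifier fields and mask email-like strings in the summary."""
    cleaned = {k: v for k, v in resume.items() if k not in REDACT_FIELDS}
    if "summary" in cleaned:
        cleaned["summary"] = re.sub(r"\S+@\S+", "[redacted]", cleaned["summary"])
    return cleaned

resume = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age": 42,
    "skills": ["python", "gis", "public policy"],
    "summary": "Analyst; reach me at jane@example.com.",
}
print(anonymize(resume))
```

A real system would go further, masking names, schools, and other proxy signals inside free text, but the principle is the same: the evaluator only ever sees qualification-relevant fields.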

New York City's adoption of generative AI in hiring has significantly expanded its candidate pool by streamlining evaluations and reducing personal bias. The city leverages AI-driven technology to evaluate candidates objectively on skills and qualifications, promoting a fairer and more inclusive recruitment process.

Creating Inclusive Policies

Generative AI is also reshaping policy development, enabling a more comprehensive and inclusive approach through rigorous data analysis. By examining large datasets, AI helps uncover the nuanced needs and preferences of diverse community segments, ensuring that policies are informed by the genuine interests of all residents and yield more equitable outcomes.

By integrating AI-driven analytics, cities can better understand neighborhood needs and streamline resource distribution. Leveraging insights from housing, transportation, and public health data, a city can make informed decisions that address the pressing needs of its diverse and often underserved populations.

Similarly, the UK's National Health Service (NHS) uses generative AI to predict patient demand for healthcare services. Analyzing patient data and trends enables the NHS to distribute resources more efficiently, ensuring that vulnerable groups receive the timely treatment they require.

Ensuring Transparent and Responsible Deployment of AI in Government

While generative AI has vast potential to reshape the public sector, it must be deployed responsibly to ensure its benefits are distributed fairly and equitably among all citizens. Public sector entities are developing policies to mitigate the associated risks, including ones that govern high-risk AI functions and others focused on ensuring transparency and equity in AI systems. Essential aspects these policies must address include:

  • Generative AI systems should operate transparently. This means providing clear accounts of their decision-making mechanisms and ensuring their inner workings are intelligible to people without technical expertise. Transparency fosters trust by enabling residents to understand decisions and the rationales behind them.
  • Generative AI systems may exacerbate existing biases embedded in the data they are trained on. To address this risk, AI designs must be thoroughly scrutinized, with known biases identified and corrected promptly.
  • Securely handling and managing sensitive information is a crucial challenge when deploying generative AI in the public sector. Public sector data is often extremely sensitive and requires constant protection. Harnessing generative AI effectively means upholding rigorous safeguards for sensitive information and security.
  • Accountability is crucial to harnessing generative AI for inclusive governance. Independent, impartial bodies or committees should be established to monitor the deployment and outcomes of AI technologies. This also involves open forums where citizens can contribute ideas, ensuring community feedback leads to tangible improvements and updates.

The Bottom Line

Generative AI is revolutionizing the public sector by significantly improving accessibility, boosting citizen participation, and cultivating more inclusive decision-making. Its capacity to provide personalized assistance, transcend language barriers, and support individuals with disabilities makes public services more inclusive and equitable. As public sector organizations increasingly integrate generative AI, they must contend with the complexities of transparency, equity, and data security. Responsible implementation, driven by robust policy frameworks and ethical guidelines, is crucial to ensuring that generative AI truly fosters inclusive governance, increasing service accessibility and equity for all residents.

Now that a Brazilian court has banned Elon Musk's X, here's what might happen next


The ban came into effect over the weekend, the culmination of a longstanding feud between Elon Musk, owner of X, and Brazilian Supreme Court Justice Alexandre de Moraes, who had previously ordered the social media platform to block certain far-right accounts.

The ban has outraged Musk, who has called de Moraes a "bully" and accused him of being prepared to financially ruin anyone who defies him.

Brazilian authorities are taking a firm stance against technology companies that disregard the country's laws and regulations. Will other countries follow suit?

Why did Brazil ban X?

The Brazilian government’s decision to outlaw X wasn’t taken lightly.

Between 2020 and 2023, Brazil's Supreme Court opened three landmark investigations involving social media platforms.

One examined the spread of false information. Another looked at organized groups that deliberately shape online discourse and participation (dubbed "digital militias"). A third investigated individuals and groups involved in the attack on Brazil's Congress in January 2023, which followed former President Jair Bolsonaro's defeat in the 2022 general election.

In April this year, Brazil's Supreme Court ordered X to suspend a significant number of accounts spreading misinformation and disinformation about Bolsonaro's loss in the 2022 presidential election.

This was not the first time X had received an order of a similar nature.

In January 2023, following the attack on Congress, Brazil's Supreme Court had similarly compelled X and other social media platforms to block certain accounts. Despite initial reservations, Musk's platform ultimately complied with that order.

This time, however, Musk refused, and X went on to ignore the court's demands by withdrawing its authorized representative in Brazil. That was significant because Brazilian law requires foreign companies to maintain a locally authorized representative.

De Moraes then instructed Musk to nominate a new representative. When the tech billionaire failed to do so by the deadline, X was banned.

Meanwhile, de Moraes has also frozen the Brazilian finances of Starlink, Musk's satellite internet service.

The ban on X will continue until Elon Musk fully complies with all relevant court orders, including nominating an authorized representative in Brazil.

As Brazil navigates the fallout, several key developments are likely to unfold.

Prior to the ban, there had been virtually no penalty for accessing the platform.

From today, attempting to access the platform using a VPN can result in daily fines of up to AU$13,000.

Since the ban, numerous former X users have moved to other social media platforms. Bluesky, for instance, has reported "all-time highs" in activity.

The ban is a pivotal step in Brazil's ongoing struggle against social media platforms' unchecked influence on Brazilian society. De Moraes has been at the forefront of this fight. In a recent interview, he said:

In Brazil, citizens understand that the right to free expression does not equate to the license for verbal assault. While freedom of speech is a fundamental right, it does not extend to the perpetuation of hate speech, racism, misogyny, or homophobia.

Far-right factions and Bolsonaro supporters, however, remain at odds with the Supreme Court and have been outspoken in their criticism of the ban. The ban may exacerbate existing social divisions, potentially sparking controversy and unrest.

In accordance with Brazilian law, a panel of Supreme Court justices is currently reviewing the ban. They could maintain it while reversing the financial penalties imposed on individuals who access X within the country. It also remains possible that the judges will ultimately reverse the ban itself.

Will other countries follow suit?

Some have speculated that other countries, including the United States, might follow Brazil's lead and suspend access to X.

There is no firm evidence to support this, and the Brazilian ban has no jurisdiction beyond the country's borders.

Nevertheless, there is a growing perception globally that massive social media companies can be constrained and are not exempt from national laws.

Just last week, the founder of Telegram was accused of enabling criminal activity on his company's messaging platform.

Countries interested in strengthening regulation of social media platforms will closely monitor the progression of these cases.

Why settle for mediocre password management when you can have the best? For a limited time, our premium pick is available at an unbeatable Labor Day price.


To celebrate Labor Day, NordPass is offering a special promotion, making this an ideal moment to enhance your online security.

The deal brings the total to $37.53 for the first two-year period, a significant decrease from the standard price of $80.73. Plus, NordPass offers a 30-day money-back guarantee, so you can try the service risk-free.

Features of NordPass Premium

As a Premium user with NordPass, you gain access to an array of innovative features crafted to streamline and safeguard your online presence.

  • NordPass automates storing and retrieving login credentials, filling them in automatically to save you time and eliminate frustrating login hassles.
  • Store a broad range of sensitive information, including passwords, passkeys, and credit card details, with robust security measures. This enables seamless access to your data across platforms.
  • Whether you're switching between your phone and laptop, NordPass keeps your login session in sync, making the transition effortless.
  • The password health service detects weaknesses in your existing passwords, warning you about vulnerabilities and prompting you to create stronger, unique alternatives.
  • NordPass monitors the web for data breaches and promptly notifies you if your personal information has been compromised. This proactive approach lets you act immediately to secure your accounts.
  • You can attach files to your stored items, adding an extra layer of organization and security for sensitive documents.
  • Protect your real email address by creating custom email aliases. This helps keep personal information confidential during online sign-ups, preventing unwanted spam and data exposure.

Why Choose NordPass?

In today's digital landscape, managing multiple passwords has become an overwhelming task. NordPass streamlines the process, allowing you to focus on more important matters while keeping your data safely secured. The service employs end-to-end encryption, ensuring that your data remains confidential throughout. Developed by the team behind NordVPN, it has earned the trust of hundreds of thousands of users globally.

NordPass is more than just a top-notch password manager; it's a comprehensive solution for anyone serious about online security. With digital security threats at an all-time high, now is an ideal moment to invest in your protection. Its combination of premium features, an affordable price, and a risk-free guarantee makes NordPass a shrewd choice for individuals and families seeking robust online protection.

Gizmodo may earn a commission on purchases made through links on its site.

SparkLabs has closed a new $50 million venture capital fund, this time focused on AI startups.


SparkLabs, a venture capital firm known for backing AI startups such as Vectara, Allganize, Kneron, Anthropic, xAI, Glade (YC S23), and Lucidya AI, is poised to further amplify its investment in the space. The firm announced Tuesday the closure of a new $50 million fund, the AIM AI Fund, designed to support AI startups emerging from its AIM-X accelerator in Saudi Arabia as well as other AI startups globally.

SparkLabs' new fund underscores the sustained fascination with artificial intelligence over the past few years and the industry's ongoing quest to unlock its potential. The proliferation of generative AI has drawn an influx of startups and investors seeking the next OpenAI, or an acquisition to enhance their own AI capabilities.

The expansion also reflects how AI opportunities continue to spread beyond Silicon Valley. As part of Saudi Arabia's AI mission, SparkLabs has launched AIM-X, an AI-focused startup accelerator in the kingdom aimed at strengthening AI expertise over the next five years.

AI startup ecosystems have grown significantly worldwide in recent years. As of March 2024, more than 210 AI unicorns, each valued at over $1 billion, have emerged. The United States nonetheless holds the top spot for launching the largest number of AI startups between 2013 and 2022, with 4,633 ventures.

According to SparkLabs’ co-founder and CEO Bernard Moon, approximately 35% of their new funding will support accelerator participants, while the remaining 65% will be directed towards making Series A and B investments outside Saudi Arabia.

According to Moon, around 10% to 20% of the accelerator's investments will go to Saudi Arabian or Middle East and North Africa (MENA) startups. The rest will likely be top-tier AI startups regardless of location, though he anticipates most will come from the United States.

Typical investments in accelerator participants will range from $200,000 up to $500,000, depending on circumstances, Moon told TechCrunch. Series A and Series B investments are likely to range from $1 million to $5 million, he said, and are expected to account for 50% to 70% of the fund.

While SparkLabs did not disclose its limited partners, Moon noted that its LPs include a sovereign fund of funds.

The inaugural batch is expected to debut at the Global AI Summit in Riyadh on Tuesday, September 10. Moon told TechCrunch that SparkLabs has already backed 14 startups via its inaugural AI-focused fund:

  • A cutting-edge Hong Kong-based AI-powered video analytics platform specializing in enhancing office security and streamlining administrative processes.
  • A company founded in New York that has developed artificial intelligence solutions for analyzing physical movements without relying on sensors.
  • A Pakistani startup that has developed an AI-driven climate software platform that measures, assesses, and reports emissions, facilitating the purchase and trading of certified carbon credits.
  • A leading Indian video platform leverages artificial intelligence to bridge the gap between manufacturers and customers through seamless connections on websites and mobile applications via innovative features such as videos, user-generated content, reviews, video advertisements, and live shopping experiences.
  • A cutting-edge AI-powered content creation platform in Italy revolutionizes the way businesses craft engaging stories.
  • A pioneering Singaporean AI-powered electric vehicle (EV) fleet management solution.
  • : in Germany
  • A pioneering digital hub fostering interdisciplinary exploration and in-depth learning experiences in the heart of San Francisco.
  • : a startup that spun out from the London School of Hygiene & Tropical Medicine, using AI to create sensor-enabled products for the detection of pests and diseases
  • An innovative AI-powered talent acquisition solution based in San Francisco.
  • An innovative Arabic generative AI platform headquartered in Riyadh.
  • A Mumbai-based startup has developed an AI-driven solution to enhance beauty companies’ customer loyalty, interactions, and revenue growth through its innovative technology.
  • A pioneering AI-driven video analytics and advertising platform headquartered in San Francisco.
  • A German-based agritech startup pioneering innovative vertical farming solutions in Berlin.

SparkLabs manages more than 14 international funds, including two dedicated to Saudi Arabia, and has backed over 550 startups worldwide.

Google’s Gemini will see a significant boost in speed as the tech giant tackles latency.



C. Scott Brown / Android Authority

TL;DR

  • Google's Gemini 1.5 Flash has reportedly sped up by around 50% over the past few weeks.
  • Gemini's Google Tasks extension has been spotted on Pixel 8 devices, following its initial appearance on the Pixel 9.
  • The extension appears to be rolling out to a wider range of Android devices.

Earlier this year, Google unveiled the next stage in its generative AI system, Gemini 1.5 Flash, which promises not only to process complex queries more quickly but also to handle increasingly sophisticated requests. Roughly a month after its launch, Google began rolling the model out globally, free of charge. Since that debut, Google has been quietly refining the service, revealing a remarkable acceleration in performance.

According to Google, over the past few weeks the company has boosted the speed of Gemini 1.5 Flash responses, making them even faster than before. While the company doesn't go into detail about the specific changes behind this, it does highlight work aimed at minimizing latency. This kind of optimization may not be flashy, but lower latency makes conversations with Gemini feel noticeably more natural.

One key indicator of Gemini's momentum is its growing feature support. We have been anticipating enhancements that let Gemini integrate with a multitude of Google applications and services, including Google Tasks. When the Pixel 9 debuted with the Tasks extension available, speculation grew about a rollout to more devices. That rollout now appears to be in progress, as users on Reddit report the extension arriving on their Google Pixel 8 devices. While the scale and pace of the rollout remain unclear, it is clearly underway.

Once the extension arrives on your phone, you can use Gemini to manage your Google Tasks, with Gemini drawing on conversational context to add and update tasks seamlessly.

Have you had a chance to try Gemini's Tasks integration? Let us know.

Email our staff at .

How can I prove my humanity? And how do we change what powers the grid?



As artificial intelligence evolves to mimic human behavior with increasing accuracy, it is becoming harder to distinguish genuine human website users from sophisticated bots designed to simulate them.

These techniques can have significant consequences, such as spreading misinformation or facilitating fraud, and they also make online information far harder to trust.

Researchers have devised a potential solution: "personhood credentials," a verification concept that confirms someone is a real person without disclosing any identifying information.
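The core idea can be illustrated with a drastically simplified toy. In the sketch below, an issuer verifies a person once (out of band) and hands them a random token, publishing only the token's hash; a website can later check the hash without learning who the person is. This is an illustration of the concept only; real personhood-credential proposals rely on far stronger cryptography, such as zero-knowledge proofs, and this toy should not be used as-is.

```python
import hashlib
import secrets

# Toy personhood-credential flow. The issuer's public list contains only
# hashes of issued tokens, so verifiers learn nothing about identities.
issued_hashes = set()

def issue_credential() -> str:
    """Issuer side: run once per person after verifying them out of band."""
    token = secrets.token_hex(16)
    issued_hashes.add(hashlib.sha256(token.encode()).hexdigest())
    return token  # handed to the person; never stored alongside their identity

def verify(token: str) -> bool:
    """Website side: accept the token iff its hash is on the issuer's list."""
    return hashlib.sha256(token.encode()).hexdigest() in issued_hashes

cred = issue_credential()
print(verify(cred))      # True: a real credential verifies
print(verify("forged"))  # False: a made-up token does not
```

Even this toy captures the key property: the verifier can confirm "issued to a verified person" without ever seeing a name, email, or account.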

The electrical grid relies on a single gas to insulate a range of high-voltage equipment. The problem is that this gas is also an extremely potent greenhouse gas: a worst-case scenario for climate change.

Sulfur hexafluoride (SF6) is often overlooked, yet it accounts for approximately 1% of planetary heating, its impact dwarfed by carbon dioxide and methane, the more widely recognized and abundant contributors. Despite efforts to reduce them, emissions of this gas continue to rise annually.

Companies are now seeking alternatives to SF6-reliant equipment that maintain comparable performance.

Each year, MIT Technology Review highlights 35 Innovators Under 35: young innovators, scientists, and changemakers pioneering cutting-edge solutions to the globe's most pressing concerns in biotechnology, computer science, and climate research.

We're excited to announce that on Monday, September 9, we will unveil our 2024 Innovator of the Year on LinkedIn, a prestigious honor recognizing an individual who has made significant contributions to their field. Join us at 12:30 pm ET as we reveal their identity and explore their work ahead of the announcement. Sign up to stay in the loop and be among the first to know.

The online phenomenon operated numerous top-rated fan profiles.

As social media influencers' online presence grows, so does their vulnerability to law enforcement scrutiny, with many of their devoted followers unexpectedly finding themselves under police questioning.

Intel is poised to revamp its trajectory with a strategic plan to reignite growth.
The former industry leader intends to divest a significant portion of its non-essential assets in an effort to streamline operations and optimize resources.

Batteries for electric vehicles (EVs) have yet to gain widespread adoption, prompting manufacturers to explore alternative applications in the power grid instead.

Tinder, Hinge, Bumble, and Grindr are introducing AI-powered bots that suggest clever icebreaker messages to help users spark more engaging conversations. ( $)
To help countries circumvent the USD-centric global financial system. ( $)

On a distant planet, seeds, plants, animals, and microbial specimens could potentially thrive in a safer environment than they do on Earth. ( $)

US regulators have finally capped how much personal corporations can charge. ( $)

Social media fatigue has become a reality, with many users seeking respite from the noise, and platforms like Strava and Letterboxd capitalizing on this trend by offering unique experiences that transcend traditional social media norms. ( $)

From Stanley Kubrick’s 2001: A Space Odyssey to James Cameron’s Terminator and the Wachowskis’ The Matrix. ( $)

A Google employee advises colleagues to preserve chat histories while discussing sensitive topics, according to reports, with the US Federal Government claiming this shows employees were aware of the need to avoid creating a legal paper trail.

Abandoning fossil fuels and embracing lower-carbon technologies are our most promising options for mitigating the rapidly escalating threat of climate change. Access to rare earth elements, crucial to many cutting-edge technologies, will largely determine whether nations achieve their greenhouse gas reduction goals.

As concerns mount globally, several countries, including the United States, are increasingly anxious about the long-term security of supply for these critical components. Consequently, researchers and industry are working to expand access and foster long-term viability by investigating alternative or unconventional resources.

As the seasons transition from summer to autumn, it’s a great opportunity to update your wardrobe and accessories with new styles.
The intricate details of the human brain, revealed with breathtaking clarity through the lens of an MRI machine – I’m captivated by this stunning visual representation of the captured image.
+ My favourite Olympic sport? Did you know that snails power their tiny conveyances?

Ransomware group RansomHub has launched attacks on a staggering 210 victims across critical sectors.


Since emerging on the cybercrime landscape in February 2024, the RansomHub ransomware gang has carried out attacks against at least 210 victims, encrypting and exfiltrating sensitive data across the globe, with a significant proportion of incidents occurring in the United States, authorities stated.

Vulnerable sectors affected included water and wastewater management, information technology, government agencies and services, healthcare and public health, emergency response, food and agriculture, financial services, industrial processes, transportation, and critical communication infrastructure.

“Authorities have identified RansomHub, a ransomware-as-a-service variant formerly known as Cyclops and Knight, as a highly effective and lucrative model that has garnered the attention of prominent affiliates from notable variants like LockBit and ALPHV.”

The group has attracted high-profile affiliates from prominent operations such as LockBit and ALPHV (also known as BlackCat) in the wake of recent law enforcement disruptions of those groups.

ZeroFox's latest analysis revealed a stark escalation in RansomHub activity, which accounted for a growing share of all attacks tracked by the cybersecurity vendor: 2% in Q1 2024, 5.1% in Q2, and 14.2% so far in Q3.

According to the company, approximately 34% of RansomHub attacks have targeted firms in Europe, a proportion that is significantly higher than the 25% average seen across the broader threat landscape.

The group employs a double-extortion model, exfiltrating sensitive information and encrypting systems, then instructing victims to contact the operators via a unique Tor (.onion) URL. Firms that refuse to pay see their sensitive information publicly disclosed on the group's leak site for anywhere from three to ninety days.

Attackers leverage previously identified security weaknesses to gain initial access to compromised environments, exploiting vulnerabilities affecting a range of software products, including Apache ActiveMQ, Atlassian Confluence, Citrix ADC, F5 BIG-IP, and Fortinet’s FortiOS and FortiClientEMS platforms.

Affiliates subsequently conduct reconnaissance and network scanning using tools such as Angry IP Scanner and Nmap, alongside living-off-the-land techniques. RansomHub operators have also been observed disabling antivirus software to evade detection before deploying the ransomware.

After gaining an initial foothold, RansomHub operatives created user accounts for persistence, reactivated dormant accounts, and used Mimikatz on Windows systems to extract credentials [T1003] and escalate privileges to system-level access, the U.S. government advisory reads.

Affiliates then moved laterally through the network using methods such as Remote Desktop Protocol (RDP), PsExec, AnyDesk, ConnectWise, N-Able, Cobalt Strike, Metasploit, and other widely used command-and-control tools.

A notable tactic in RansomHub attacks is intermittent encryption, which accelerates the encryption process; data exfiltration is carried out via tools such as PuTTY, Amazon AWS S3 buckets, HTTP POST requests, WinSCP, Rclone, Cobalt Strike, and Metasploit, among others.

The Unit 42 researchers at Palo Alto Networks have shed light on the tactics of the ShinyHunters extortion group, which they track as Bling Libra, noting a marked shift from publicizing stolen information to extorting victims directly. The threat actor first emerged in 2020.

"The group obtains legitimate credentials, sourced from public repositories, to gain initial access to an organization's Amazon Web Services (AWS) environment," said security researchers Margaret Zimmermann and Chandni Vaya.

Although the permissions tied to the compromised credentials limited the impact of the breach, Bling Libra gained access to the organization's AWS environment and began reconnaissance. The group leveraged tools such as the Amazon Simple Storage Service (S3) Browser and WinSCP to gather intelligence on S3 bucket configurations and to access and delete sensitive data.

As ransomware attacks continue to evolve, they’ve transitioned beyond simple file encryption to employ sophisticated, multifaceted extortion tactics, including triple and quadruple schemes, according to SOCRadar’s findings.

As ransomware attacks escalate in sophistication, triple extortion goes beyond the established encryption and exfiltration tactics by adding a third layer of pressure on victims.

This may involve conducting a distributed denial-of-service (DDoS) attack against the victim's systems or directly threatening the victim's customers, suppliers, or other associates, aiming to inflict further operational and reputational harm through the extortion scheme.

Quadruple extortion takes a malicious step further by targeting not only the initial victim but also their business partners and associates, exploiting these connections to demand even more concessions or threaten to expose sensitive information unless the ransom is paid.

The lucrative landscape of ransomware-as-a-service (RaaS) models has precipitated a proliferation of novel ransomware strains, including Conti, Maze, LockerGoga, REvil, DarkSide, HelloKitty, and LockBit. It has also incentivized Iranian nation-state actors to collaborate with established groups in exchange for a share of the illicit proceeds.


Amazon EMR 7.1’s optimized runtime for Apache Spark and Iceberg enables Apache Spark 3.5.1 and Iceberg 1.5.2 workloads to execute up to 2.7 times faster.


We explore the performance benefits of the Amazon EMR runtime for Apache Spark and Apache Iceberg compared to running identical workloads with open-source Spark 3.5.1 on Iceberg tables. Iceberg is a widely used open-source table format for large-scale analytical datasets, offering high performance and efficiency in managing massive data. Our results show that Amazon EMR accelerates the TPC-DS 3 TB workload by a factor of 2.7, reducing total runtime from 1.55 hours to 0.56 hours. Cost-effectiveness also improves by a factor of 2.2, with total cost dropping from $16.09 to $7.23 using Amazon EC2 On-Demand r5d.4xlarge instances.

The runtime provides high performance while maintaining 100% API compatibility with open-source Apache Spark and the Iceberg table format. We previously outlined several optimizations resulting in a four-fold speedup and a 2.8-times improvement in price-performance compared to open-source Spark 3.5.1 on the 3 TB TPC-DS benchmark. However, many of those optimizations focused on DataSource V1, while Iceberg uses Spark's DataSource V2. We have therefore ported select existing optimizations from the EMR runtime for Spark to DataSource V2 and introduced improvements tailored specifically to Iceberg. The enhancements build on the Spark runtime's advancements in query planning, physical plan operators, and optimizations leveraging Amazon S3 and the Java runtime. We have also incorporated eight new incremental optimizations since the release of Amazon EMR 6.15 in 2023; these are now standard in Amazon EMR 7.1 and enabled by default. The improvements include the following:

  • Optimizing DataSource V2 in Spark:
    • Dynamic filtering on non-partitioned columns
    • Removing redundant broadcast hash joins
    • Partial hash aggregate pushdown
    • Bloom filter-based joins
  • Iceberg-specific enhancements:
    • Data prefetch
    • Support for file size-based estimations
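As a toy illustration of the idea behind bloom filter-based joins (a sketch of the concept, not the EMR implementation), a compact bloom filter built from the smaller side's join keys lets the larger side discard non-matching rows before the join runs:

```python
# Toy bloom filter join pruning. A bloom filter never misses a real key,
# but may let a few non-matching rows through (false positives), so the
# survivors still go through the actual join afterward.
import hashlib


class BloomFilter:
    def __init__(self, size_bits=1024, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = 0  # integer used as a bitmap

    def _positions(self, key):
        # Derive `hashes` bit positions from the key deterministically.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all((self.bits >> pos) & 1 for pos in self._positions(key))


small_side = [("k1", "a"), ("k2", "b")]          # build side of the join
large_side = [("k1", 1), ("k3", 2), ("k2", 3)]   # probe side of the join

bf = BloomFilter()
for key, _ in small_side:
    bf.add(key)

# Rows that fail the filter definitely have no match and are skipped early.
survivors = [row for row in large_side if bf.might_contain(row[0])]
```

In an engine, the filter is built during the small side's scan and pushed down into the large side's scan, cutting the rows that reach the join operator.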

All four platforms, , , , and , use the optimized runtimes. Consult their documentation for further details.

Benchmark results for Amazon EMR 7.1 versus open-source Spark 3.5.1 and Iceberg 1.5.2 demonstrate significant improvements in performance and cost.

We assessed the Spark engine's performance on the Iceberg table format using the TPC-DS dataset; our results differ from official TPC-DS benchmarks due to setup differences. The benchmarks compared the Amazon EMR runtime with Spark 3.5.0 and Iceberg 1.4.3-amzn-0 on an EMR 7.1 cluster against open-source Spark 3.5.1 and Iceberg 1.5.2 installed on EC2 instances for the open-source run.

Please visit our website for the setup instructions and technical details. To minimize the influence of external catalogs such as Hive, we used the Hadoop catalog for our Iceberg tables, which stores the catalog directly on the Amazon S3 file system. This setup is configured by setting the spark.sql.catalog..type property. The fact tables used the default partitioning by the date column, with partition counts ranging from approximately 200 to 2,100. No precomputed statistics were used for the data.
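For illustration, the same Hadoop-catalog setup can be expressed in PySpark. This is a minimal sketch: the catalog name `local` and the S3 path are placeholders, and it assumes the Iceberg Spark runtime jar is already on the classpath.

```python
# Sketch: configuring a Hadoop-backed Iceberg catalog stored on Amazon S3.
# <bucket> and <warehouse-path> are placeholders for your own values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    # The `.type` property mentioned above selects the Hadoop catalog.
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "s3://<bucket>/<warehouse-path>")
    .getOrCreate()
)
```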

We executed 104 SparkSQL queries in three consecutive rounds, tracking each query's average runtime across rounds for comparison. Amazon EMR 7.1 with Iceberg achieved an average total runtime of 0.56 hours across the three rounds, a 2.7-fold improvement over open-source Spark 3.5.1 and Iceberg 1.5.2. The following figure shows the total runtimes in seconds.

The following table summarizes the metrics.

Metric | Amazon EMR 7.1 | Open-source Spark 3.5.1 + Iceberg 1.5.2
Average runtime (seconds) | 2033.17 | 5575.19
Geometric mean over queries (seconds) | 10.13153 | 20.34651
Cost* | $7.23 | $16.09

*Detailed price estimates will be provided later in this article.*

This chart illustrates the per-query performance gains of Amazon EMR 7.1 over open-source Spark 3.5.1 and Iceberg 1.5.2. The speedup varies significantly across queries, ranging from 9.6 times for query 93 to 1.04 times for query 34, with Amazon EMR consistently outperforming open-source Spark on Iceberg tables.

This plot shows the 3 TB TPC-DS benchmark queries along the horizontal axis, sorted in descending order of their Amazon EMR improvement, with the vertical axis showing the speedup factor for each query.

Cost comparison

Our benchmark reports total runtime and geometric mean for assessing Spark and Iceberg performance in a realistic, large-scale data processing scenario. For additional perspective, we also examine cost. We derive cost estimates from a formula that accounts for EC2 On-Demand instance, Amazon Elastic Block Store (EBS), and Amazon EMR charges.

  • Amazon EC2 cost (including SSD cost) = number of instances x r5d.4xlarge hourly rate x job runtime in hours
    • The r5d.4xlarge hourly rate is $1.152.
  • Root Amazon EBS cost = EBS per-GB hourly rate x root EBS volume size x number of instances x job runtime in hours
  • Amazon EMR cost = number of instances x r5d.4xlarge Amazon EMR rate x job runtime in hours
    • The r5d.4xlarge Amazon EMR rate is $0.27 per hour.
  • Total cost = Amazon EC2 cost + root Amazon EBS cost + Amazon EMR cost
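The formula above can be sketched in a few lines of Python. The EC2 ($1.152/hour) and EMR ($0.27/hour) r5d.4xlarge rates come from this article; the EBS rate ($0.10 per GB-month over a 730-hour month) is an assumed gp2-style price used only for illustration.

```python
# Sketch of the benchmark's cost formula. Rates are per instance-hour.
EC2_RATE = 1.152        # USD, r5d.4xlarge On-Demand (from the article)
EMR_RATE = 0.27         # USD, r5d.4xlarge EMR uplift (from the article)
EBS_RATE = 0.10 / 730   # USD per GB-hour (assumed gp2-style pricing)


def total_cost(instances, runtime_hours, ebs_gb, emr=True):
    """Total cost = EC2 + root EBS + (optional) EMR charges."""
    ec2 = instances * EC2_RATE * runtime_hours
    ebs = instances * ebs_gb * EBS_RATE * runtime_hours
    emr_fee = instances * EMR_RATE * runtime_hours if emr else 0.0
    return round(ec2 + ebs + emr_fee, 2)


print(total_cost(9, 0.564, 20, emr=True))    # Amazon EMR 7.1 run -> 7.23
print(total_cost(9, 1.548, 20, emr=False))   # open-source run    -> 16.09
```

Plugging in the measured runtimes reproduces the $7.23 and $16.09 figures in the cost table below, within rounding.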

The analysis indicates that Amazon EMR 7.1 delivers 2.2 times better price-performance than open-source Spark 3.5.1 and Iceberg 1.5.2 on the same benchmark, a significant cost saving.

Metric | Amazon EMR 7.1 | Open-source Spark 3.5.1 + Iceberg 1.5.2
Runtime (hours) | 0.564 | 1.548
Number of EC2 instances | 9 | 9
Root EBS volume size | 20 GB | 20 GB
Amazon EC2 cost | $5.85 | $16.05
Amazon EBS cost | $0.01 | $0.04
Amazon EMR cost | $1.37 | $0
Total cost | $7.23 | $16.09
Cost savings | 2.2 times better | Baseline

Amazon EMR 7.1 also demonstrated significant improvements in data-scanning efficiency, with Spark event logs indicating a 67% reduction in data scanned from Amazon S3 and a 44% reduction compared to the open-source setup on the TPC-DS 3 TB benchmark. Reduced Amazon S3 scanning directly lowers costs for Amazon EMR workloads.

Next, we describe how we ran the baseline benchmark with open-source Apache Spark against Iceberg tables.

We used separate EC2 clusters, each with nine r5d.4xlarge instances, to test open-source Spark 3.5.1 with Iceberg 1.5.2 and Amazon EMR 7.1. The primary node has 16 vCPUs and 128 GB of memory, and the eight worker nodes together provide 128 vCPUs and 1 TB of memory. To reflect a typical user experience, we used Amazon EMR's default settings and made minimal adjustments to the Spark and Iceberg configurations for a fair comparison.

The following table summarizes the Amazon EC2 configuration for the primary node and eight worker nodes, all using the r5d.4xlarge instance type.

vCPU | Memory (GiB) | Instance storage | Root EBS volume
16 | 128 | 2 x 300 NVMe SSD | 20 GB

Prerequisites

Complete the following prerequisite steps to run the benchmark:

  1. Have the TPC-DS source data available in your Amazon S3 bucket and on your local computer.
  2. Copy the benchmark application to your Amazon S3 bucket, for example with the AWS CLI command aws s3 cp or aws s3 sync.
  3. CREATE TABLE customer_address (
    c_cdemo_sk integer,
    c_address_line1 varchar(1024),
    c_city varchar(20),
    c_region varchar(20),
    c_postal_code char(10),
    c_county varchar(50),
    c_nation_key integer,
    c_state_province varchar(40),
    c_country varchar(3));

    CREATE TABLE customer_demographics (
    c_customer_sk integer,
    c_current_cust_age integer,
    c_marital_status_group char(1),
    c_income_group integer,
    c_education_level char(1),
    c_wed_marr_status char(1),
    c_race_type char(1));

    CREATE TABLE nation (
    n_nation_key integer,
    n_region_key integer,
    n_name varchar(50),
    n_comment text);

    CREATE TABLE region (
    r_region_key integer,
    r_name varchar(20));

    Create the Iceberg tables using the Hadoop catalog. The following script uses an Amazon EMR 7.1 cluster to create the tables.

aws emr add-steps --cluster-id  --steps 'Type=Spark,Name="Create Iceberg Tables",Args=[--class com.amazonaws.eks.tpcds.CreateIcebergTables, --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions, --conf spark.sql.catalog.hadoop_catalog=org.apache.iceberg.spark.SparkCatalog, --conf spark.sql.catalog.hadoop_catalog.type=hadoop, --conf spark.sql.catalog.hadoop_catalog.warehouse=s3:///, --conf spark.sql.catalog.hadoop_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO, s3:////spark-benchmark-assembly-3.5.1.jar, s3:///blogpost-sparkoneks-us-east-1/blog/BLOG_TPCDS-TEST-3T-partitioned/, /home/hadoop/tpcds-kit/tools parquet 3000 true , true true], ActionOnFailure=CONTINUE' --region 

Note the location of the Hadoop catalog warehouse specified previously. We use identical table schemas for the benchmarks on both Amazon EMR 7.1 and open-source Spark and Iceberg.

The benchmark application is built from the branch. If you are building a new benchmark application, switch to the correct branch after cloning the source code from the GitHub repository.

Set up an open-source Spark cluster on Amazon EC2

To compare Iceberg performance between Amazon EMR on Amazon EC2 and open-source Spark on Amazon EC2 on equal terms, follow the instructions to launch an open-source Spark cluster on Amazon EC2 using Flintrock with eight worker nodes.

The following settings are used, based on the cluster chosen for this purpose.

Run the TPC-DS benchmark with Apache Spark 3.5.1 and Iceberg 1.5.2

Complete the following steps to run the benchmark:

  1. Log in to the open-source cluster's primary node using flintrock login $CLUSTER_NAME.
  2. Submit your Spark job:
    1. Update the Iceberg catalog warehouse location and database so they point to the Iceberg tables you created earlier.

    2. The results are created in s3:///benchmark_run.
    3. You can monitor progress in /media/ephemeral0/spark_run.log.
spark-submit  --master yarn  --deploy-mode client  --class com.amazonaws.eks.tpcds.BenchmarkSQL  --conf spark.driver.cores=4  --conf spark.driver.memory=10g  --conf spark.executor.cores=16  --conf spark.executor.memory=100g  --conf spark.executor.instances=8  --conf spark.network.timeout=2000  --conf spark.executor.heartbeatInterval=300s  --conf spark.dynamicAllocation.enabled=false  --conf spark.shuffle.service.enabled=false  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider  --conf spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  --conf spark.jars.packages=org.apache.hadoop:hadoop-aws:3.3.4,org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.iceberg:iceberg-aws-bundle:1.5.2  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions  --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog  --conf spark.sql.catalog.local.type=hadoop  --conf spark.sql.catalog.local.warehouse=s3a:////  --conf spark.sql.defaultCatalog=local  --conf spark.sql.catalog.local.io-impl=org.apache.iceberg.aws.s3.S3FileIO  spark-benchmark-assembly-3.5.1.jar  s3:///benchmark_run 3000 1 false  q1-v2.13,q10-v2.13,q11-v2.13,q12-v2.13,q13-v2.13,q14a-v2.13,q14b-v2.13,q15-v2.13,q16-v2.13,q17-v2.13,q18-v2.13,q19-v2.13,q2-v2.13,q20-v2.13,q21-v2.13,q22-v2.13,q23a-v2.13,q23b-v2.13,q24a-v2.13,q24b-v2.13,q25-v2.13,q26-v2.13,q27-v2.13,q28-v2.13,q29-v2.13,q3-v2.13,q30-v2.13,q31-v2.13,q32-v2.13,q33-v2.13,q34-v2.13,q35-v2.13,q36-v2.13,q37-v2.13,q38-v2.13,q39a-v2.13,q39b-v2.13,q4-v2.13,q40-v2.13,q41-v2.13,q42-v2.13,q43-v2.13,q44-v2.13,q45-v2.13,q46-v2.13,q47-v2.13,q48-v2.13,q49-v2.13,q5-v2.13,q50-v2.13,q51-v2.13,q52-v2.13,q53-v2.13,q54-v2.13,q55-v2.13,q56-v2.13,q57-v2.13,q58-v2.13,q59-v2.13,q6-v2.13,q60-v2.13,q61-v2.13,q62-v2.13,q63-v2.13,q64-v2.13,q65-v2.13,q66-v2.13,q67-v2.13,q68-v2.13,q69-v2.13,q7-v2.13,q70-v2.13,q71-v2.13,q72-v2.13,q73-v2.13,q74-v2.13,q75-v2.13,q76-v2.13,q77-v2.13,q78-v2.13,q79-v2.13,q8-v2.13,q80-v2.13,q81-v2.13,q82-v2.13,q83-v2.13,q84-v2.13,q85-v2.13,q86-v2.13,q87-v2.13,q88-v2.13,q89-v2.13,q9-v2.13,q90-v2.13,q91-v2.13,q92-v2.13,q93-v2.13,q94-v2.13,q95-v2.13,q96-v2.13,q97-v2.13,q98-v2.13,q99-v2.13,ss_max-v2.13  true > /media/ephemeral0/spark_run.log 2>&1 &

Summarize the results

After the Spark job completes, retrieve the results file from the output S3 bucket at s3:///benchmark_run/timestamp=xxxx/summary.csv/xxx.csv. You can do this through the Amazon S3 console or with the AWS Command Line Interface (CLI). The benchmark application creates a timestamped directory and places a summary file in a folder named summary.csv. The output CSV file contains four columns without headers:

  • Query name
  • Median time
  • Minimum time
  • Maximum time

From three separate runs with one iteration each, we compute the arithmetic mean and the geometric mean of the benchmark runtimes.
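As a minimal sketch of this aggregation step, the headerless summary rows (query name, median, min, max, per the column list above) can be parsed and reduced to the two means; the sample rows here are made up for illustration:

```python
# Sketch: aggregating summary.csv rows into arithmetic and geometric means.
# The two sample rows below are fabricated; real files come from the benchmark.
import csv
import io
import math

sample = "q1-v2.13,10.0,9.0,11.0\nq2-v2.13,40.0,38.0,44.0\n"

# Column 1 is the median runtime for each query (columns are headerless).
medians = [float(row[1]) for row in csv.reader(io.StringIO(sample))]

arith = sum(medians) / len(medians)
geo = math.exp(sum(math.log(t) for t in medians) / len(medians))
print(round(arith, 2), round(geo, 2))  # 25.0 20.0
```

The geometric mean is the figure reported in the metrics table above; it dampens the influence of a few very long queries compared to the arithmetic mean.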

Run the TPC-DS benchmark with the Amazon EMR runtime for Spark and Iceberg


Most of the steps mirror the open-source run, with minor differences for a few Iceberg-specific settings.

Prerequisites

Complete the following prerequisite steps:

  1. Run aws configure to set the AWS CLI default to the benchmark AWS account. Refer to for instructions.
  2. Upload the benchmark application JAR file to Amazon S3.

Deploy the EMR cluster and run the benchmark job

To successfully execute the benchmark job, proceed as follows:

  1. Create an Amazon EMR cluster using the aws emr create-cluster command. Be certain to enable Iceberg; see for extra details. Choose the same Amazon EMR release, root volume size, and resource configuration as the open-source Flintrock setup. Consult with for a detailed description of the available AWS CLI options.

  2. Note the cluster ID from the response; you need it for the next step.
  3. Submit the benchmark job on Amazon EMR using add-steps from the AWS CLI:

    1. Use the cluster ID obtained in Step 2.
    2. The benchmark utility is at s3:///spark-benchmark-assembly-3.5.1.jar.
    3. Choose the correct Iceberg catalog warehouse location and database; these should be identical to the ones used for the open-source TPC-DS benchmark run.

    4. The results will be in s3:///benchmark_run.
aws emr add-steps --cluster-id  --steps Type=Spark,Name="SPARK Iceberg EMR TPCDS Benchmark Job",Args=[--class,com.amazonaws.eks.tpcds.BenchmarkSQL,--conf,spark.driver.cores=4,--conf,spark.driver.memory=10g,--conf,spark.executor.cores=16,--conf,spark.executor.memory=100g,--conf,spark.executor.instances=8,--conf,spark.network.timeout=2000,--conf,spark.executor.heartbeatInterval=300s,--conf,spark.dynamicAllocation.enabled=false,--conf,spark.shuffle.service.enabled=false,--conf,spark.sql.iceberg.data-prefetch.enabled=true,--conf,spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,--conf,spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog,--conf,spark.sql.catalog.local.type=hadoop,--conf,spark.sql.catalog.local.warehouse=s3://,--conf,spark.sql.defaultCatalog=local,--conf,spark.sql.catalog.local.io-impl=org.apache.iceberg.aws.s3.S3FileIO,s3:///spark-benchmark-assembly-3.5.1.jar,s3:///benchmark_run,3000,1,false,'q1-v2.13,q10-v2.13,q11-v2.13,q12-v2.13,q13-v2.13,q14a-v2.13,q14b-v2.13,q15-v2.13,q16-v2.13,q17-v2.13,q18-v2.13,q19-v2.13,q2-v2.13,q20-v2.13,q21-v2.13,q22-v2.13,q23a-v2.13,q23b-v2.13,q24a-v2.13,q24b-v2.13,q25-v2.13,q26-v2.13,q27-v2.13,q28-v2.13,q29-v2.13,q3-v2.13,q30-v2.13,q31-v2.13,q32-v2.13,q33-v2.13,q34-v2.13,q35-v2.13,q36-v2.13,q37-v2.13,q38-v2.13,q39a-v2.13,q39b-v2.13,q4-v2.13,q40-v2.13,q41-v2.13,q42-v2.13,q43-v2.13,q44-v2.13,q45-v2.13,q46-v2.13,q47-v2.13,q48-v2.13,q49-v2.13,q5-v2.13,q50-v2.13,q51-v2.13,q52-v2.13,q53-v2.13,q54-v2.13,q55-v2.13,q56-v2.13,q57-v2.13,q58-v2.13,q59-v2.13,q6-v2.13,q60-v2.13,q61-v2.13,q62-v2.13,q63-v2.13,q64-v2.13,q65-v2.13,q66-v2.13,q67-v2.13,q68-v2.13,q69-v2.13,q7-v2.13,q70-v2.13,q71-v2.13,q72-v2.13,q73-v2.13,q74-v2.13,q75-v2.13,q76-v2.13,q77-v2.13,q78-v2.13,q79-v2.13,q8-v2.13,q80-v2.13,q81-v2.13,q82-v2.13,q83-v2.13,q84-v2.13,q85-v2.13,q86-v2.13,q87-v2.13,q88-v2.13,q89-v2.13,q9-v2.13,q90-v2.13,q91-v2.13,q92-v2.13,q93-v2.13,q94-v2.13,q95-v2.13,q96-v2.13,q97-v2.13,q98-v2.13,q99-v2.13,ss_max
-v2.13',true,],ActionOnFailure=CONTINUE --region 

Summarize the results

After the step completes, you can view the summarized benchmark results at s3:///benchmark_run/timestamp=xxxx/summary.csv/xxx.csv. As with the previous run, we compute the average and geometric mean of the query runtimes.

Clean up

To avoid future charges, delete the resources you created by following the instructions in the documentation.

Summary

Amazon EMR continually improves the EMR runtime for Spark when used with Iceberg tables, achieving performance 2.7 times faster than open-source Spark 3.5.1 and Iceberg 1.5.2 on TPC-DS 3 TB (v2.13). To benefit from ongoing improvements, we recommend staying current with the latest Amazon EMR releases.

To stay up to date, subscribe to the AWS Big Data Blog, where you will find updates on the EMR runtime for Spark and Iceberg, along with configuration best practices and tuning recommendations.


About the authors

Serves as a software development engineer for Amazon EMR at Amazon Web Services.

Serves as an Engineering Manager for EMR at Amazon Web Services.

AWS Delivers Major Infrastructure Boosts: Parallel Computing Service Launched, EC2 Status Checks Enhanced, and More


With September just three months away, I am excited about the new services and announcements coming at the conference. I had planned to attend re:Invent 2019, just before the onset of the COVID-19 pandemic. At its most recent edition, Amazon's flagship event, re:Invent, drew a record-breaking crowd of over 60,000 attendees, my second time at this premier gathering, and it was a delight to be immersed in that atmosphere. Registration is now open for AWS re:Invent 2024. Join us in Las Vegas for five days of keynotes, breakout sessions, chalk talks, hands-on learning, and career-enriching connections.

Last week's launches

The following launches caught my attention:

AWS Parallel Computing Service (AWS PCS) is a managed service that makes it easier to run and scale high performance computing (HPC) workloads in the cloud. You can build scientific and engineering models and run simulations through a fully managed scheduler, with technical support and broad customization options. Tailor your HPC environment to your needs by integrating your preferred software stack, and build complete HPC clusters that bring together compute, storage, networking, and visualization, scaling from zero to hundreds of nodes. To learn more, visit and review.

You can now use status checks to instantly verify whether the volumes attached to your instances are reachable and able to complete I/O operations. With this new status check, you can quickly detect attachment issues or volume impairments that may impact the performance of applications running on your Amazon EC2 instances. You can also use these status checks within Auto Scaling groups to monitor the health of EC2 instances and automatically replace impaired instances, ensuring high availability and reliability for your applications. Using attached EBS status checks together with instance and system status checks gives you comprehensive visibility into instance health. Refer to the documentation to learn more.
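To illustrate how an attached EBS status check could factor into a health decision, here is a hedged Python sketch. The response shape loosely mirrors the EC2 DescribeInstanceStatus output, but the AttachedEbsStatus key and the instance ID are assumptions made for illustration:

```python
# Sketch: decide whether an EC2 instance is healthy based on its status
# checks, including the new attached EBS status check. The dictionary shape
# and the "AttachedEbsStatus" key are assumptions for illustration only.

def is_instance_healthy(status_entry):
    """Return True only if instance, system, and attached EBS checks pass."""
    checks = (
        status_entry.get("InstanceStatus", {}).get("Status"),
        status_entry.get("SystemStatus", {}).get("Status"),
        status_entry.get("AttachedEbsStatus", {}).get("Status"),
    )
    return all(s == "ok" for s in checks)

# Simulated response with one instance whose attached volume is impaired.
response = {
    "InstanceStatuses": [
        {
            "InstanceId": "i-0123456789abcdef0",  # hypothetical ID
            "InstanceStatus": {"Status": "ok"},
            "SystemStatus": {"Status": "ok"},
            "AttachedEbsStatus": {"Status": "impaired"},
        }
    ]
}
unhealthy = [
    e["InstanceId"]
    for e in response["InstanceStatuses"]
    if not is_instance_healthy(e)
]
print(unhealthy)  # ['i-0123456789abcdef0']
```

In practice, an Auto Scaling group performs this replacement decision for you; the sketch only shows the idea of combining all three check types into one health verdict.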

You can now share views of embedded dashboards in Amazon QuickSight with others. This capability lets you build collaboration into your application through embedded QuickSight dashboards, and you can also enable personalization features such as bookmarks for anonymous users. You can share a unique link that shows only your changes while staying within the application: with dashboard or console embedding, generate a shareable link to your application page with the QuickSight view reference encapsulated in it. Readers can then pass the link to colleagues; when a colleague opens the shared link, they land on the page of your application that contains the embedded QuickSight dashboard. To learn more, refer to the documentation.

Amazon Q Business, a fully managed service that uses generative AI to work with your organization's proprietary data, now supports IAM federation. You can connect your applications directly to your identity provider for user authentication, with the required permissions and attributes attached. Previously, you had to sync user identity information from your identity provider into AWS IAM Identity Center and connect your Amazon Q Business applications to IAM Identity Center for authentication. At launch, Amazon Q Business IAM federation supports OpenID Connect (OIDC) and Security Assertion Markup Language 2.0 (SAML 2.0) identity providers. To learn more, refer to the documentation.

Support for cross-Region inference has been announced, a capability that helps you handle traffic bursts by utilizing compute across multiple AWS Regions. With on-demand mode, cross-Region inference gives you up to double your allocated in-Region quotas and improved resilience during periods of peak demand. By opting in, you no longer need to spend time and effort predicting demand fluctuations: the service dynamically routes traffic across multiple Regions, making the most of available resources and throughput during usage spikes. You select from a predefined set of Regions, helping you control where inference data flows to meet data residency and sovereignty requirements. Find the list at . To get started, refer to either of these resources.
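The routing behavior described above can be pictured as a simple failover loop. This is a conceptual sketch only, not the actual service API; the Region names and the capacity check are invented for illustration:

```python
# Conceptual sketch of cross-Region failover routing: try the preferred
# Region first, then spill over to the other Regions in the configured set.
# The Region list and the capacity check are illustrative, not a real API.

def route_request(regions, has_capacity):
    """Return the first Region in the configured set that can take the call."""
    for region in regions:
        if has_capacity(region):
            return region
    raise RuntimeError("no capacity in any configured Region")

configured = ["us-east-1", "us-west-2", "us-east-2"]
# Simulate the home Region being throttled during a traffic burst.
saturated = {"us-east-1"}
chosen = route_request(configured, lambda r: r not in saturated)
print(chosen)  # us-west-2
```

Constraining `configured` to a predefined set is what lets you keep inference traffic inside Regions that satisfy your residency requirements.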

Additional services and instance types are now available in more Regions:

  • C6g instances excel at compute-intensive workloads such as high performance computing (HPC), batch processing, and CPU-based machine learning inference. R6gd instances are designed for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.
  • Amazon OpenSearch Serverless offers a serverless deployment option for search and analytics workloads, with no infrastructure to manage.
  • AWS Global Accelerator helps businesses improve the availability and performance of applications for global users by reducing latency, improving throughput, and enhancing overall network efficiency.
  • Amazon Redshift Serverless lets organizations run and scale analytics without having to set up and manage data warehouse infrastructure.
  • Amazon OpenSearch Service now supports AWS Graviton3 instances, delivering up to a 25% performance improvement over Graviton2-based instances.
  • Conformance packs let you bundle rules and their remediation steps into a single package, simplifying deployment at scale by keeping everything in one place. They are now available in the following Regions: Asia Pacific (Jakarta), Africa (Cape Town), Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Osaka), Europe (Milan), Europe (Zurich), Canada West (Calgary), Israel (Tel Aviv), Europe (Spain), and the AWS GovCloud (US-East and US-West) Regions.

Explore collaborative spaces and immersive experiences that showcase AWS cloud and AI capabilities, giving startups and developers hands-on access to AI services and solutions, mentorship from industry leaders, and networking with peers and partners. And don't forget to register.

Credit: Antje Barth

Check your calendar and sign up for upcoming AWS events.

AWS Summits are free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. As this year's AWS Summits season draws to a close, three dates remain open for registration: September 5, September 11, and October 9.

AWS Community Days feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world. With AWS Summits 2024 wrapping up, Community Days are now in full swing: events are scheduled for September 6 and 13, with our colleague Antje Barth delivering a keynote at one of them, and on September 14, two events take place on the same day.

Browse all upcoming 

That's all for this week. Check back next Monday for another Weekly Roundup!

Xbox's August Update rolls out, bringing improvements for cloud gaming, consoles, PC gaming, and accessories.


The Xbox team is introducing new ways to improve the gaming experience on Xbox, informed by player feedback. This month's updates bring new experiences across Xbox Cloud Gaming (Beta), consoles, PC, and accessories.

The Xbox app is now available on more Fire TV devices, including the Fire TV Stick 4K Max (1st Gen) and Fire TV Cube (3rd Gen).

Earlier this year, Xbox partnered with Amazon to bring more gaming flexibility to players worldwide. Through this partnership, Xbox Game Pass Ultimate members in over 25 countries can play a wide range of games directly from the Xbox app on select Fire TV devices via cloud gaming. The Xbox app launched on Fire TV with support for the Fire TV Stick 4K Max (2nd Gen) and Fire TV Stick 4K (2nd Gen), and is now also available on the Fire TV Stick 4K Max (1st Gen) and Fire TV Cube (3rd Gen). This expansion gives players more ways to play their favorite games on devices they already own.

Getting started is simple. Download the Xbox app on your compatible Fire TV device from the Amazon Appstore. When you launch the app, sign in with your Microsoft account. As an Xbox Game Pass Ultimate member, you'll have instant access to stream and play a large library of cloud-enabled games.

Not a member yet? You can join by downloading the app.

Connect a supported Bluetooth controller and you're ready to dive in. The Xbox Wireless Controller, Xbox Adaptive Controller, DualSense, and DualShock 4 all pair seamlessly. Once connected, you're all set!

Discover which Discord friends are online and join active servers directly from your Xbox.

You're now notified whenever a friend on Discord is in a stream, voice chat, or activity, and you can join voice chats or watch live streams directly from your Xbox console without needing the Discord app on a separate device.

When you link your Discord account with your Xbox profile, you can choose to add your Discord friends to your Friends list. To link the two, navigate to the Friends section within your Xbox settings.

When your Discord friends are playing games or participating in voice chats, they'll appear under "Happening Now".


Stream live games on Discord directly to your Xbox console.

With a linked Discord account, you can now stream a friend's gameplay from an active voice channel directly to your Xbox console. Want to broadcast a moment of triumph? You can also stream your own gameplay to your Discord friends.


Customize your game installs on Xbox consoles and PC.

When installing a game that offers optional components on your Xbox console or through the Xbox app on PC, you can either install everything or customize your download, saving time and storage space by selecting only the parts you want.


More ways to personalize your controller.

Two new customization options are arriving for Xbox controllers: a toggle hold feature and a shortcut to turn off your devices.

The toggle hold feature, available on the Xbox Elite Wireless Controller Series 2 and the Xbox Adaptive Controller, works much like Sticky Keys on Windows. When enabled, tapping a button once registers as holding it down, making it easier to perform actions that would otherwise require pressing multiple buttons at the same time. To enable toggle hold on your Xbox controller, open the Xbox Accessories app, select a profile to customize or create a new one, press the "..." button, and choose the setting you want to adjust.

You can now turn off devices you have finished using with a simple shortcut. With your device powered on and connected to your Xbox console, open the Xbox Accessories app, navigate to the device's menu (), and select it.


What's next for Xbox? More updates and features are on the way across cloud gaming, consoles, PC, and accessories, shaped by your feedback.

Stay tuned to Xbox Wire for ongoing updates and news on everything Xbox. For help with Xbox updates, visit https://support.xbox.com/en-US/help-hub.

What do you think, community? Whether you'd like to propose a new feature or suggest how existing ones could be improved, your opinions matter to us. We're always looking for ways to make Xbox better for players around the world. Want to help shape the future of Xbox and get early access to new features? Download the Xbox Insider Hub app for Android or iOS and join in. Tell us what you think!