
Apple Podcasts follows Maps' lead by coming to your web browser

Apple Podcasts has followed Apple Maps by launching a web version, letting users subscribe to and listen to their favorite shows inside a web browser.

Announced on August 19, the web version of Apple Podcasts is available in all web browsers, such as Google Chrome, Mozilla Firefox, and Microsoft Edge, in 170 countries. You don't need an Apple Account to use the web app; you can search for a show and listen to an episode in seconds. However, if you do sign in with your Apple Account, your previously subscribed shows and playback progress will be synced to the web app.



Google Online Security Blog: Virtual Escape; Real Reward: Introducing Google's kvmCTF


Google is committed to enhancing the security of open-source technologies, especially those that make up the foundation of many of our products, like Linux and KVM. To this end, we're excited to announce the launch of kvmCTF, a vulnerability reward program (VRP) for the Kernel-based Virtual Machine (KVM) hypervisor first announced in October 2023.

KVM is a robust hypervisor with over 15 years of open-source development, and it is widely used throughout the consumer and enterprise landscape, including platforms such as Android and Google Cloud. Google is an active contributor to the project, and we designed kvmCTF as a collaborative way to help identify and remediate vulnerabilities and further harden this fundamental security boundary.

Similar to kernelCTF, kvmCTF is a vulnerability reward program designed to help identify and address vulnerabilities in the Kernel-based Virtual Machine (KVM) hypervisor. It offers a lab environment where participants can log in and use their exploits to obtain flags. Significantly, kvmCTF focuses on zero-day vulnerabilities; as a result, we will not reward exploits that use n-day vulnerabilities. Details regarding the zero-day vulnerability will be shared with Google after an upstream patch is released, to ensure that Google obtains them at the same time as the rest of the open-source community. Additionally, kvmCTF uses the Google Bare Metal Solution (BMS) environment to host its infrastructure. Finally, given how critical a hypervisor is to overall system security, kvmCTF will reward various levels of vulnerabilities, up to and including code execution and VM escape.

How it works

The environment consists of a bare metal host running a single guest VM. Participants will be able to reserve time slots to access the guest VM and attempt to perform a guest-to-host attack. The goal of the attack must be to exploit a zero-day vulnerability in the KVM subsystem of the host kernel. If successful, the attacker will obtain a flag that proves their accomplishment in exploiting the vulnerability. The severity of the attack will determine the reward amount, which will be based on the reward tier system explained below. All reports will be thoroughly evaluated on a case-by-case basis.

The reward tiers are as follows:

  • Full VM escape: $250,000
  • Arbitrary memory write: $100,000
  • Arbitrary memory read: $50,000
  • Relative memory write: $50,000
  • Denial of service: $20,000
  • Relative memory read: $10,000

To facilitate the relative memory write/read tiers and, in part, the denial-of-service tier, kvmCTF offers the option of using a host with KASAN enabled. In that case, triggering a KASAN violation will allow the participant to obtain a flag as proof.

How to participate

To begin, start by reading the rules of the program. There you will find information on how to reserve a time slot, connect to the guest and obtain the flags, the mapping of the various KASAN violations to the reward tiers, and instructions on how to report a vulnerability, send us your submission, or contact us on Discord.

Implement data quality checks on Amazon Redshift data assets and integrate with Amazon DataZone

Data quality is crucial in data pipelines because it directly impacts the validity of the business insights derived from the data. Today, many organizations use AWS Glue Data Quality to define and enforce data quality rules on their data at rest and in transit. However, one of the most pressing challenges organizations face is providing users with visibility into the health and reliability of their data assets. This is particularly important in the context of business data catalogs using Amazon DataZone, where users rely on the trustworthiness of the data for informed decision-making. As the data gets updated and refreshed, there is a risk of quality degradation due to upstream processes.

Amazon DataZone is a data management service designed to streamline data discovery, data cataloging, data sharing, and governance. It allows your organization to have a single secure data hub where everyone in the organization can find, access, and collaborate on data across AWS, on premises, and even third-party sources. It simplifies data access for analysts, engineers, and business users, allowing them to discover, use, and share data seamlessly. Data producers (data owners) can add context and control access through predefined approvals, providing secure and governed data sharing. The following diagram illustrates the Amazon DataZone high-level architecture. To learn more about the core components of Amazon DataZone, refer to Amazon DataZone terminology and concepts.


To address the challenge of data quality, Amazon DataZone now integrates directly with AWS Glue Data Quality, allowing you to visualize data quality scores for AWS Glue Data Catalog assets directly within the Amazon DataZone web portal. You can access insights about data quality scores on various key performance indicators (KPIs) such as data completeness, uniqueness, and accuracy.

By providing a comprehensive view of the data quality validation rules applied to a data asset, you can make informed decisions about the suitability of specific data assets for their intended use. Amazon DataZone also integrates historical trends of the asset's data quality runs, giving full visibility into whether the quality of the asset improved or degraded over time. With the Amazon DataZone APIs, data owners can integrate data quality rules from third-party systems into a specific data asset. The following screenshot shows an example of data quality insights embedded in the Amazon DataZone business catalog. To learn more, see Amazon DataZone now integrates with AWS Glue Data Quality and external data quality solutions.

In this post, we show how to capture the data quality metrics for data assets produced in Amazon Redshift.

Amazon Redshift is a fast, scalable, and fully managed cloud data warehouse that allows you to process and run your complex SQL analytics workloads on structured and semi-structured data. Amazon DataZone natively supports data sharing for Amazon Redshift data assets.

With Amazon DataZone, the data owner can directly import the technical metadata of a Redshift database table and views into the Amazon DataZone project's inventory. Because these data assets are imported into Amazon DataZone directly, they bypass the AWS Glue Data Catalog, creating a gap in data quality integration. This post proposes a solution to enrich the Amazon Redshift data asset with data quality scores and KPI metrics.

Solution overview

The proposed solution uses AWS Glue Studio to create a visual extract, transform, and load (ETL) pipeline for data quality validation, and a custom visual transform to publish the data quality results to Amazon DataZone. The following screenshot illustrates this pipeline.


The pipeline starts by establishing a connection directly to Amazon Redshift, and then applies the data quality rules defined in AWS Glue based on the organization's business needs. After applying the rules, the pipeline validates the data against them. The outcome of the rules is then pushed to Amazon DataZone using a custom visual transform that calls the Amazon DataZone APIs.

The custom visual transform in the data pipeline makes the complex Python logic reusable, so data engineers can encapsulate this module in their own data pipelines to publish the data quality results. The transform can be used independently of the source data being analyzed.

Each business unit can use this solution while retaining full autonomy in defining and applying its own data quality rules tailored to its specific domain. These rules maintain the accuracy and integrity of their data. The prebuilt custom transform acts as a central component for each of these business units, which can reuse the module in their domain-specific pipelines, thereby simplifying the integration. To publish the domain-specific data quality results using the custom visual transform, each business unit simply reuses the code libraries and configures parameters such as the Amazon DataZone domain, the role to assume, and the name of the table and schema in Amazon DataZone where the data quality results need to be posted.

In the following sections, we walk through the steps to publish the AWS Glue Data Quality score and results for your Redshift table to Amazon DataZone.

Prerequisites

To follow along, you should have the following:

The solution uses a custom visual transform to publish the data quality scores from AWS Glue Studio. For more information, refer to Create your own reusable visual transforms for AWS Glue Studio.

A custom visual transform lets you define, reuse, and share business-specific ETL logic with your teams. Each business unit can apply its own data quality checks relevant to its domain and reuse the custom visual transform to push the data quality result to Amazon DataZone and integrate the data quality metrics with its data assets. This eliminates the risk of inconsistencies that can arise when similar logic is written in different code bases, and it supports a faster development cycle and improved efficiency.

For the custom transform to work, you need to upload two files to an Amazon Simple Storage Service (Amazon S3) bucket in the same AWS account where you intend to run AWS Glue. Download the following files:

Copy the downloaded files to your AWS Glue assets S3 bucket in the transforms folder (s3://aws-glue-assets-/transforms). By default, AWS Glue Studio reads all JSON files from the transforms folder in the same S3 bucket.
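
If you prefer to script this step, the following is a minimal sketch using boto3 (the bucket name is a placeholder for your AWS Glue assets bucket, whose account and Region suffix is elided in the path above):

    import boto3

    # Placeholder bucket name; substitute your own AWS Glue assets bucket.
    BUCKET = "aws-glue-assets-<account-id>-<region>"

    s3 = boto3.client("s3")
    for file_name in ("post_dq_results_to_datazone.json",
                      "post_dq_results_to_datazone.py"):
        # AWS Glue Studio discovers custom transforms under the transforms/ prefix.
        s3.upload_file(file_name, BUCKET, f"transforms/{file_name}")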


In the following sections, we walk you through the steps of building an ETL pipeline for data quality validation using AWS Glue Studio.

Create a new AWS Glue visual ETL job

You can use AWS Glue for Spark to read from and write to tables in Redshift databases. AWS Glue provides built-in support for Amazon Redshift. On the AWS Glue console, choose Author and edit ETL jobs to create a new visual ETL job.

Establish an Amazon Redshift connection

In the job pane, choose Amazon Redshift as the source. For Redshift connection, choose the connection created as a prerequisite, then specify the relevant schema and table on which the data quality checks need to be applied.
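
Under the hood, this visual source node corresponds to AWS Glue for Spark code along the following lines. This is an illustrative sketch, not the generated script; the connection name, table, and temporary directory are placeholders:

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read the Redshift table through the preconfigured Glue connection.
    redshift_source = glue_context.create_dynamic_frame.from_options(
        connection_type="redshift",
        connection_options={
            "useConnectionProperties": "true",
            "connectionName": "redshift-connection",   # placeholder
            "dbtable": "public.customer",               # placeholder schema.table
            "redshiftTmpDir": "s3://<your-temp-bucket>/temporary/",
        },
    )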


Apply data quality rules and validation checks on the source

The next step is to add the Evaluate Data Quality node to your visual job editor. This node allows you to define and apply domain-specific data quality rules relevant to your data. After the rules are defined, you can choose to output the data quality results. The results of these rules can be stored in an Amazon S3 location. You can additionally choose to publish the data quality results to Amazon CloudWatch and set alert notifications based on thresholds.
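
For illustration, a ruleset in Data Quality Definition Language (DQDL) could look like the following; the column names are hypothetical and only demonstrate completeness, uniqueness, and value-range style checks:

    Rules = [
        IsComplete "customer_id",
        Uniqueness "customer_id" > 0.99,
        ColumnValues "age" between 0 and 120,
        RowCount > 0
    ]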

Preview data quality results

Choosing the data quality results automatically adds the new node ruleOutcomes. A preview of the data quality results from the ruleOutcomes node is shown in the following screenshot. The node outputs the data quality results, including the outcome of each rule and its failure reason.
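
The same outcomes can also be inspected programmatically. A small sketch, assuming the node's output is bound to a DynamicFrame named rule_outcomes in the generated script:

    # Rule, Outcome, and FailureReason are among the columns emitted by the
    # Evaluate Data Quality transform's rule-level results.
    rule_outcomes.toDF().select("Rule", "Outcome", "FailureReason").show(truncate=False)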


Publish the data quality results to Amazon DataZone

The output of the ruleOutcomes node is then passed to the custom visual transform. After both files are uploaded, the AWS Glue Studio visual editor automatically lists the transform as specified in post_dq_results_to_datazone.json (in this case, Datazone DQ Result Sink) among the other transforms. Additionally, AWS Glue Studio parses the JSON definition file to display the transform metadata, such as its name, description, and list of parameters. In this case, it lists parameters such as the role to assume, the domain ID of the Amazon DataZone domain, and the table and schema name of the data asset.
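
For reference, a custom visual transform is described to AWS Glue Studio by a JSON definition file of roughly the following shape. This is a trimmed, illustrative sketch of what post_dq_results_to_datazone.json might contain, not its actual contents:

    {
      "name": "post_dq_results_to_datazone",
      "displayName": "Datazone DQ Result Sink",
      "description": "Publish Evaluate Data Quality outcomes to Amazon DataZone",
      "functionName": "post_dq_results_to_datazone",
      "parameters": [
        {"name": "role_to_assume", "displayName": "Role to assume",
         "type": "str", "isOptional": true},
        {"name": "domain_id", "displayName": "Domain ID",
         "type": "str", "isOptional": false}
      ]
    }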

Fill in the parameters:

  • Role to assume is optional and can be left empty; it's only needed when your AWS Glue job runs in an associated account
  • For Domain ID, the ID for your Amazon DataZone domain can be found in the Amazon DataZone portal by choosing the user profile name
  • Table name and Schema name are the same ones you used when creating the Redshift source transform
  • Data quality ruleset name is the name you want to give to the ruleset in Amazon DataZone; you could have multiple rulesets for the same table
  • Max results is the maximum number of Amazon DataZone assets you want the script to return in case multiple matches are available for the same table and schema name

Edit the job details, and in the job parameters, add the following key-value pair to import the right version of Boto3 containing the latest Amazon DataZone APIs:

--additional-python-modules

boto3>=1.34.105

Finally, save and run the job.


The implementation logic for inserting the data quality values into Amazon DataZone is described in the post Amazon DataZone now integrates with AWS Glue Data Quality and external data quality solutions. In the post_dq_results_to_datazone.py script, we only adapted the code to extract the metadata from the AWS Glue Evaluate Data Quality transform results, and added methods to find the right DataZone asset based on the table information. You can review the code in the script if you are curious.
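
In essence, the script needs to locate the matching DataZone asset and attach the outcomes as a time series form. The following is a condensed sketch of that flow, not the actual script; the domain ID, table name, form name, and content layout are illustrative assumptions based on the Amazon DataZone time series APIs:

    import json
    from datetime import datetime, timezone

    import boto3

    DOMAIN_ID = "dzd_example123"    # placeholder DataZone domain ID
    TABLE_NAME = "public.customer"  # placeholder table to look up

    datazone = boto3.client("datazone")

    # 1. Find the DataZone asset matching the Redshift table name.
    search_response = datazone.search(
        domainIdentifier=DOMAIN_ID,
        searchScope="ASSET",
        searchText=TABLE_NAME,
        maxResults=5,
    )
    asset_id = search_response["items"][0]["assetItem"]["identifier"]

    # 2. Attach the data quality outcome to the asset as a time series form.
    datazone.post_time_series_data_points(
        domainIdentifier=DOMAIN_ID,
        entityIdentifier=asset_id,
        entityType="ASSET",
        forms=[{
            "formName": "rulesetOutcome",  # illustrative ruleset name
            "typeIdentifier": "amazon.datazone.DataQualityResultFormType",
            "timestamp": datetime.now(timezone.utc),
            "content": json.dumps({
                "passingPercentage": 83.0,
                "evaluationsCount": 6,
                "evaluations": [
                    {"types": ["Completeness"],
                     "description": 'IsComplete "customer_id"',
                     "status": "PASS"},
                ],
            }),
        }],
    )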

After the AWS Glue ETL job run is complete, you can navigate to the Amazon DataZone console and confirm that the data quality information is now displayed on the relevant asset page.

Conclusion

In this post, we demonstrated how you can use AWS Glue Data Quality and Amazon DataZone together to implement comprehensive data quality monitoring on your Amazon Redshift data assets. By integrating these two services, you can give data consumers valuable insights into the quality and reliability of the data, fostering trust and enabling self-service data discovery and more informed decision-making across your organization.

For those who’re trying to improve the info high quality of your Amazon Redshift atmosphere and enhance data-driven decision-making, we encourage you to discover the combination of AWS Glue Information High quality and Amazon DataZone, and the brand new preview for OpenLineage-compatible knowledge lineage visualization in Amazon DataZone. For extra data and detailed implementation steering, consult with the next assets:


About the Authors

Fabrizio Napolitano is a Principal Specialist Solutions Architect for DB and Analytics. He has worked in the analytics domain for the last 20 years, and has recently and quite suddenly become a Hockey Dad after moving to Canada.

Lakshmi Nair is a Senior Analytics Specialist Solutions Architect at AWS. She specializes in designing advanced analytics systems across industries. She focuses on crafting cloud-based data platforms, enabling real-time streaming, big data processing, and robust data governance.

Varsha Velagapudi is a Senior Technical Product Manager with Amazon DataZone at AWS. She focuses on improving the data discovery and curation required for data analytics. She is passionate about simplifying customers' AI/ML and analytics journey to help them succeed in their day-to-day tasks. Outside of work, she enjoys nature and outdoor activities, reading, and traveling.

APIs, SBOMs, and Static Analysis


As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recent publications from the SEI in the areas of application programming interfaces (APIs), software bills of materials (SBOMs), secure development, Architecture Analysis and Design Language (AADL), and static analysis.

These publications highlight the latest work from SEI technologists in these areas. This post includes a listing of each publication, its author(s), and links where they can be accessed on the SEI website.

Application Programming Interface (API) Vulnerabilities and Risks
by McKinley Sconiers-Hasan

Web-accessible application programming interfaces (APIs) are increasingly common, and they are often designed and implemented in a way that creates security risks. Building on a taxonomy from OWASP, this report describes 11 common vulnerabilities and three risks related to APIs, providing suggestions about how to fix or reduce their impact. Recommendations include using a standard API documentation process, using automated testing, and ensuring the security of the identity and access management system.
Read the SEI Special Report.

Software Bill of Materials (SBOM) Considerations for Operational Test & Evaluation Activities
by Michael Bandor

This white paper looks at potential roles for SBOMs within various Operational Test & Evaluation (OT&E) activities. It examines the history and background of SBOMs, recent developments (as of the creation of the white paper), general challenges and questions to ask, and five specific use cases. It closes with conclusions and recommendations.

SBOMs are currently in early and varying stages of adoption across industry and within the DoD. There are still issues with the quality (e.g., completeness, accuracy, currency) of the SBOMs being produced, as well as with adherence to the minimum essential elements identified by the U.S. Department of Commerce. Legacy systems as well as cloud-based systems present challenges for producing SBOMs. The DoD is currently developing proposed guidance for addressing the SBOM requirement across programs.

Given this early phase of adoption, it is recommended that SBOMs be used to augment, but not replace, the current methods used by Operational Test (OT) personnel in performing their testing functions, and that personnel not rely solely on SBOM information. These limitations are not intrinsic, and we can anticipate that SBOMs will prove increasingly essential and useful for OT activities.
Read the SEI white paper.

Secure Systems Don't Happen by Accident
by Timothy A. Chick

Most cybersecurity breaches are due to defects in design or code, including both coding and logic errors. The best way to address these challenges is to design and build more secure solutions. In this webcast, Tim Chick discusses how security can be an integral aspect of the entire software lifecycle. The key to success is to follow deliberate engineering practices focused on reducing security risks through the use of software assurance techniques.

What attendees will learn:

  • the importance of cybersecurity, including examples of security failures
  • qualities to look for when evaluating third-party software
  • the relationship between quality and security
  • engineering techniques used throughout the development lifecycle to reduce cyber risks

View the webcast.

Reachability of System Operation Modes in AADL
by Lutz Wrage

Components in an AADL (Architecture Analysis and Design Language) model can have modes that determine which subcomponents and connections are active. Transitions between modes are triggered by events originating from the modeled system's environment or from other components in the model. Modes and transitions can occur at any level of the component hierarchy. The combinations of component modes (called system operation modes, or SOMs) define the system's configurations. It is important to know which SOMs can actually occur in the system, especially in the area of system safety, because a system may contain components that should not be active at the same time, for example, a car's brake and accelerator. This report presents an algorithm that constructs the set of reachable SOMs for a given AADL model and the transitions between them.
Read the SEI Technical Report.
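
To make the reachability idea concrete, here is a small, deliberately simplified sketch (not the report's algorithm): each SOM is a tuple of per-component modes, and a breadth-first search explores which combinations events can actually produce:

    from collections import deque

    # Toy model: two components with event-triggered mode transitions.
    initial = {"brake": "idle", "accel": "idle"}
    transitions = {  # transitions[component][(mode, event)] -> next mode
        "brake": {("idle", "press_brake"): "braking", ("braking", "release"): "idle"},
        "accel": {("idle", "press_gas"): "accelerating", ("accelerating", "release"): "idle"},
    }
    events = ["press_brake", "press_gas", "release"]

    def reachable_soms(initial, transitions, events):
        start = tuple(sorted(initial.items()))
        seen, edges, queue = {start}, [], deque([start])
        while queue:
            som = queue.popleft()
            for event in events:
                # Every component that reacts to the event changes mode.
                nxt = tuple(sorted(
                    (c, transitions[c].get((m, event), m)) for c, m in som
                ))
                if nxt != som:
                    edges.append((som, event, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
        return seen, edges

    soms, som_edges = reachable_soms(initial, transitions, events)
    # Reveals, e.g., whether a braking+accelerating SOM is reachable.
    print(len(soms), "reachable SOMs")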

Automated Repair of Static Analysis Alerts
by David Svoboda

Developers know that static analysis helps make code more secure. However, heuristic static analysis tools often produce a large number of false positives, hindering their usefulness. In this podcast, David Svoboda, a software security engineer in the SEI's CERT Division, discusses Redemption, a new open-source tool from the SEI that automatically repairs common errors in C/C++ code identified by static analysis alerts, making code safer and static analysis less overwhelming.
Listen to/view the podcast.

Navigating Capability-Based Planning: The Benefits, Challenges, and Implementation Requirements
by Anandi Hira and William Nichols

Capability-based planning (CBP) defines a framework for acquisition and design that encompasses a comprehensive view of existing abilities and future needs, for the purpose of supporting strategic decisions about what is needed and how to effectively achieve it. Both business and government acquisition domains use CBP for financial success or to design well-balanced defense systems. Unsurprisingly, the definitions differ across these domains. This paper endeavors to reconcile these definitions to provide an overarching view of CBP, its potential, and the practical implementation of its principles.
Read the white paper.

My Story in Computing, with Sam Procter
by Sam Procter

Sam Procter, an SEI senior architecture researcher, started out studying computer science at the University of Nebraska, but he didn't like it. It wasn't until he took his first software engineering course that he knew he'd found his career path. In this SEI podcast, Procter discusses the early influences that shaped his career, the importance of embracing different types of diversity in his research and work, and the value of work-life balance.
Listen to/view the podcast.

Additional Resources

View the latest SEI research in the SEI Digital Library.
View the latest podcasts in the SEI Podcast Series.
View the latest installments in the SEI Webcast Series.

Iran's Charming Kitten Targets US Elections, Israeli Military


A threat group linked to Iran's Islamic Revolutionary Guard Corps (IRGC) has launched new cyberattacks against email accounts associated with the upcoming US presidential election, as well as high-profile military and other political targets in Israel. The activity, which predominantly comes in the form of socially engineered phishing campaigns, is in retaliation for Israel's ongoing military campaign in Gaza and the US's support for it, and is expected to continue as tensions rise in the region.

Google's Threat Analysis Group (TAG) detected and blocked "numerous" attempts by Iran-backed APT42, perhaps best known as Charming Kitten, to log in to the personal email accounts of about a dozen individuals affiliated with President Biden and with former President Trump, according to a blog post published yesterday. Targets of the activity included current and former US government officials as well as individuals associated with the respective campaigns.

Moreover, the threat group remains persistent in its ongoing efforts to compromise the personal accounts of individuals affiliated with the current US Vice President and now presidential candidate Kamala Harris, and former President Trump, "including current and former government officials and individuals associated with the campaign," according to the post.

The discovery comes as a Telegram-based bot service called "IntelFetch" has also been found to be aggregating compromised credentials linked to the DNC and Democratic Party websites.

Charming Kitten Bats Around Israeli Targets

In addition to the election-related attacks, TAG researchers have also been tracking numerous phishing campaigns against Israeli military and political targets, including people with connections to the defense sector, as well as diplomats, academics, and NGOs, which have ramped up significantly since April, according to the post.

Google recently took down multiple Google Sites pages created by the group "masquerading as a petition from the legitimate Jewish Agency for Israel calling on the Israeli government to enter into mediation to end the conflict," according to the post.

Charming Kitten also abused Google Sites in an April phishing campaign aimed at the Israeli military, defense sector, diplomats, academics, and civil society, sending emails that impersonated a journalist requesting comment on recent air strikes in order to target former senior Israeli military officials and an aerospace executive.

"Over the last six months, we have systematically disrupted these attackers' ability to abuse Google Sites in more than 50 similar campaigns," according to Google TAG.

One such campaign involved a phishing lure featuring an attacker-controlled Google Sites link that would direct the victim to a fake Google Meet landing page, while other lures included OneDrive, Dropbox, and Skype.

New & Ongoing APT42 Phishing Activity

In other attacks, Charming Kitten has engaged in a diverse range of social engineering tactics in phishing campaigns that reflect its geopolitical stance. The activity is not likely to let up for the foreseeable future, according to Google TAG.

A recent campaign against Israeli diplomats, academics, NGOs, and political entities came from accounts hosted by a variety of email service providers, the researchers discovered. Though the messages did not contain malicious content, Google TAG surmised that they were "likely intended to elicit engagement from the recipients before APT42 attempted to compromise the targets," and Google suspended the Gmail accounts associated with the APT.

A separate June campaign targeted Israeli NGOs using a benign PDF email attachment impersonating a legitimate political entity; it contained a shortened URL that redirected to a phishing-kit landing page designed to harvest Google login credentials. Indeed, APT42 often uses phishing links embedded either directly in the body of the email or as a link in an otherwise innocuous PDF attachment, the researchers noted.

"In such cases, APT42 would engage their target with a social engineering lure to set up a video meeting and then link to a landing page where the target was prompted to log in and sent to a phishing page," according to the post.

Another APT42 campaign template involves sending legitimate PDF attachments as part of a social engineering lure to build trust and encourage the target to engage on other platforms like Signal, Telegram, or WhatsApp, most likely as a way to deliver a phishing kit to harvest credentials, according to Google TAG.

Politically Motivated Attacks to Continue

All of this is business as usual for APT42/Charming Kitten, which is well known for politically motivated cyberattacks. Of late, it has been extremely active against Israel, the US, and other global targets since the start of Israel's military campaign in Gaza, launched in retaliation for the Hamas Oct. 7 attack on Israel.

Iran overall has a long history of responding to tensions in the region with cyberattacks against Israel and the US. In the past six months alone, the US and Israel accounted for roughly 60% of APT42's known geographic targeting, according to Google TAG. More activity is expected after Israel's recent assassination of a top Hamas leader on Iranian soil, as experts believe cyberspace will remain a primary battleground for Iran-backed threat actors.

"APT42 is a sophisticated, persistent threat actor, and they show no signs of stopping their attempts to target users and deploy novel tactics," according to Google TAG. "As hostilities between Iran and Israel intensify, we can expect to see increased campaigns there from APT42."

The researchers also included a list of indicators of compromise (IoCs) in the post, covering domains and IP addresses known to be used by APT42. Organizations that may be targeted should also remain vigilant for the various social engineering and phishing tactics used by the group in its recently discovered threat campaigns.