
Notion exits Russia and will terminate accounts in September




Notion has announced it will exit the Russian market and is terminating all workspaces and accounts identified as linked to users in the country.

In an announcement on its website, Notion says the decision was made due to U.S. government-imposed restrictions on software service providers, which make it practically impossible to continue operating in Russia.

“The U.S. government has introduced restrictions prohibiting access to certain software products and services to any person in Russia,” reads the announcement.

“As a result of these restrictions, on September 9, 2024, Notion will no longer provide users located in Russia access to the platform.”

Notion is a popular productivity tool that combines note-taking, document creation, task management, databases, and various real-time collaboration features under a single platform. As of 2024, Notion is estimated to be used by over 30 million people worldwide.

Notion will now delete all Russia-based accounts on its platform and wipe workspaces in the country, giving users until September 8 to export their data. After that date, there will be no way for impacted users to access or retrieve their data on Notion.

Instructions on how to export content or entire workspaces from Notion can be found on this official support page, covering PDF, HTML, and CSV exports.

Some export restrictions may apply to Enterprise workspace members, as admins may have disabled this option in the settings to keep workspace data internal. If that is the case, users will not see an export option at all.

Also, in the case of very large files, Notion may send an email with a download link instead of serving the download directly from the platform.

The company clarified that workspace owners not based in Russia who collaborate with users from the country will not have their workspaces deleted. However, their Russia-based collaborators will not be allowed to access them after the deadline.

Subscription plans for impacted users will be unilaterally terminated on September 9, 2024, and owners will not incur any further charges from Notion.

Impacted users have also received email notices of the upcoming closure of their accounts, as Notion informed them directly two weeks in advance.

CISA Flags Critical Apache OFBiz Flaw Amid Active Exploitation Reports


Aug 28, 2024 | Ravie Lakshmanan | Software Security / Vulnerability


The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a critical security flaw affecting the Apache OFBiz open-source enterprise resource planning (ERP) system to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation in the wild.

The vulnerability, tracked as CVE-2024-38856, carries a CVSS score of 9.8, indicating critical severity.


“Apache OFBiz contains an incorrect authorization vulnerability that could allow remote code execution via a Groovy payload in the context of the OFBiz user process by an unauthenticated attacker,” CISA said.

Details of the vulnerability first came to light earlier this month after SonicWall described it as a patch bypass for another flaw, CVE-2024-36104, which enables remote code execution via specially crafted requests.

“A flaw in the override view functionality exposes critical endpoints to unauthenticated threat actors using a crafted request, paving the way for remote code execution,” SonicWall researcher Hasib Vhora said.

The development comes nearly three weeks after CISA added a third flaw impacting Apache OFBiz (CVE-2024-32113) to the KEV catalog, following reports that it had been abused to deploy the Mirai botnet.

While there are currently no public reports about how CVE-2024-38856 is being weaponized in the wild, proof-of-concept (PoC) exploits have been made publicly available.


The active exploitation of two Apache OFBiz flaws signals that attackers are showing significant interest in, and a willingness to pounce on, publicly disclosed vulnerabilities to opportunistically breach susceptible instances for nefarious ends.

Organizations are recommended to update to version 18.12.15 to mitigate the threat. Federal Civilian Executive Branch (FCEB) agencies have been mandated to apply the necessary updates by September 17, 2024.




Reining in Cloud Wastage By Stopping Over-Provisioning




(TAW4/Shutterstock)

Companies will waste $135 billion worth of cloud resources in 2024, or about 20% of the worldwide public cloud spending of $675 billion, according to Gartner. That's the bad news. The good news is that an array of FinOps service providers are lining up to help customers reclaim those cloud dollars.

In the early days of the public cloud, cost savings and agility were the big drivers. The idea was that companies would move their data and applications to the cloud to get out of the business of buying and managing infrastructure, which reduced their overall IT spending and freed them to focus on their core business.

Over the years, the public cloud mantra has morphed a bit, and today's cloud pillars are speed and convenience. The public cloud is great for startups and fast-growing companies that don't want to commit large sums to compute and storage infrastructure that they may or may not need. They are willing to pay a premium for the ability to rapidly scale their resources on demand.

However, today's public cloud isn't as great for companies that are larger or growing slowly. Companies that are paying on-demand prices but aren't taking advantage of on-demand scalability are overpaying, while companies that miscalculated how much compute and storage they would need are finding that the public cloud isn't nearly as elastic as they were led to believe. That's particularly true when it comes to storage.

Customers' cloud bills have increased dramatically in recent years. A recent Flexera report found a 21% increase year over year in organizations that are spending $1 million or more per month on cloud.

In its recent Budget Planning Guide for 2025, Forrester advised technology executives to “ruthlessly automate manual tasks” to reduce cloud sprawl.

Cost savings in the cloud is the top priority this year, according to Flexera's 2024 State of the Cloud Report

“Stop independent cloud spending by establishing a clear FinOps practice,” the analyst group wrote. “This includes setting up a centralized team responsible for managing cloud costs and optimizing resource utilization. Next, implement cloud cost management tools to identify waste, and make data-driven decisions to optimize costs. These tools can help automate the process of identifying unused resources, right-sizing instances, and enforcing cost governance policies.”

One of the FinOps vendors helping customers stay on top of their cloud storage and compute spending is Zesty. Omer Hamerman, Principal Engineer at Zesty, says Zesty Disk lets customers shrink and grow their AWS EBS storage on the fly based on usage, cutting up to 60% of their storage costs.

“The first question I ask customers [is] how do you decide how much storage you need to provision for a typical application?” he tells Datanami. “And nobody knows the answer.”

If a customer thinks they'll need 10GB of block storage for their machine learning application, for example, they'll provision a single 10GB block from AWS and use it. If they need more storage, they can simply provision more capacity as they go. But if it turns out they need less storage, that's where things get interesting.

Zesty Disk breaks Amazon EBS up into smaller, more easily managed blocks (Image courtesy Zesty)

Zesty's trick is to break that initial 10GB order up into multiple blocks, such as one 5GB block, two 2GB blocks, and one 1GB block, and spread the customer's data across all of them. If Zesty's monitoring detects that the customer application only needs 3GB, it can turn off the 5GB and 1GB blocks and move the data under the covers to the two 2GB blocks, for 4GB total. Zesty Disk does this automatically through API connections to AWS, and there is no interruption to the flow of data or to the application, Hamerman says.
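
As a purely illustrative sketch of that bookkeeping, mirroring the example above rather than Zesty's actual implementation, the rebalancing step amounts to something like this:

# Illustrative only: mirrors the worked example above, not Zesty's code
provisioned_gb = [5, 2, 2, 1]   # blocks backing the original 10GB volume
usage_gb = 3                    # what monitoring says the application actually needs
retained_gb = [2, 2]            # detach the 5GB and 1GB blocks after migrating data off them
print(f"{sum(retained_gb)}GB remain attached for {usage_gb}GB of data in use")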

“We allow you to still keep your block storage with your performance and everything that comes with having block storage attached, but still gain elasticity,” he says. “We’re like a DevOps engineer you install on your server.”

This problem is fairly common among newer applications built atop modern databases, where customers never want to delete data, and with machine learning and AI applications, which can have unpredictable data storage needs, he says. Well-established applications, such as CRM or ERP systems, are less likely to have this problem.

“That’s just the nature of applications. They grow over time. Some of them very quickly, some of them quite slowly,” Hamerman says. “How do you forecast these things? People really can’t. It’s really hard to focus on forecasts.”

Zesty, which is a member of the FinOps Foundation, also offers Commitment Manager, which helps customers manage their EC2 and compute spending on AWS. The offering works by automatically shifting workloads to the optimal mix of AWS Reserved Instance and Savings Plan commitments, which come with one- and three-year terms.

If Zesty detects the customer isn't making good use of RIs, for instance, it will shift the workload to a different plan, or to Convertible Reserved Instances (CRIs). “Everything that you can reserve for either a year or three years, we can help you take care of the reservations,” Hamerman says. The company is working to roll out a solution for AWS Spot instances, he says.

Another FinOps vendor to keep on your radar is nOps. The San Francisco company says it provides “full visibility” into customers' AWS costs, including reserved instances and the AWS Spot market.

“With awareness of all your AWS commitments and the AWS Spot market, nOps automatically fulfills your commitments and provisions additional compute to Spot,” the company says.

nOps, which is also a FinOps Foundation member, says it has saved its customers more than $1.5 billion in AWS spending and grown its customer base by 450% over the past 18 months.

The rise of generative AI has brought cloud compute optimization to a tipping point, says nOps CEO and Founder JT Giri.

“While various point solutions address specific cloud optimization needs, engineering teams do not have the time to manually manage and optimize the ever-growing complexity of cloud resources,” he says in a press release. “Instead, they need one solution that provides full visibility into cloud spend, automated optimization, and single-click cloud waste cleanup so they can focus on innovation to drive company growth. This is why we founded nOps and why we have been so successful.”

Related Items:

Flexera 2024 State of the Cloud Reveals Spending as the Top Challenge of Cloud Computing

Waste Not, Want Not

The Cloud Is Great for Data, Except for Those Super High Costs

How to Build and Train a Transformer Model from Scratch with Hugging Face Transformers


Image by Editor | Midjourney

 

The Hugging Face Transformers library provides tools for easily loading and using pre-trained Language Models (LMs) based on the transformer architecture. But did you know this library also allows you to implement and train your own transformer model from scratch? This tutorial illustrates how through a step-by-step sentiment classification example.

Important note: training a transformer model from scratch is computationally expensive, with a training loop typically requiring hours, to say the least. To run the code in this tutorial, it is highly recommended to have access to high-performance computing resources, whether on-premises or via a cloud provider.

 

Step-by-Step Process

 

Initial Setup and Dataset Loading

Depending on the type of Python development environment you are working in, you may need to install Hugging Face's transformers and datasets libraries, as well as the accelerate library to train your transformer model in a distributed computing setting.

!pip install transformers datasets
!pip install accelerate -U

 

Once the necessary libraries are installed, let's load the emotions dataset for sentiment classification of Twitter messages from the Hugging Face hub:

from datasets import load_dataset
dataset = load_dataset('jeffnyman/emotions')
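
To get a feel for the data before tokenizing, you can optionally print the splits and one raw record (the text and label fields are the ones used throughout the rest of this tutorial):

# Optional sanity check: inspect the splits and one raw example
print(dataset)
print(dataset['train'][0])   # expected fields: 'text' and 'label'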

 

Using the data to train a transformer-based LM requires tokenizing the text. The following code initializes a BERT tokenizer (BERT is a family of transformer models suitable for text classification tasks), defines a function to tokenize text data with padding and truncation, and applies it to the dataset in batches.

from transformers import AutoTokenizer

def tokenize_function(examples):
  return tokenizer(examples['text'], padding="max_length", truncation=True)

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
tokenized_datasets = dataset.map(tokenize_function, batched=True)
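
As another optional check, you can peek at one tokenized example to confirm the fields the tokenizer added:

# Optional: confirm the tokenizer added 'input_ids', 'token_type_ids', and 'attention_mask'
example = tokenized_datasets['train'][0]
print(example.keys())
print(len(example['input_ids']))   # padded to the tokenizer's maximum length (512 for BERT)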

 

Before moving on to initializing the transformer model, let's verify the unique labels in the dataset. Having a verified set of existing class labels helps prevent GPU-related errors during training by checking label consistency and correctness. We will use this label set later on.

unique_labels = set(tokenized_datasets['train']['label'])
print(f"Distinctive labels within the coaching set: {unique_labels}")

def check_labels(dataset):
  for label in dataset['train']['label']:
    if label not in unique_labels:
      print(f"Discovered invalid label: {label}")

check_labels(tokenized_datasets)

 

Next, we create and define a model configuration, and then instantiate the transformer model with this configuration. This is where we specify hyperparameters of the transformer architecture, such as embedding size, number of attention heads, and the previously computed set of unique labels, which is key to building the final output layer for sentiment classification.

from transformers import BertConfig
from transformers import BertForSequenceClassification

config = BertConfig(
  vocab_size=tokenizer.vocab_size,
  hidden_size=512,
  num_hidden_layers=6,
  num_attention_heads=8,
  intermediate_size=2048,
  max_position_embeddings=512,
  num_labels=len(unique_labels)
)

model = BertForSequenceClassification(config)
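
Because the model here is built from a configuration rather than from pre-trained weights, all of its parameters start out randomly initialized. As a quick optional check before training, you can print how large the resulting model is:

# Optional: total number of parameters in the freshly initialized model
print(f"Model parameters: {model.num_parameters():,}")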

 

We are almost ready to train our transformer model. It just remains to instantiate two necessary instances: TrainingArguments, with specifications about the training loop such as the number of epochs, and Trainer, which glues together the model instance, the training arguments, and the data used for training and validation.

from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
  output_dir="./results",
  evaluation_strategy="epoch",
  learning_rate=2e-5,
  per_device_train_batch_size=16,
  per_device_eval_batch_size=16,
  num_train_epochs=3,
  weight_decay=0.01,
)

trainer = Trainer(
  model=model,
  args=training_args,
  train_dataset=tokenized_datasets["train"],
  eval_dataset=tokenized_datasets["test"],
)
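
The arguments above only track the evaluation loss at each epoch. If you also want a metric such as accuracy reported, one optional tweak (not part of the minimal setup shown here) is to define a compute_metrics function and pass it to the Trainer through its compute_metrics argument:

import numpy as np

# Optional: report accuracy at each evaluation (pass compute_metrics=compute_metrics to Trainer)
def compute_metrics(eval_pred):
  logits, labels = eval_pred
  predictions = np.argmax(logits, axis=-1)
  return {"accuracy": (predictions == labels).mean()}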

 

Time to train the model, sit back, and relax. Remember that this instruction will take a significant amount of time to complete:
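
# Launch the training loop (this is the long-running step)
trainer.train()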

 

Once trained, your transformer model should be ready for passing in input examples for sentiment prediction.
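
For example, a minimal inference sketch (assuming the trained model and tokenizer from the steps above, and an illustrative input sentence) could look like this:

import torch

# Minimal inference sketch: tokenize one input and pick the highest-scoring class
text = "I didn't expect it, but this day turned out wonderful!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

model.eval()
with torch.no_grad():
  logits = model(**inputs).logits
print(f"Predicted label id: {int(logits.argmax(dim=-1))}")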

 

Troubleshooting

If problems appear or persist when executing the training loop or during its setup, you may need to check the configuration of the GPU/CPU resources being used. For instance, if using a CUDA GPU, adding these instructions at the beginning of your code can help prevent errors in the training loop:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

 

These lines make CUDA operations run synchronously, providing more immediate and accurate error messages for debugging.

Alternatively, if you are trying this code in a Google Colab instance, chances are this error message shows up during execution, even if you have previously installed the accelerate library:

ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.21.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`

 

To address this issue, try restarting your session from the ‘Runtime’ menu: the accelerate library typically requires resetting the runtime environment after being installed.

 

Summary and Wrap-Up

 

This tutorial showcased the key steps to build your own transformer-based LM from scratch using Hugging Face libraries. The main steps and components involved can be summarized as:

  • Loading the dataset and tokenizing the text data.
  • Initializing your model by using a model configuration instance for the type of model (language task) it is intended for, e.g. BertConfig.
  • Setting up Trainer and TrainingArguments instances and running the training loop.

As a next learning step, we encourage you to explore how to make predictions and run inference with your newly trained model.
 
 

Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.

If it wasn't for bad advice, Apple wouldn't get any advice at all
