
Vietnamese Human Rights Group Targeted in Multi-Year Cyberattack by APT32


Aug 29, 2024 | Ravie Lakshmanan | Cyber Espionage / Malware


A non-profit supporting Vietnamese human rights has been the target of a multi-year campaign designed to deliver a variety of malware on compromised hosts.

Cybersecurity company Huntress attributed the activity to a threat cluster known as APT32, a Vietnamese-aligned hacking crew that is also known as APT-C-00, Canvas Cyclone (formerly Bismuth), Cobalt Kitty, and OceanLotus. The intrusion is believed to have been ongoing for at least four years.

“This intrusion has a number of overlaps with known techniques used by the threat actor APT32/OceanLotus, and a known target demographic which aligns with APT32/OceanLotus targets,” security researchers Jai Minton and Craig Sweeney said.

OceanLotus, active since at least 2012, has a history of targeting company and government networks in East Asian countries, particularly Vietnam, the Philippines, Laos, and Cambodia, with the end goal of cyber espionage and intellectual property theft.


Attack chains typically employ spear-phishing lures as the initial penetration vector to deliver backdoors capable of running arbitrary shellcode and collecting sensitive information. That said, the group has also been observed orchestrating watering hole campaigns as early as 2018 to infect site visitors with a reconnaissance payload or harvest their credentials.

The latest set of attacks pieced together by Huntress spanned four hosts, each of which was compromised to add various scheduled tasks and Windows Registry keys that are responsible for launching Cobalt Strike Beacons, a backdoor that enables the theft of Google Chrome cookies for all user profiles on the system, and loaders responsible for launching embedded DLL payloads.

The development comes as South Korean users are the target of an ongoing campaign that likely leverages spear-phishing and vulnerable Microsoft Exchange servers to deliver reverse shells, backdoors, and VNC malware to gain control of infected machines and steal credentials saved in web browsers.




Unify your data: AI and Analytics in an Open Lakehouse



Cloudera customers run some of the largest data lakes on earth. These lakes power mission-critical, large-scale data analytics and AI use cases, including enterprise data warehouses. Nearly two years ago, Cloudera announced the general availability of Apache Iceberg in the Cloudera platform, which helps users avoid vendor lock-in and implement an open lakehouse. With an open data lakehouse powered by Apache Iceberg, businesses can better tap into the power of analytics and AI.

One of the primary benefits of deploying AI and analytics within an open data lakehouse is the ability to centralize data from disparate sources into a single, cohesive repository. By leveraging the flexibility of a data lake and the structured querying capabilities of a data warehouse, an open data lakehouse accommodates raw and processed data of various types, formats, and velocities. This unified data environment eliminates the need to maintain separate data silos and facilitates seamless access to data for AI and analytics applications.

Here’s what implementing an open data lakehouse with Cloudera delivers:

  • Integration of Data Lake and Data Warehouse: An open data lakehouse brings together the best of both worlds by integrating the storage flexibility of a data lake with the query performance and structured querying capabilities of a data warehouse.
  • Openness: The term “open” in open data lakehouse signifies interoperability and compatibility with various data processing frameworks, analytics tools, and programming languages. This openness promotes collaboration and innovation by empowering data scientists, analysts, and developers to use their preferred tools and methodologies for exploring, analyzing, and deriving insights from data. Whether it is traditional SQL-based querying, advanced machine learning algorithms, or complex data processing workflows, an open data lakehouse provides a flexible and extensible platform for accommodating diverse analytics workloads.
  • Scalability and Flexibility: Like traditional data lakes, an open data lakehouse is designed to scale horizontally, accommodating large volumes of data from diverse sources. It provides flexibility in storing both raw and processed data, allowing organizations to adapt to changing data requirements and analytical needs. As data volumes grow and analytical needs evolve, organizations can seamlessly scale their infrastructure horizontally to accommodate increased data ingestion, processing, and storage demands. This scalability ensures the data lakehouse remains responsive and performant, even as data complexity and usage patterns change over time.
  • Unified Data Platform: An open data lakehouse serves as a unified platform for data storage, processing, and analytics, eliminating the need to maintain separate data silos and ETL (Extract, Transform, Load) processes. Deploying AI and analytics within an open data lakehouse promotes data democratization and self-service analytics, empowering users across the organization to access, analyze, and derive insights from data autonomously. By providing a unified and accessible data platform, organizations can break down data silos, democratize access to data and analytics tools, and foster a culture of data-driven decision-making at all levels. This democratization of data and analytics enhances organizational agility and competitiveness and promotes a more collaborative and data-literate workforce.
  • Support for Modern Analytics Workloads: With support for both SQL-based querying and advanced analytics frameworks (e.g., machine learning, graph processing), an open data lakehouse caters to a wide range of analytics workloads, from ad-hoc querying to complex data processing and predictive modeling.

Open data lakehouse architecture represents a modern approach to data management and analytics, enabling organizations to harness the full potential of their data assets while embracing openness, scalability, and interoperability.

Learn more about the Cloudera Open Data Lakehouse here.

Security Engineering with Ben Huber


Ben Huber is a security engineer who has worked at companies including Crypto.com and Blackpanda. He joins the podcast to talk about his career, penetration or “pen” testing, attack vectors, security tools, and much more.

Gregor Vand is a security-focused technologist, and is the founder and CTO of Mailpass. Previously, Gregor was a CTO across cybersecurity, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk.

 

Building event-driven applications just got significantly easier with Hookdeck, your go-to event gateway for managing webhooks and asynchronous messaging between first- and third-party APIs and services.

With Hookdeck you can receive, transform, and filter webhooks from third-party services and throttle the delivery to your own infrastructure.

You can securely send webhooks, triggered from your own platform, to your customers’ endpoints.

Ingest events at scale from IoT devices or SDKs, and use Hookdeck as your asynchronous API infrastructure.

No matter your use case, Hookdeck is built to support your full software development life cycle. Use the Hookdeck CLI to receive events on your localhost. Automate dev, staging, and prod environment creation using the Hookdeck API or Terraform Provider. And gain full visibility into all events using the Hookdeck logging and metrics in the Hookdeck dashboard.

Start building reliable and scalable event-driven applications today. Visit hookdeck.com/sedaily and sign up to get a three-month trial of the Hookdeck Team plan for free.

This episode of Software Engineering Daily is brought to you by Vantage.

Do you know what your cloud bill will be for this month?

For many companies, cloud costs are the number two line item in their budget and the number one fastest growing category of spend.

Vantage helps you get a handle on your cloud bills, with self-serve reports and dashboards built for engineers, finance, and operations teams.

With Vantage, you can put costs in the hands of the service owners and managers who generate them, giving them budgets, alerts, anomaly detection, and granular visibility into every dollar.

With native billing integrations with dozens of cloud services, including AWS, Azure, GCP, Datadog, Snowflake, and Kubernetes, Vantage is the one FinOps platform to monitor and reduce all your cloud bills.

To get started, head to vantage.sh, connect your accounts, and get a free savings estimate as part of a 14-day free trial.

This episode of Software Engineering Daily is brought to you by Starburst.

Struggling to deliver analytics at the speed your users want without your costs snowballing?

For data engineers who battle to build and scale high-quality data pipelines, Starburst’s data lakehouse platform helps you deliver exceptional user experiences at petabyte scale, without compromising on performance or cost.

Trusted by the teams at Comcast, Doordash, and MIT, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, on an open architecture that supports Apache Iceberg, Delta Lake, and Hudi, so you always retain ownership of your data.

Want to see Starburst in action? Get started today with a free trial at starburst.io/sed.

Stay in the H2 know – providing clean water with Cisco industrial IoT


Safeguarding our most precious resource

Water is one of the world’s most precious resources. Human beings drink about 4 liters a day alone, and water is critical for both agriculture and industry as well as for sustaining life.

Ontario Clean Water Agency (OCWA) aims to be a “trusted water partner for life.” OCWA’s priority is to deliver water and wastewater services for the health and sustainability of communities. The agency treats water and wastewater, and provides other technical services for 750 client facilities in Ontario, Canada, including municipalities, First Nations, and commercial, industrial, government, and institutional clients.

OCWA’s municipal clients range in size from populations as large as 1.5 million in the Region of Peel, to as small as 2,400 in Moose Factory, a community located in Northern Ontario. This broad scope of experience allows the agency to solve any issues that may arise, no matter the size or type of treatment process in the province. As a result, the agency has grown its municipal client base every year over the past 30 years.

When the Canadian government imposed stricter requirements for monitoring water quality after the Walkerton disaster in May 2000, OCWA built a custom remote monitoring system. Remarkably, the homegrown solution met the agency’s needs for more than 20 years. By the 2020s, the agency needed to modernize. OCWA had three goals. One was converting data from various types of equipment in different plants into a standard format. Another was eliminating time-consuming compliance reporting required whenever network disruptions caused gaps in data. And finally, OCWA was interested in expanding its service portfolio to add value for customers.

Going the distance

OCWA met its goals with an industrial IoT solution built on Cisco industrial routers. Applications running on the routers transform each plant’s data into a standard format for compliance and business reporting, making costly custom work a thing of the past – a report written for one facility will work for all facilities. “It’s powerful to standardize monitoring this way,” said Ciprian Panfilie, Director of Operational Systems at OCWA. “Instead of having a specialist for each facility, we built teams that provide specialized services to all facilities around the province, optimizing our approach.”

The solution also ensures OCWA is able to meet regulatory requirements and mitigate the risk of network outages that may create data gaps. If the link from a facility to OCWA’s offices is down, the router retains the data on its built-in storage, transferring it to the cloud once connectivity is restored. “We tried dozens of solutions, but only Cisco’s solution worked flawlessly,” Panfilie said.

As for expanding services, OCWA recently added an advanced energy management solution to its portfolio. The routers provide a standard network and cybersecurity template for energy management, enabling baselines, forecasts, and real-time energy management. Another area of service development is near-real-time asset performance monitoring and predictive maintenance using LoRaWAN sensors and gateways.

Looking ahead

Cisco’s industrial IoT solution is up and running in over 165 OCWA-monitored facilities to date, and counting. By 2030, the agency expects to deploy the industrial IoT solution in the majority of its remotely monitored facilities and is exploring other long-term opportunities. Pending further pilot testing, one idea is to run machine learning applications on the Cisco routers to predict and fix issues before they occur, such as out-of-bounds changes in wastewater effluent quality.


What they are and how to use them




Data pre-processing: What you do to the data before feeding it to the model.
A simple definition that, in practice, leaves open many questions. Where, exactly, should pre-processing stop, and the model begin? Are steps like normalization, or various numerical transforms, part of the model, or the pre-processing? What about data augmentation? In sum, the line between what is pre-processing and what is modeling has always, at the edges, felt somewhat fluid.

In this situation, the advent of keras pre-processing layers changes a long-familiar picture.

In concrete terms, with keras, two alternatives tended to prevail: one, to do things upfront, in R; and two, to construct a tfdatasets pipeline. The former applied whenever we needed the whole data to extract some summary information, for example, when normalizing to a mean of zero and a standard deviation of one. But often, this meant that we had to transform back and forth between normalized and un-normalized versions at several points in the workflow. The tfdatasets approach, on the other hand, was elegant; however, it could require one to write a lot of low-level tensorflow code.

Pre-processing layers, available as of keras version 2.6.1, remove the need for upfront R operations and integrate nicely with tfdatasets. But that is not all there is to them. In this post, we want to highlight four essential aspects:

  1. Pre-processing layers significantly reduce coding effort. You could code these operations yourself; but not having to do so saves time, favors modular code, and helps to avoid errors.
  2. Pre-processing layers – a subset of them, to be precise – can produce summary information before training proper, and make use of a saved state when called upon later.
  3. Pre-processing layers can speed up training.
  4. Pre-processing layers are, or can be made, part of the model, thus removing the need to implement independent pre-processing procedures in the deployment environment.

Following a short introduction, we’ll expand on each of those points. We conclude with two end-to-end examples (involving images and text, respectively) that nicely illustrate these four aspects.

Pre-processing layers in a nutshell

Like other keras layers, the ones we’re talking about here all start with layer_, and may be instantiated independently of model and data pipeline. Here, we create a layer that will randomly rotate images while training, by up to 45 degrees in both directions:

library(keras)
aug_layer <- layer_random_rotation(factor = 0.125)

Once we have such a layer, we can immediately test it on some dummy image.
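The snippet that created the dummy image did not survive extraction. A minimal sketch of our own that reproduces a comparable input, assuming the tensorflow package is attached and using a 5 x 5 diagonal of ones shaped as a single-image batch:

library(tensorflow)

# A 5 x 5 "image" with ones on the diagonal, shaped as (batch, height, width, channels)
img <- tf$reshape(tf$eye(5L), c(1L, 5L, 5L, 1L))

# Print the single channel; this is the tensor shown below
img[1, , , 1]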

tf.Tensor(
[[1. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0.]
 [0. 0. 1. 0. 0.]
 [0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 1.]], shape=(5, 5), dtype=float32)

“Testing the layer” now really means calling it like a function:
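The call itself was also lost in extraction; assuming the dummy image from above is stored in img, a minimal equivalent would be the following (passing training = TRUE makes explicit that we want the random transformation applied, as it would be during training):

# Apply the random rotation and print the single channel for comparison
aug_layer(img, training = TRUE)[1, , , 1]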

tf.Tensor(
[[0.         0.         0.         0.         0.        ]
 [0.44459596 0.32453176 0.05410459 0.         0.        ]
 [0.15844001 0.4371609  1.         0.4371609  0.15844001]
 [0.         0.         0.05410453 0.3245318  0.44459593]
 [0.         0.         0.         0.         0.        ]], shape=(5, 5), dtype=float32)

Once instantiated, a layer can be used in two ways. Firstly, as part of the input pipeline.

In pseudocode:

# pseudocode
library(tfdatasets)
 
train_ds <- ... # define dataset
preprocessing_layer <- ... # instantiate layer

train_ds <- train_ds %>%
  dataset_map(function(x, y) list(preprocessing_layer(x), y))

Secondly, in the way that seems most natural for a layer: as a layer inside the model. Schematically:

# pseudocode
input <- layer_input(shape = input_shape)

output <- input %>%
  preprocessing_layer() %>%
  rest_of_the_model()

model <- keras_model(input, output)

In fact, the latter seems so obvious that you might be wondering: Why even allow for a tfdatasets-integrated alternative? We’ll expand on that shortly, when talking about performance.

Stateful layers – which are special enough to deserve their own section – can be used in both ways as well, but they require an additional step. More on that below.

How pre-processing layers make life easier

Dedicated layers exist for a multitude of data-transformation tasks. We can subsume them under two broad categories, feature engineering and data augmentation.

Feature engineering

The need for feature engineering may arise with all types of data. With images, we don’t normally use that term for the “pedestrian” operations that are required for a model to process them: resizing, cropping, and such. Still, there are assumptions hidden in each of these operations, so we feel justified in our categorization. Be that as it may, layers in this group include layer_resizing(), layer_rescaling(), and layer_center_crop().

With text, the one functionality we couldn’t do without is vectorization. layer_text_vectorization() takes care of this for us. We’ll encounter this layer in the next section, as well as in the second full-code example.

Now, on to what is often seen as the domain of feature engineering: numerical and categorical (we might say: “spreadsheet”) data.

First, numerical data often need to be normalized for neural networks to perform well – to achieve this, use layer_normalization(). Or maybe there is a reason we’d like to put continuous values into discrete categories. That’d be a task for layer_discretization().

Second, categorical data come in various formats (strings, integers …), and there is always something that needs to be done in order to process them in a meaningful way. Often, you’ll want to embed them into a higher-dimensional space, using layer_embedding(). Now, embedding layers expect their inputs to be integers; to be precise: consecutive integers. Here, the layers to look for are layer_integer_lookup() and layer_string_lookup(): they will convert random integers (strings, respectively) to consecutive integer values. In a different scenario, there might be too many categories to allow for useful information extraction. In such cases, use layer_hashing() to bin the data. And finally, there is layer_category_encoding() to produce the classical one-hot or multi-hot representations.
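To make the numerical side of this concrete, here is a small sketch of our own (the toy values and bin boundaries are invented for illustration; only the layer names come from the text):

library(keras)

# Toy numerical feature, as a single-column matrix (hypothetical data)
x <- matrix(c(1, 2, 3, 100, 200, 300), ncol = 1)

# layer_normalization() learns mean and variance from the data via adapt()
norm_layer <- layer_normalization()
norm_layer %>% adapt(x)
norm_layer(x)    # values are now centered and scaled

# layer_discretization() maps continuous values into discrete bins
disc_layer <- layer_discretization(bin_boundaries = c(0, 10, 150))
disc_layer(x)    # integer bin indices, one per value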

Data augmentation

In the second category, we find layers that execute [configurable] random operations on images. To name just a few of them: layer_random_crop(), layer_random_translation(), layer_random_rotation() … These are convenient not just in that they implement the required low-level functionality; when integrated into a model, they are also workflow-aware: any random operations will be executed during training only.
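That training-only behavior can be checked directly; the following is our own minimal sketch (toy input, layer names as above), not code from the original post:

library(keras)

# A toy batch of one 1 x 4 "image" with a single channel (hypothetical data)
x <- array(c(1, 2, 3, 4), dim = c(1, 1, 4, 1))

flip <- layer_random_flip(mode = "horizontal")

flip(x, training = TRUE)   # augmentation applied: may be flipped along the width axis
flip(x, training = FALSE)  # inference behavior: the input passes through unchanged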

Now that we have an idea of what these layers do for us, let’s focus on the specific case of state-preserving layers.

Pre-processing layers that keep state

A layer that randomly perturbs images doesn’t need to know anything about the data. It just needs to follow a rule: with probability p, do x. A layer that is supposed to vectorize text, on the other hand, needs to have a lookup table, matching character strings to integers. The same goes for a layer that maps contingent integers to an ordered set. And in both cases, the lookup table needs to be built up front.

With stateful layers, this information build-up is triggered by calling adapt() on a freshly created layer instance. For example, here we instantiate and “condition” a layer that maps strings to consecutive integers:

colors <- c("cyan", "turquoise", "celeste")

layer <- layer_string_lookup()
layer %>% adapt(colors)

We can check what is in the lookup table:
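The call producing the output below was dropped during extraction; it was presumably get_vocabulary(), the keras accessor for lookup and vectorization layers:

# Inspect the vocabulary learned via adapt(); index 0 is reserved for "[UNK]"
layer %>% get_vocabulary()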

[1] "[UNK]"     "turquoise" "cyan"      "celeste"  

Then, calling the layer will encode the arguments:

layer(c("azure", "cyan"))
tf.Tensor([0 2], shape=(2,), dtype=int64)

layer_string_lookup() works on individual character strings, and consequently, is the transformation adequate for string-valued categorical features. To encode whole sentences (or paragraphs, or any chunks of text) you’d use layer_text_vectorization() instead. We’ll see how that works in our second end-to-end example.

Using pre-processing layers for performance

Above, we said that pre-processing layers could be used in two ways: as part of the model, or as part of the data input pipeline. If these are layers, why even allow for the second way?

The main reason is performance. GPUs are great at regular matrix operations, such as those involved in image manipulation and transformations of uniformly shaped numerical data. Therefore, if you have a GPU to train on, it is preferable to have image processing layers, or layers such as layer_normalization(), be part of the model (which is run completely on the GPU).

On the other hand, operations involving text, such as layer_text_vectorization(), are best executed on the CPU. The same holds if no GPU is available for training. In these cases, you would move the layers to the input pipeline, and strive to benefit from parallel – on-CPU – processing. For example:

# pseudocode

preprocessing_layer <- ... # instantiate layer

dataset <- dataset %>%
  dataset_map(~list(text_vectorizer(.x), .y),
              num_parallel_calls = tf$data$AUTOTUNE) %>%
  dataset_prefetch()
model %>% fit(dataset)

Accordingly, in the end-to-end examples below, you’ll see image data augmentation happening as part of the model, and text vectorization as part of the input pipeline.

Exporting a model, complete with pre-processing

Say that for training your model, you found that the tfdatasets way was the best. Now, you deploy it to a server that does not have R installed. It would seem that either you have to implement pre-processing in some other, available technology, or you have to rely on users sending already pre-processed data.

Fortunately, there is something else you can do. Create a new model specifically for inference, like so:

# pseudocode

input <- layer_input(shape = input_shape)

output <- input %>%
  preprocessing_layer() %>%
  training_model()

inference_model <- keras_model(input, output)

This approach makes use of the functional API to create a new model that prepends the pre-processing layer to the pre-processing-less, original model.

Having focused on a few things that are especially “good to know”, we now conclude with the promised examples.

Example 1: Image data augmentation

Our first example demonstrates image data augmentation. Three types of transformations are grouped together, making them stand out clearly in the overall model definition. This group of layers will be active during training only.

library(keras)
library(tfdatasets)

# Load the CIFAR-10 data that come with keras
c(c(x_train, y_train), ...) %<-% dataset_cifar10()
input_shape <- dim(x_train)[-1] # drop batch dim
classes <- 10

# Create a tf_dataset pipeline
train_dataset <- tensor_slices_dataset(list(x_train, y_train)) %>%
  dataset_batch(16)

# Use a (non-trained) ResNet architecture
resnet <- application_resnet50(weights = NULL,
                               input_shape = input_shape,
                               classes = classes)

# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation <-
  keras_model_sequential() %>%
  layer_random_flip("horizontal") %>%
  layer_random_rotation(0.1) %>%
  layer_random_zoom(0.1)

input <- layer_input(shape = input_shape)

# Define and run the model
output <- input %>%
  layer_rescaling(1 / 255) %>%   # rescale inputs
  data_augmentation() %>%
  resnet()

model <- keras_model(input, output) %>%
  compile(optimizer = "rmsprop", loss = "sparse_categorical_crossentropy") %>%
  fit(train_dataset, steps_per_epoch = 5)

Example 2: Text vectorization

In natural language processing, we often use embedding layers to present the “workhorse” (recurrent, convolutional, self-attentional, what have you) layers with the continuous, optimally-dimensioned input they need. Embedding layers expect tokens to be encoded as integers, and transforming text to integers is what layer_text_vectorization() does.

Our second example demonstrates the workflow: you have the layer learn the vocabulary upfront, then call it as part of the pre-processing pipeline. Once training has finished, we create an “all-inclusive” model for deployment.

library(tensorflow)
library(tfdatasets)
library(keras)

# Example data
text <- as_tensor(c(
  "From each according to his ability, to each according to his needs!",
  "Act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means.",
  "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."
))

# Create and adapt layer
text_vectorizer <- layer_text_vectorization(output_mode = "int")
text_vectorizer %>% adapt(text)

# Check
as.array(text_vectorizer("To each according to his needs"))

# Create a simple classification model
input <- layer_input(shape(NULL), dtype = "int64")

output <- input %>%
  layer_embedding(input_dim = text_vectorizer$vocabulary_size(),
                  output_dim = 16) %>%
  layer_gru(8) %>%
  layer_dense(1, activation = "sigmoid")

model <- keras_model(input, output)

# Create a labeled dataset (which includes unknown tokens)
train_dataset <- tensor_slices_dataset(list(
    c("From each according to his ability", "There is nothing higher than reason."),
    c(1L, 0L)
))

# Preprocess the string inputs
train_dataset <- train_dataset %>%
  dataset_batch(2) %>%
  dataset_map(~list(text_vectorizer(.x), .y),
              num_parallel_calls = tf$data$AUTOTUNE)

# Train the model
model %>%
  compile(optimizer = "adam", loss = "binary_crossentropy") %>%
  fit(train_dataset)

# export an inference model that accepts strings as input
input <- layer_input(shape = 1, dtype = "string")
output <- input %>%
  text_vectorizer() %>%
  model()

end_to_end_model <- keras_model(input, output)

# Test the inference model
test_data <- as_tensor(c(
  "To each according to his needs!",
  "Reason is, and ought only to be the slave of the passions."
))
test_output <- end_to_end_model(test_data)
as.array(test_output)

Wrapup

With this post, our goal was to call attention to keras’ new pre-processing layers, and to show how – and why – they are useful. Many more use cases can be found in the vignette.

Thanks for reading!

Photo by Henning Borgersen on Unsplash