
Halliburton cyberattack linked to RansomHub ransomware gang



The RansomHub ransomware gang is behind the recent cyberattack on oil and gas services giant Halliburton, which disrupted the company's IT systems and business operations.

The attack caused widespread disruption, and BleepingComputer was told that customers could not generate invoices or purchase orders because the required systems were down.

Halliburton disclosed the attack last Friday in an SEC filing, stating it suffered a cyberattack on August 21, 2024, by an unauthorized party.

"On August 21, 2024, Halliburton Company (the "Company") became aware that an unauthorized third party gained access to certain of its systems," read Halliburton's SEC filing.

"When the Company learned of the issue, the Company activated its cybersecurity response plan and launched an investigation internally with the support of external advisors to assess and remediate the unauthorized activity."

The company provides numerous services to oil and gas companies, including well construction, drilling, hydraulic fracturing (fracking), and IT software and services. Because of the company's wide range of services, there is a great deal of connectivity between it and its customers.

However, the company has not shared many details about the attack, with a customer in the oil and gas industry telling BleepingComputer that they have been left in the dark about whether the attack impacted them and how to protect themselves.

This has caused other customers to disconnect from Halliburton due to the lack of information being shared.

BleepingComputer has also been told that some companies are working with ONG-ISAC, an agency that acts as a central point of coordination and communication for physical and cybersecurity threats against the oil and gas industry, to receive technical information about the attack to determine if they were breached as well.

RansomHub ransomware behind the attack

For days, there have been rumors that Halliburton suffered a RansomHub ransomware attack, with users claiming this on Reddit and on the job layoff discussion site TheLayoff, where a partial RansomHub ransom note was published.

When BleepingComputer contacted Halliburton about these claims, the company said it was not making any further comments.

"We are not commenting beyond what was included in our filing. Any subsequent communications will be in the form of an 8-K," Halliburton told BleepingComputer.

However, in an August 26 email sent to suppliers and shared with BleepingComputer, Halliburton provided additional information, stating that the company took systems offline to protect them and is working with Mandiant to investigate the incident.

"We are reaching out to update you about a cybersecurity issue affecting Halliburton," reads the letter seen by BleepingComputer.

"As soon as we learned of the issue, we activated our cybersecurity response plan and took steps to address it, including (1) proactively taking certain systems offline to help protect them, (2) engaging the support of leading external advisors, including Mandiant, and (3) notifying law enforcement."

They also stated that their email systems continue to operate as they are hosted on Microsoft Azure infrastructure. A workaround is also available for transacting and issuing purchase orders.

This email includes a list of IOCs containing file names and IP addresses associated with the attack that customers can use to detect similar activity on their networks.

One of these IOCs is for a Windows executable named maintenance.exe, which BleepingComputer has confirmed to be a RansomHub ransomware encryptor.

After analyzing the sample, it appears to be a newer version than previously analyzed, as it contains a new "-cmd string" command-line argument, which will execute a command on the device before encrypting files.
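As an illustration of how file-name IOCs like these can be put to work, the following minimal Python sketch walks a directory tree and flags any matches. Only the maintenance.exe name comes from the article; the scan logic, the `IOC_FILENAMES` set, and the scan root are assumptions for demonstration, not part of Halliburton's guidance.

```python
import os

# File-name IOCs to hunt for. "maintenance.exe" is the encryptor named in
# the article; extend this set with the file names from the full IOC list.
IOC_FILENAMES = {"maintenance.exe"}

def scan_for_iocs(root_dir):
    """Walk root_dir and return paths whose file name matches a known IOC."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            # Compare case-insensitively, since Windows file names are
            # case-preserving but case-insensitive.
            if name.lower() in IOC_FILENAMES:
                hits.append(os.path.join(dirpath, name))
    return hits
```

In practice, defenders would run something like `scan_for_iocs(r"C:\")` across endpoints (or, better, feed the IOC list into an EDR platform); matching on file names alone is weak evidence, so hits should be confirmed with hashes from the IOC list.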

RansomHub encryptor used in the Halliburton attack
Source: BleepingComputer

RansomHub

The RansomHub ransomware operation launched in February 2024, claiming it was a data theft extortion group that sold stolen data to the highest bidder.

However, soon after, it was discovered that the operation also utilized ransomware encryptors in its double-extortion attacks, where the threat actors breached networks, stole data, and then encrypted files.

The encrypted files and the threat to leak stolen data were then used as leverage to scare companies into paying a ransom.

Symantec analyzed the ransomware encryptors and reported that they were based on the Knight ransomware encryptors, formerly known as Cyclops.

The Knight operation claimed they sold their source code in February 2024 and shut down just as RansomHub launched. This has led many researchers to believe that RansomHub is a rebrand of the Knight ransomware operation.

Today, the FBI released an advisory about RansomHub, sharing the threat actor's tactics and warning that they have breached at least 210 victims since February.

It is common for the FBI and CISA to publish coordinated advisories on threat actors soon after they conduct a highly impactful attack on critical infrastructure, such as Halliburton. However, it is not known whether the advisory and the attack are linked.

Since the start of the year, RansomHub has been responsible for numerous high-profile attacks, including those on American not-for-profit credit union Patelco, the Rite Aid drugstore chain, the Christie's auction house, and U.S. telecom provider Frontier Communications.

The ransomware operation's data leak site was also used to leak stolen data belonging to Change Healthcare following the shutdown of the BlackCat/ALPHV ransomware operation.

It is believed that after BlackCat shut down, some of its affiliates moved to RansomHub, allowing them to quickly escalate their attacks with experienced ransomware threat actors.

Mastering Multimodal AI | Databricks Blog



Introduction

The Twelve Labs Embed API allows users to explore the content of video libraries using natural language, as well as generate summaries of existing videos.

With Twelve Labs, contextual vector representations can be generated that capture the relationship between visual expressions, body language, spoken words, and overall context within videos. Databricks Mosaic AI Vector Search provides a robust, scalable infrastructure for indexing and querying high-dimensional vectors. This blog post will guide you through harnessing these complementary technologies to unlock new possibilities in video AI applications.

Why Twelve Labs + Databricks Mosaic AI?

Integrating the Twelve Labs Embed API with Databricks Mosaic AI Vector Search addresses key challenges in video AI, such as efficient processing of large-scale video datasets and accurate multimodal content representation. This integration reduces development time and resource needs for advanced video applications, enabling complex queries across vast video libraries and enhancing overall workflow efficiency.


The unified approach to handling multimodal data is particularly noteworthy. Instead of juggling separate models for text, image, and audio analysis, users can now work with a single, coherent representation that captures the essence of video content in its entirety. This not only simplifies deployment architecture but also enables more nuanced and context-aware applications, from sophisticated content recommendation systems to advanced video search engines and automated content moderation tools.

Moreover, this integration extends the capabilities of the Databricks ecosystem, allowing seamless incorporation of video understanding into existing data pipelines and machine learning workflows. Whether companies are developing real-time video analytics, building large-scale content classification systems, or exploring novel applications in generative AI, this combined solution provides a powerful foundation. It pushes the boundaries of what is possible in video AI, opening up new avenues for innovation and problem-solving in industries ranging from media and entertainment to security and healthcare.

Understanding the Twelve Labs Embed API

Twelve Labs' Embed API represents a significant advancement in multimodal embedding technology, specifically designed for video content. Unlike traditional approaches that rely on frame-by-frame analysis or separate models for different modalities, this API generates contextual vector representations that capture the intricate interplay of visual expressions, body language, spoken words, and overall context within videos.

The Embed API offers several key features that make it particularly powerful for AI engineers working with video data. First, it provides flexibility for any modality present in videos, eliminating the need for separate text-only or image-only models. Second, it employs a video-native approach that accounts for motion, action, and temporal information, ensuring a more accurate and temporally coherent interpretation of video content. Finally, it creates a unified vector space that integrates embeddings from all modalities, facilitating a more holistic understanding of the video content.

For AI engineers, the Embed API opens up new possibilities in video understanding tasks. It enables more sophisticated content analysis, improved semantic search capabilities, and enhanced recommendation systems. The API's ability to capture subtle cues and interactions between different modalities over time makes it particularly valuable for applications requiring a nuanced understanding of video content, such as emotion recognition, context-aware content moderation, and advanced video retrieval systems.

Prerequisites

Before integrating the Twelve Labs Embed API with Databricks Mosaic AI Vector Search, make sure you have the following prerequisites:

  1. A Databricks account with access to create and manage workspaces. (Sign up for a free trial at https://www.databricks.com/try-databricks)
  2. Familiarity with Python programming and basic data science concepts.
  3. A Twelve Labs API key. (Sign up at https://api.twelvelabs.io)
  4. Basic understanding of vector embeddings and similarity search concepts.
  5. (Optional) An AWS account if using Databricks on AWS. This is not required if using Databricks on Azure or Google Cloud.

Step 1: Set Up the Environment

To begin, set up the Databricks environment and install the required libraries:

1. Create a new Databricks workspace

2. Create a new cluster or connect to an existing cluster

Almost any ML cluster will work for this application. The settings below are provided for those seeking optimal price performance.

  • In your Compute tab, click "Create compute"
  • Select "Single node" and Runtime: 14.3 LTS ML non-GPU
    • The cluster policy and access mode can be left as the default
  • Select "r6i.xlarge" as the Node type
    • This will maximize memory utilization while only costing $0.252/hr on AWS and 1.02 DBU/hr on Databricks before any discounting
    • It was also one of the fastest options we tested
  • All other options can be left as the default
  • Click "Create compute" at the bottom and return to your workspace

3. Create a new notebook in your Databricks workspace

  • In your workspace, click "Create" and select "Notebook"
  • Name your notebook (e.g., "TwelveLabs_MosaicAI_VectorSearch_Integration")
  • Choose Python as the default language

4. Install the Twelve Labs and Mosaic AI Vector Search SDKs

In the first cell of your notebook, run the following Python command:

%pip install twelvelabs databricks-vectorsearch

5. Set up Twelve Labs authentication

In the next cell, add the following Python code:

from twelvelabs import TwelveLabs
import os

# Retrieve the API key from Databricks secrets (recommended)
# You will need to set up the secret scope and add your API key first
TWELVE_LABS_API_KEY = dbutils.secrets.get(scope="your-scope", key="twelvelabs-api-key")

if TWELVE_LABS_API_KEY is None:
    raise ValueError("TWELVE_LABS_API_KEY is not set")

# Initialize the Twelve Labs client
twelvelabs_client = TwelveLabs(api_key=TWELVE_LABS_API_KEY)

Note: For enhanced security, it is recommended to use Databricks secrets to store your API key rather than hard-coding it or using environment variables.

Step 2: Generate Multimodal Embeddings

Use the provided generate_embedding function to generate multimodal embeddings with the Twelve Labs Embed API. This function is designed as a Pandas user-defined function (UDF) to work efficiently with Spark DataFrames in Databricks. It encapsulates the process of creating an embedding task, monitoring its progress, and retrieving the results.

Next, create a process_url function, which takes the video URL as string input and invokes a wrapper call to the Twelve Labs Embed API, returning an array.

Here is how to implement and use it.

1. Define the UDF:

from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, FloatType
from twelvelabs.models.embed import EmbeddingsTask
import pandas as pd

@pandas_udf(ArrayType(FloatType()))
def get_video_embeddings(urls: pd.Series) -> pd.Series:
    def generate_embedding(video_url):
        twelvelabs_client = TwelveLabs(api_key=TWELVE_LABS_API_KEY)
        task = twelvelabs_client.embed.task.create(
            engine_name="Marengo-retrieval-2.6",
            video_url=video_url
        )
        task.wait_for_done()
        task_result = twelvelabs_client.embed.task.retrieve(task.id)
        embeddings = []
        for v in task_result.video_embeddings:
            embeddings.append({
                'embedding': v.embedding.float,
                'start_offset_sec': v.start_offset_sec,
                'end_offset_sec': v.end_offset_sec,
                'embedding_scope': v.embedding_scope
            })
        return embeddings

    def process_url(url):
        embeddings = generate_embedding(url)
        return embeddings[0]['embedding'] if embeddings else None

    return urls.apply(process_url)

2. Create a sample DataFrame with video URLs:

video_urls = [
    "https://example.com/video1.mp4",
    "https://example.com/video2.mp4",
    "https://example.com/video3.mp4"
]
df = spark.createDataFrame([(url,) for url in video_urls], ["video_url"])

3. Apply the UDF to generate embeddings:

df_with_embeddings = df.withColumn("embedding", get_video_embeddings(df.video_url))

4. Display the results:

df_with_embeddings.show(truncate=False)

This process will generate multimodal embeddings for each video URL in the DataFrame, capturing the multimodal essence of the video content, including visual, audio, and textual information.

Keep in mind that generating embeddings can be computationally intensive and time-consuming for large video datasets. Consider implementing batching or distributed processing techniques for production-scale applications. Additionally, ensure that you have appropriate error handling and logging in place to manage potential API failures or network issues.
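One minimal way to add that error handling is a generic retry decorator with exponential backoff, sketched below. This is not part of the Twelve Labs or Databricks SDKs; the generate_embedding_safely wrapper and its call into the UDF helper defined above are hypothetical illustrations.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("embedding-pipeline")

def with_retries(max_attempts=3, base_delay=1.0):
    """Retry a flaky call with exponential backoff, logging each failure."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    logger.warning("Attempt %d/%d failed: %s",
                                   attempt, max_attempts, exc)
                    if attempt == max_attempts:
                        raise  # give up after the final attempt
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

# Hypothetical usage: wrap the embedding call so that transient API or
# network errors are retried before the whole Spark task fails.
@with_retries(max_attempts=3)
def generate_embedding_safely(video_url):
    return generate_embedding(video_url)  # helper from the UDF above
```

In a real pipeline you would likely narrow the caught exception types to the API client's transient errors rather than retrying on every Exception.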

Step 3: Create a Delta Table for Video Embeddings

Now, create a source Delta table to store video metadata and the embeddings generated by the Twelve Labs Embed API. This table will serve as the foundation for a Vector Search index in Databricks Mosaic AI Vector Search.

First, create a source DataFrame with video URLs and metadata:

from pyspark.sql import Row

# Create a list of sample video URLs and metadata
video_data = [
    Row(url='http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4', title='Elephant Dream'),
    Row(url='http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/Sintel.mp4', title='Sintel'),
    Row(url='http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4', title='Big Buck Bunny')
]

# Create a DataFrame from the list
source_df = spark.createDataFrame(video_data)
source_df.show()

Next, declare the schema for the Delta table using SQL:

%sql
CREATE TABLE IF NOT EXISTS videos_source_embeddings (
  id BIGINT GENERATED BY DEFAULT AS IDENTITY,
  url STRING,
  title STRING,
  embedding ARRAY<FLOAT>
) TBLPROPERTIES (delta.enableChangeDataFeed = true);

Note that Change Data Feed has been enabled on the table, which is crucial for creating and maintaining the Vector Search index.

Now, generate embeddings for your videos using the get_video_embeddings function defined earlier:

embeddings_df = source_df.withColumn("embedding", get_video_embeddings("url"))

This step may take some time, depending on the number and length of your videos.

With your embeddings generated, you can now write the data to your Delta table:

embeddings_df.write.mode("append").saveAsTable("videos_source_embeddings")

Finally, verify your data by displaying the DataFrame with embeddings:

display(embeddings_df)

This step creates a solid foundation for Vector Search capabilities. The Delta table will automatically stay in sync with the Vector Search index, ensuring that any updates or additions to the video dataset are reflected in your search results.

Some key points to remember:

  • The id column is auto-generated, providing a unique identifier for each video.
  • The embedding column stores the high-dimensional vector representation of each video, generated by the Twelve Labs Embed API.
  • Enabling Change Data Feed allows Databricks to efficiently track changes in the table, which is crucial for maintaining an up-to-date Vector Search index.

Step 4: Configure Mosaic AI Vector Search

In this step, set up Databricks Mosaic AI Vector Search to work with the video embeddings. This involves creating a Vector Search endpoint and a Delta Sync Index that will automatically stay in sync with the videos_source_embeddings Delta table.

First, create a Vector Search endpoint:

from databricks.vector_search.client import VectorSearchClient

# Initialize the Vector Search client and name the endpoint
mosaic_client = VectorSearchClient()
endpoint_name = "twelve_labs_video_endpoint"

# Delete the existing endpoint if it exists
try:
    mosaic_client.delete_endpoint(endpoint_name)
    print(f"Deleted existing endpoint: {endpoint_name}")
except Exception:
    pass  # Ignore non-existing endpoints

# Create the new endpoint
endpoint = mosaic_client.create_endpoint(
    name=endpoint_name,
    endpoint_type="STANDARD"
)

This code creates a new Vector Search endpoint or replaces an existing one with the same name. The endpoint will serve as the access point for your Vector Search operations.

Next, create a Delta Sync Index that will automatically stay in sync with your videos_source_embeddings Delta table:

# Define the source table name and index name
source_table_name = "twelvelabs.default.videos_source_embeddings"
index_name = "twelvelabs.default.video_embeddings_index"

index = mosaic_client.create_delta_sync_index(
    endpoint_name="twelve_labs_video_endpoint",
    source_table_name=source_table_name,
    index_name=index_name,
    primary_key="id",
    embedding_dimension=1024,
    embedding_vector_column="embedding",
    pipeline_type="TRIGGERED"
)

print(f"Created index: {index.name}")

This code creates a Delta Sync Index that links to your source Delta table. If you want the index to automatically update within seconds of changes made to the source table (ensuring your Vector Search results are always up to date), set pipeline_type="CONTINUOUS".

To verify that the index has been created and is syncing correctly, use the following code to trigger the sync:

# Check the status of the index; this may take some time
index_status = mosaic_client.get_index(
    endpoint_name="twelve_labs_video_endpoint",
    index_name="twelvelabs.default.video_embeddings_index"
)
print(f"Index status: {index_status}")

# Manually trigger the index sync
try:
    index.sync()
    print("Index sync triggered successfully.")
except Exception as e:
    print(f"Error triggering index sync: {str(e)}")

This code allows you to check the status of your index and manually trigger a sync if needed. In production, you may prefer to set the pipeline to sync automatically based on changes to the source Delta table.

Key points to remember:

  1. The Vector Search endpoint serves as the access point for Vector Search operations.
  2. The Delta Sync Index automatically stays in sync with the source Delta table, ensuring up-to-date search results.
  3. The embedding_dimension should match the dimension of the embeddings generated by Twelve Labs' Embed API (1024).
  4. The primary_key is set to "id", which should correspond to the unique identifier in the source table.
  5. The embedding_vector_column is set to "embedding", which should match the column name in the source table containing the video embeddings.

Step 5: Implement Similarity Search

The next step is to implement similarity search functionality using your configured Mosaic AI Vector Search index and the Twelve Labs Embed API. This will let you find videos similar to a given text query by leveraging the power of multimodal embeddings.

First, define a function to get the embedding for a text query using the Twelve Labs Embed API:

def get_text_embedding(text_query):
    # The Twelve Labs Embed API supports text-to-embedding
    text_embedding = twelvelabs_client.embed.create(
        engine_name="Marengo-retrieval-2.6",
        text=text_query,
        text_truncate="start"
    )

    return text_embedding.text_embedding.float

This function takes a text query and returns its embedding using the same model as the video embeddings, ensuring compatibility in the vector space.

Next, implement the similarity search function:

def similarity_search(query_text, num_results=5):
    # Initialize the Vector Search client and get the query embedding
    mosaic_client = VectorSearchClient()
    query_embedding = get_text_embedding(query_text)

    print(f"Query embedding generated: {len(query_embedding)} dimensions")

    # Perform the similarity search
    results = index.similarity_search(
        query_vector=query_embedding,
        num_results=num_results,
        columns=["id", "url", "title"]
    )
    return results

This function takes a text query and the number of results to return. It generates an embedding for the query and then uses the Mosaic AI Vector Search index to find similar videos.

To parse and display the search results, use the following helper function:

def parse_search_results(raw_results):
    try:
        data_array = raw_results['result']['data_array']
        columns = [col['name'] for col in raw_results['manifest']['columns']]
        return [dict(zip(columns, row)) for row in data_array]
    except KeyError:
        print("Unexpected result format:", raw_results)
        return []

Now, put it all together and perform a sample search:

# Example usage
query = "A dragon"
raw_results = similarity_search(query)

# Parse and print the search results
search_results = parse_search_results(raw_results)
if search_results:
    print(f"Top {len(search_results)} videos similar to the query: '{query}'")
    for i, result in enumerate(search_results, 1):
        print(f"{i}. Title: {result.get('title', 'N/A')}, URL: {result.get('url', 'N/A')}, Similarity Score: {result.get('score', 'N/A')}")
else:
    print("No valid search results returned.")

This code demonstrates how to use the similarity search function to find videos related to the query "A dragon". It then parses and displays the results in a user-friendly format.

Key points to remember:

  1. The get_text_embedding function uses the same Twelve Labs model as the video embeddings, ensuring compatibility.
  2. The similarity_search function combines text-to-embedding conversion with Vector Search to find similar videos.
  3. Error handling is crucial, as network issues or API changes could affect the search process.
  4. The parse_search_results function helps convert the raw API response into a more usable format.
  5. You can adjust the num_results parameter in the similarity_search function to control the number of results returned.

This implementation enables powerful semantic search capabilities across your video dataset. Users can now find relevant videos using natural language queries, leveraging the rich multimodal embeddings generated by the Twelve Labs Embed API.

Step 6: Build a Video Recommendation System

Now, it's time to create a basic video recommendation system using the multimodal embeddings generated by the Twelve Labs Embed API and Databricks Mosaic AI Vector Search. This system will suggest videos similar to a given video based on their embedding similarities.

First, implement a simple recommendation function:

def get_video_recommendations(video_id, num_recommendations=5):
    # Initialize the Vector Search client
    mosaic_client = VectorSearchClient()

    # First, retrieve the embedding for the given video_id
    source_df = spark.table("videos_source_embeddings")
    video_embedding = source_df.filter(f"id = {video_id}").select("embedding").first()

    if not video_embedding:
        print(f"No video found with id: {video_id}")
        return []

    # Perform similarity search using the video's embedding
    try:
        results = index.similarity_search(
            query_vector=video_embedding["embedding"],
            num_results=num_recommendations + 1,  # +1 to account for the input video
            columns=["id", "url", "title"]
        )

        # Parse the results
        recommendations = parse_search_results(results)

        # Remove the input video from recommendations if present
        recommendations = [r for r in recommendations if r.get('id') != video_id]

        return recommendations[:num_recommendations]
    except Exception as e:
        print(f"Error during recommendation: {e}")
        return []

# Helper function to display recommendations
def display_recommendations(recommendations):
    if recommendations:
        print(f"Top {len(recommendations)} recommended videos:")
        for i, video in enumerate(recommendations, 1):
            print(f"{i}. Title: {video.get('title', 'N/A')}")
            print(f"   URL: {video.get('url', 'N/A')}")
            print(f"   Similarity Score: {video.get('score', 'N/A')}")
            print()
    else:
        print("No recommendations found.")

# Example usage
video_id = 1  # Assuming this is a valid video ID in your dataset
recommendations = get_video_recommendations(video_id)
display_recommendations(recommendations)

This implementation does the following:

  1. The get_video_recommendations function takes a video ID and the number of recommendations to return.
  2. It retrieves the embedding for the given video from the source Delta table.
  3. Using this embedding, it performs a similarity search to find the most similar videos.
  4. The function removes the input video from the results (if present) to avoid recommending the same video.
  5. The display_recommendations helper function formats and prints the recommendations in a user-friendly manner.

To use this recommendation system:

  1. Ensure you have videos in your videos_source_embeddings table with valid embeddings.
  2. Call the get_video_recommendations function with a valid video ID from your dataset.
  3. The function will return and display a list of recommended videos based on similarity.

This basic recommendation system demonstrates how to leverage multimodal embeddings for content-based video recommendations. It can be extended and improved in several ways:

  • Incorporate user preferences and viewing history for personalized recommendations.
  • Implement diversity mechanisms to ensure varied recommendations.
  • Add filters based on video metadata (e.g., genre, length, upload date).
  • Implement caching mechanisms for frequently requested recommendations to improve performance.

Keep in mind that the quality of recommendations depends on the size and diversity of your video dataset, as well as the accuracy of the embeddings generated by the Twelve Labs Embed API. As you add more videos to your system, the recommendations should become more relevant and diverse.
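As a minimal sketch of the metadata-filter and quality-threshold ideas mentioned above, the following post-filters the parsed recommendation rows client-side. The exclude_titles and min_score parameters, and the assumption that each row carries a "score" field, are illustrative choices, not a Mosaic AI Vector Search API.

```python
def filter_recommendations(recommendations, exclude_titles=None, min_score=None):
    """Client-side post-filtering of parsed recommendation rows.

    `recommendations` is a list of dicts in the shape produced by
    parse_search_results, e.g.
    {"id": 2, "url": "...", "title": "Sintel", "score": 0.87}.
    """
    exclude = {t.lower() for t in (exclude_titles or [])}
    filtered = []
    for rec in recommendations:
        # Drop videos the user has already seen (or otherwise excluded)
        if rec.get("title", "").lower() in exclude:
            continue
        # Drop weak matches below a relevance threshold
        if min_score is not None and rec.get("score", 0) < min_score:
            continue
        filtered.append(rec)
    return filtered
```

For large filter sets, pushing the filtering into the Vector Search query itself (rather than post-filtering) would avoid fetching rows only to discard them.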

Take This Integration to the Next Level

Update and Sync the Index

As your video library grows and evolves, it is crucial to keep your Vector Search index up to date. Mosaic AI Vector Search offers seamless synchronization with your source Delta table, ensuring that recommendations and search results always reflect the latest data.

Key considerations for index updates and synchronization:

  1. Incremental updates: Leverage Delta Lake's change data feed to efficiently update only the modified or new records in your index.
  2. Scheduled syncs: Implement regular synchronization jobs using Databricks workflow orchestration tools to maintain index freshness.
  3. Real-time updates: For time-sensitive applications, consider implementing near real-time index updates using Databricks Mosaic AI streaming capabilities.
  4. Version management: Utilize Delta Lake's time travel feature to maintain multiple versions of your index, allowing for easy rollbacks if needed.
  5. Monitoring sync status: Implement logging and alerting mechanisms to track successful syncs and quickly identify any issues in the update process.
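Points 2 and 5 can be combined into a small wrapper around the index handle created in Step 4, sketched below. Only the .sync() call comes from the earlier code; the retry counts, delays, and logging are assumptions, and in practice this function would be scheduled as a Databricks job rather than run in a loop.

```python
import logging
import time

logger = logging.getLogger("vector-index-sync")

def sync_with_monitoring(index, max_attempts=3, retry_delay_sec=30):
    """Trigger a TRIGGERED-pipeline index sync and log the outcome.

    `index` is the Delta Sync Index handle from Step 4; only its .sync()
    method is relied on here. Returns True on success, False otherwise.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            index.sync()
            logger.info("Index sync succeeded on attempt %d", attempt)
            return True
        except Exception as exc:
            logger.warning("Sync attempt %d failed: %s", attempt, exc)
            if attempt < max_attempts:
                time.sleep(retry_delay_sec)
    logger.error("Index sync failed after %d attempts", max_attempts)
    return False
```

The boolean return value gives a scheduled job something to alert on: a False result can be surfaced as a job failure so the on-call alerting path (point 5) fires.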

By mastering these techniques, you will ensure that your Twelve Labs video embeddings are always current and readily available for advanced search and recommendation use cases.

Optimize Performance and Scaling

As your video analysis pipeline grows, it is important to continue optimizing performance and scaling your solution. Distributed computing capabilities from Databricks, combined with efficient embedding generation from Twelve Labs, provide a robust foundation for handling large-scale video processing tasks.

Consider these strategies for optimizing and scaling your solution:

  1. Distributed processing: Leverage Databricks Spark clusters to parallelize embedding generation and indexing tasks across multiple nodes.
  2. Caching strategies: Implement intelligent caching mechanisms for frequently accessed embeddings to reduce API calls and improve response times.
  3. Batch processing: For large video libraries, implement batch processing workflows to generate embeddings and update indexes during off-peak hours.
  4. Query optimization: Fine-tune Vector Search queries by adjusting parameters like num_results and implementing efficient filtering techniques.
  5. Index partitioning: For massive datasets, explore index partitioning strategies to improve query performance and enable more granular updates.
  6. Auto-scaling: Utilize Databricks auto-scaling features to dynamically adjust computational resources based on workload demands.
  7. Edge computing: For latency-sensitive applications, consider deploying lightweight versions of your models closer to the data source.

By implementing these optimization strategies, you will be well-equipped to deal with rising video libraries and growing person calls for whereas sustaining excessive efficiency and price effectivity.
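The caching idea in item 2 can be illustrated with the standard library's `functools.lru_cache`. This is a hedged sketch: `generate_embedding` is a stand-in stub for a Twelve Labs API call (the real call and its return type would differ), and the call counter exists only to show the cache absorbing repeat requests.

```python
from functools import lru_cache

api_calls = 0  # counts how often the (simulated) embedding API is actually hit

@lru_cache(maxsize=1024)
def generate_embedding(video_id: str) -> tuple:
    """Stub for an expensive embedding API call; results are memoized by video_id."""
    global api_calls
    api_calls += 1
    # A real implementation would call the Twelve Labs Embed API here.
    return tuple(float(ord(c)) for c in video_id)

generate_embedding("vid1")
generate_embedding("vid1")  # served from cache, no extra API call
generate_embedding("vid2")
print(api_calls)  # 2
```

Note that `lru_cache` requires hashable arguments and returns, which is why the stub returns a tuple rather than a list; a production cache for many workers would more likely live in an external store.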

Monitoring and Analytics

Implementing robust monitoring and analytics is essential to ensuring the ongoing success of your video understanding pipeline. Databricks provides powerful tools for monitoring system performance, user engagement, and business impact.

Key areas to focus on for monitoring and analytics:

  1. Performance metrics: Track key performance indicators such as query latency, embedding generation time, and index update duration.
  2. Usage analytics: Monitor user interactions, popular search queries, and frequently recommended videos to gain insights into user behavior.
  3. Quality assessment: Implement feedback loops to evaluate the relevance of search results and recommendations, using both automated metrics and user feedback.
  4. Resource utilization: Keep an eye on computational resource usage, API call volumes, and storage consumption to optimize costs and performance.
  5. Error tracking: Set up comprehensive error logging and alerting to quickly identify and resolve issues in the pipeline.
  6. A/B testing: Use experimentation capabilities from Databricks to test different embedding models, search algorithms, or recommendation strategies.
  7. Business impact analysis: Correlate video understanding capabilities with key business metrics like user engagement, content consumption, or conversion rates.
  8. Compliance monitoring: Ensure your video processing pipeline adheres to data privacy regulations and content moderation guidelines.

By implementing a comprehensive monitoring and analytics strategy, you will gain valuable insights into your video understanding pipeline's performance and impact. This data-driven approach will enable continuous improvement and help you demonstrate the value of integrating advanced video understanding capabilities from Twelve Labs with the Databricks Data Intelligence Platform.
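A minimal way to collect the query-latency metric from item 1 is a timing decorator that records durations per operation name. The sketch below uses only the standard library; the operation label and the `run_query` stub are illustrative, not part of any Databricks API.

```python
import time
from collections import defaultdict

latency_ms = defaultdict(list)  # operation name -> list of observed latencies (ms)

def timed(operation: str):
    """Decorator that records wall-clock latency for each call under a label."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                latency_ms[operation].append((time.perf_counter() - start) * 1000)
        return inner
    return wrap

@timed("vector_search_query")
def run_query(text: str) -> list:
    # Stand-in for a Vector Search similarity query.
    time.sleep(0.01)
    return ["vid1", "vid7"]

run_query("robot arm assembly")
run_query("drone footage")
print(len(latency_ms["vector_search_query"]))  # 2
```

In practice the recorded values would be shipped to your logging or metrics backend rather than held in memory, but the per-operation labeling shown here is the part that makes dashboards and alerting straightforward.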

Conclusion

Twelve Labs and Databricks Mosaic AI provide a powerful framework for advanced video understanding and analysis. This integration leverages multimodal embeddings and efficient Vector Search capabilities, enabling developers to build sophisticated video search, recommendation, and analysis systems.

This tutorial has walked through the technical steps of setting up the environment, generating embeddings, configuring Vector Search, and implementing basic search and recommendation functionalities. It also addresses key considerations for scaling, optimizing, and monitoring your solution.

In the evolving landscape of video content, the ability to extract precise insights from this medium is critical. This integration equips developers with the tools to handle complex video understanding tasks. We encourage you to explore the technical capabilities, experiment with advanced use cases, and contribute to the community of AI engineers advancing video understanding technology.

Additional Resources

To further explore and leverage this integration, consider the following resources:

  1. Twelve Labs Documentation
  2. Databricks Vector Search Documentation
  3. Databricks Community Forums
  4. Twelve Labs Discord Community

How to upload an image in Selenium WebDriver with JavaScript?


If you're working in a testing company on the automation side, then the file upload scenario is one of the most common problems. There are three ways to automate it:

  1. Using Selenium
  2. Using AutoIT
  3. Using the Robot class

Using Selenium: We can upload the file directly with Selenium if the HTML contains an input element with the attribute input[@type="file"]. If this element is not present in the HTML of the application, uploading with Selenium alone is not possible and we need to look at another alternative. If it is present, we can use the syntax below.

WebElement upload_file = driver.findElement(By.xpath("//input[@id='file_upload']"));
upload_file.sendKeys("C:/Users/abc/Desktop/upload.jpg");

Using AutoIT:
Open the AutoIT editor.
We need to write simple code in the AutoIT editor for the file upload operation (the name of the file to be uploaded will be mentioned in the code).
Now close the editor and right-click on the script file; you will see the compile script option.
Choose the compile script (x64) option for a 64-bit machine and go with compile script (x86) for a 32-bit machine.
As soon as the above step is completed, a .exe file is created, and this file can be referenced in our Selenium Eclipse code.
After compilation, a 'fileupload.exe' file gets created.
Now we can make use of this file in the Selenium WebDriver script.

WebElement browser = d.findElement(By.xpath("//input[@id='pimCsvImport_csvFile']"));   // Browse button
browser.click();
Runtime.getRuntime().exec("C:\\Users\\Chait\\Desktop\\autoit\\fileupload.exe");
Thread.sleep(3000);

WebElement upload = d.findElement(By.id("btnSave"));     // Upload button
upload.click();
System.out.println("File Uploaded Successfully");   // Confirmation message

When the program executes this line, it runs the fileupload.exe file, where the AutoIT code is executed as shown below:
ControlFocus("File Upload","","Edit1")
ControlSetText("File Upload","","Edit1","C:\Users\Chait\Desktop\autoit\data_file.csv")
ControlClick("File Upload","","Button1")

Robot Class
The Robot class works on the same principle as the AutoIT approach covered above: after clicking the browse button, it sends native keyboard events to type the file path into the OS file dialog and press Enter, so we can upload in the same way.

Iran's 'Fox Kitten' Group Aids Ransomware Attacks on US Targets


Iran's state-sponsored Fox Kitten threat group is actively abetting ransomware actors in attacks against organizations in the US and other countries, the FBI and the US Cybersecurity and Infrastructure Security Agency (CISA) warned this week.

The ongoing activity appears to be an effort by the threat actor to monetize its access to victim networks across multiple sectors, including finance, defense, healthcare, and education. It is separate from Fox Kitten's continued campaigns to steal sensitive technical data from organizations in the US, Israel, and Azerbaijan, the two government agencies said in a joint cybersecurity advisory this week.

Initial Access Broker

"A significant percentage of the group's US-focused cyber activity is in furtherance of obtaining and maintaining technical access to victim networks to enable future ransomware attacks," the FBI and CISA warned. "The actors offer full domain control privileges, as well as domain admin credentials, to numerous networks worldwide."

Fox Kitten is a relatively well-known threat actor that different security vendors variously track as Pioneer Kitten, UNC757, Parisite, Lemon Sandstorm, and Rubidium. CrowdStrike believes the group first began operations in 2017 and is likely a contractor for the Iranian government. The FBI and CISA believe the group is using an Iranian company, Danesh Novin Sahand, as cover for its cyber-espionage and other intelligence-gathering operations for Tehran.

Starting as far back as 2020, CrowdStrike observed the group attempting to sell access on underground forums to networks it had compromised. Fox Kitten actors were likely doing this without any approval from their Iranian-government sponsors. In many instances where Fox Kitten gained access to a victim network, they did so via exploits that targeted vulnerabilities in an organization's Internet-facing assets.

In 2021, Microsoft, which tracks Fox Kitten as Rubidium, identified the threat actor as one of six Iranian state-backed groups engaged in a range of cyber-enabled information theft, disruption, and destructive activities against US entities. Earlier this year, Securin listed Fox Kitten among a group of threat actors it described as most actively targeting VPN vulnerabilities and other remote access products from multiple vendors.

This week's CISA-FBI advisory identified Fox Kitten as providing the operators of ransomware strains such as ALPHV (or BlackCat), Ransomhouse, and NoEscape with initial access to compromised networks in return for a share of any ransom they might collect. In many instances, the Iranian threat group has worked with ransomware affiliates to encrypt victim networks and strategized with them on how to extort ransoms. The FBI said that Fox Kitten actors are engaging with ransomware actors without disclosing their location in Iran or their ties to the country.

Old Tactics, New Vulns

The group's initial access methods in recent attacks have been the same as always: exploiting vulnerabilities in VPN devices and other externally exposed services on enterprise networks. Most recently, Fox Kitten actors have targeted CVE-2024-24919, a now-patched zero-day bug in Check Point VPNs, to try to break into victim networks. The threat actor has also been observed going after CVE-2024-3400, a zero-day bug in Palo Alto Networks' PAN-OS; CVE-2019-19781 and CVE-2023-3519 in Citrix NetScaler; and CVE-2022-1388 in F5 BIG-IP devices, CISA and the FBI said.

Once Fox Kitten gains access to a network, its game plan, depending on the type of system it has compromised, is to capture login credentials, deploy Web shells, create rogue accounts, load malware, move laterally, and escalate privileges.

The fact that many organizations have not mitigated some of the vulnerabilities that Fox Kitten is targeting may be helping the threat actor in its attacks. An analysis that Tenable carried out, for instance, found that barely half of all assets affected by CVE-2019-19781 and CVE-2022-1388, two flaws that Fox Kitten is targeting, have been remediated. "It's not surprising that threat actors are leveraging these vulnerabilities for initial access given that there are tens of thousands of potentially vulnerable devices for each of the relevant technologies discoverable on Shodan.io," a search engine for finding Internet-connected devices, Tenable said in a blog post this week.



Soft robotic tool provides new 'eyes' in endovascular surgery



Scientists at the Max Planck Institute for Intelligent Systems in Stuttgart have developed a soft robotic tool that promises to one day transform minimally invasive endovascular surgery. The two-part magnetic tool can help visualize in real time the fine morphological details of partial vascular blockages such as stenoses, even in the narrowest and most curved vessels. It can also find its way through severe blockages such as chronic total occlusions. This tool could one day take the perception of endovascular medical devices a step further.

Intravascular imaging techniques and microcatheter procedures are becoming ever more advanced, revolutionizing the diagnosis and treatment of many diseases. However, current techniques often fail to accurately detect the fine features of vascular disease, such as those seen from inside occluded vessels, due to limitations such as uneven contrast agent diffusion and difficulty in safely accessing occluded vessels. Such limitations can delay rapid intervention and treatment of a patient.

Scientists at the Max Planck Institute for Intelligent Systems in Stuttgart have looked at this problem. They have leveraged the principles of soft robotics and microfabrication to develop a miniature soft magnetic tool that looks like a very slim eel. This tool may one day take the perception capabilities of endovascular devices one step further. In a paper and in a video, the team shows how the tool, which is propelled forward by the blood flow, travels through the narrowest artificial vessels, whether there is a sharp bend, curve, or obstacle.

When the tool reaches an occlusion such as a partially blocked artery, it performs a wave-like deformation under the external magnetic field (more on that below). The deformed soft body then comes gently into contact with the surrounding occluded structures. Finally, the real-time shapes of the device as it is retracted 'visualize' the morphological details inside the vessel, which facilitates drug release at the occlusion, as well as the sizing and placement of medical devices like stents and balloons for subsequent treatment.

When there is a severe occlusion with only tiny microchannels for the blood to flow through, the tool can use the force from the blood to simply slide through these narrow channels. The path it chooses indicates to the surgeon which access route to take for the subsequent medical operation.

"The methods of diagnosing and treating endovascular narrowing diseases such as vascular stenosis or chronic total occlusion are still very limited. It is difficult to accurately detect and cross these areas in the very complex network of vessels inside the body," says Yingbo Yan, a guest researcher in the Physical Intelligence Department at MPI-IS. He is the first author of the paper "Magnetically-assisted soft milli-tools for occluded lumen morphology detection," which was published in Science Advances on August 18, 2023. "We hope that our new soft robotic tool can one day help accurately detect and navigate through the many complex and narrow vessels inside a body, and perform treatments more effectively, reducing potential risks."

This tiny and soft tool has a 20 mm long magnetic Active Deformation Segment (ADS) and a 5 mm long Fluid Drag-driven Segment (FDS). The magnetization profile of the ADS is pre-programmed with a vibrating-sample magnetometer, which provides a uniform magnetic field. Under an external magnetic field, this part can deform into a sinusoidal shape, easily adapting to the surrounding environment and deforming into various shapes. Thus, continuously monitoring the shape changes of the ADS while retracting it can provide detailed morphological information about the partial occlusions inside a vessel.

The FDS was fabricated using a soft polymer. Small beams on its side are bent by the fluidic drag from the incoming flow. In this way, the entire tool is carried toward the area with the highest flow velocity. Therefore, reading the location of the FDS while advancing it can point to the location and direction of the microchannel inside severe occlusions.

"Detection of vascular diseases in the distal and hard-to-reach vascular regions such as the brain can be more challenging clinically, and our tool could work with Stentbot in the untethered mode," says Tianlu Wang, a postdoc in the Physical Intelligence Department at MPI-IS and another first author of the work. "Stentbot is a wireless robot used for locomotion and medical functions in the distal vasculature that we recently developed in our research group. We believe this new soft robotic tool can add new capabilities to wireless robots and contribute new solutions in these challenging regions."

"Our tool shows potential to greatly improve minimally invasive medicine. This technology can reach and detect areas that were previously difficult to access. We anticipate that our robot can help make the diagnosis and treatment of, for instance, stenosis or a CTO more precise and timelier," says Metin Sitti, Director of the Physical Intelligence Department at MPI-IS, Professor at Koç University and ETH Zurich.


The Max Planck Institute for Intelligent Systems' goal is to investigate and understand the organizing principles of intelligent systems and the underlying perception-action-learning loop.