
pgEdge Raises $10M to Advance Distributed PostgreSQL Platform



(monticello/Shutterstock)

pgEdge, a Virginia-based company specializing in distributed database solutions based on PostgreSQL, announced $10 million in new funding. The investment will be used to expand the company’s operations and further develop its advanced distributed Postgres technology.

The funding round was led by Rally Ventures, with additional contributions from existing investors Sands Capital Ventures and Grotech Ventures.

pgEdge claims that it is the only fully distributed PostgreSQL database that is both open source and completely based on standard PostgreSQL. This enables the company to offer a unique solution for enterprises requiring ultra-high availability and reduced latency across geographically dispersed regions.

According to the 2024 Stack Overflow survey, PostgreSQL remains the most popular database among developers. This is largely due to PostgreSQL’s reputation for reliability, scalability, and support for complex queries and operations.

pgEdge enhances PostgreSQL’s utility by incorporating multi-master (active-active) replication technology. Unlike traditional databases that remain centralized, pgEdge enables data to be distributed and optimized at the network edge.

Founded in 2022 by Phillip Merrick and Denis Lussier, pgEdge addresses the challenge of centralized databases in a world where other software components are increasingly distributed. pgEdge’s mission is to provide an open-source, Postgres-based distributed platform for modern applications that require swift response times, continuous availability, and global access.

The pgEdge founders have extensive experience in the startup ecosystem. Lussier is the founder of OpenSCG, a startup acquired by AWS in 2018. Following the acquisition, he joined AWS as a Postgres product manager for Aurora and RDS. Merrick has several startup credentials, including cofounder of EnterpriseDB, coinventor of VisualCV, CEO of Fugue, and CEO of SparkPost.

Along with the new funding, pgEdge also announced that Ben Fried has joined the company’s Board of Directors. Fried is a former CIO of Google and Managing Director at Morgan Stanley. He recently joined Rally Ventures in a full-time role as a Partner.

Rally Ventures is known for its investments in early-stage enterprise technology startups, with a particular focus on companies that are creating significant new markets or introducing transformative approaches to existing ones.

In his 14-year tenure at Google, Fried played a pivotal role in overseeing the development and implementation of technologies that drive Google’s global business operations. At Morgan Stanley, Fried designed the firm’s web technology platform and led software development infrastructure and business intelligence teams.

“pgEdge is at the forefront of innovation in distributed PostgreSQL, a technology that is increasingly critical for enterprises needing ultra-high availability and reduced latency across geographic regions,” said Fried.

“The company’s highly experienced leadership team, with a proven track record in scaling early-stage companies, coupled with the company’s industry-leading and open distributed Postgres technology, made this a compelling addition to our portfolio.”

Cofounder Merrick also shared his enthusiasm in the press release for the new funding and Fried’s onboarding. He is confident that the funding and Fried’s expertise will advance pgEdge’s mission. Merrick emphasized that pgEdge’s strong revenue cycle and the addition of leading SaaS customers reflect the company’s growing momentum in the market.

The latest funding round for pgEdge underscores the dynamic nature of the distributed database and edge computing market. As companies like pgEdge, Cockroach Labs, and MongoDB vie for prominence, the influx of new capital could be the boost pgEdge needs to further innovate and expand its database solutions.

Related Items

ClickHouse Acquires PeerDB to Advance Postgres CDC Integration Solutions

AWS Cancels Serverless Postgres Service That Scales to Zero

MariaDB Unveils Distributed SQL Vision at OpenWorks 2023, Boosting Scalability for MySQL and PostgreSQL Communities

AuthenticID Unveils Enhanced Smart ReAuth™ for Instant Biometric Reauthentication


PRESS RELEASE

SEATTLE, WA – August 28, 2024 – AuthenticID, a global leader in identity verification and fraud prevention solutions, announced the latest release of Smart ReAuth™. Smart ReAuth™ is a reauthentication solution that leverages a selfie to re-verify an identity. First launched in 2021, this enhanced version delivers verifications in less than one second, with rapid deployment.

Smart ReAuth™’s biometric-based approach ensures fast, accurate, and secure identity verification. During enrollment, users pass through a full identity verification transaction to verify them as a good, authenticated customer. Moving forward, users can more seamlessly prove their identity for use cases like high-risk or high-dollar-value transactions, resetting a password, or simply logging into an application. This allows businesses to confidently verify customers or employees at any point in their journey, while blocking fraudsters and account takeovers (ATO).

For businesses, Smart ReAuth™ offers flexible deployment options. It can be rapidly deployed via a web service or seamlessly embedded into a client’s own web or mobile workflow. The intuitive merchant manager provides a user-friendly admin console and a seamless experience for users. Smart ReAuth™ can be customized based on specific use cases and risk tolerance.

Security and privacy are paramount with the Smart ReAuth™ platform. Even previously verified identities can be compromised by sophisticated fraud and spoofing attempts. With liveness detection and anti-fraud checks, Smart ReAuth™ confirms the person is present and is who they claim to be. The platform also provides secure encrypted storage: once a user is reverified, their information is stored in a separate, secured, and encrypted database within the AWS cluster, with custom data retention policies allowing prompt data deletion.

As documented in both the 2024 State of Identity Fraud Report and the 2024 Mid-Year Identity Fraud Review, identity fraud attempts as well as data breaches are at record highs, impacting both consumers and businesses amid surges in fraud. Bad actors are constantly finding new security loopholes to exploit, allowing them access to systems and sensitive data. Because stopping fraud is an ongoing process, additional security measures like Smart ReAuth™ are now a necessary tool for businesses to reduce fraud loss, especially against bad actors who attempt to access systems or data across multiple attempts.

“Businesses are constantly trying to balance both security and customer experience, but with the rise in sophisticated identity fraud threats, businesses need the enhanced security that biometrics provides,” noted Alex Wong, AuthenticID VP of Product Management. “Smart ReAuth™ was developed to ensure that this solution isn’t just a powerful tool to fight fraud, but also improves the customer experience, giving fraud-weary customers peace of mind while eliminating the need for cumbersome processes like KBA or passwords.”

Smart ReAuth™ is part of AuthenticID’s comprehensive identity verification technology suite, which offers fully automated verifications with over 99% accuracy in detecting counterfeit IDs and 98% first-time pass rates. Smart ReAuth™ can be used in both consumer and workforce use cases, including but not limited to:

  • Account takeover protection via step-up: Smart ReAuth™ provides powerful protection against ATO attempts and deepfake detection, which may occur during account recovery, password or account change requests, large purchase attempts, lost/stolen card requests, and more.

  • Workforce Access: Protect sensitive data while allowing employees and contractors easy access to systems based on security levels set by the business.

  • Zero Trust Model: Use biometric Smart ReAuth™ with a zero-trust security architecture to ensure users have the resources they need while fighting fraud.

AuthenticID will continue to drive innovation forward in its technology to ensure companies can stay ahead of changing fraud tactics and regulatory requirements while delivering a best-in-class customer experience.

For more information on AuthenticID’s Smart ReAuth™ platform and to request a demo, visit www.authenticid.com/products/smart-reauth

About AuthenticID

AuthenticID is a pioneer in the Identity Verification and Fraud Detection space. AuthenticID’s fully automated solution protects identities and mitigates fraud without sacrificing user experience. Founded in 2001, AuthenticID has deep domain expertise in verifying government-issued identification documents with the market’s most powerful, next-generation, automated AI platform for fraud detection and identity verification. AuthenticID’s patented platform is used by the world’s leading financial services firms, wireless carriers, and identity verification platforms.






Posit AI Blog: Revisiting Keras for R

Before we even talk about new features, let us answer the obvious question. Yes, there will be a second edition of Deep Learning for R! Reflecting what has been happening in the meantime, the new edition covers an extended set of proven architectures; at the same time, you’ll find that intermediate-to-advanced designs already present in the first edition have become rather more intuitive to implement, thanks to the new low-level enhancements alluded to in the summary.

But don’t get us wrong – the scope of the book is completely unchanged. It is still the perfect choice for people new to machine learning and deep learning. Starting from the basic ideas, it systematically progresses to intermediate and advanced topics, leaving you with both a conceptual understanding and a bag of useful application templates.

Now, what has been happening with Keras?

State of the ecosystem

Let us start with a characterization of the ecosystem, and a few words on its history.

In this post, when we say Keras, we mean R – as opposed to Python – Keras. Now, this immediately translates to the R package keras. But keras alone won’t get you far. While keras provides the high-level functionality – neural network layers, optimizers, workflow management, and more – the basic data structure operated upon, tensors, lives in tensorflow. Thirdly, as soon as you need to perform less-than-trivial pre-processing, or can no longer hold the whole training set in memory because of its size, you’ll want to look into tfdatasets.

So it is these three packages – tensorflow, tfdatasets, and keras – that should be understood by “Keras” in the current context. (The R-Keras ecosystem, on the other hand, is quite a bit bigger. But other packages, such as tfruns or cloudml, are more decoupled from the core.)

Matching their tight integration, the aforementioned packages tend to follow a common release cycle, itself dependent on the underlying Python library, TensorFlow. For each of tensorflow, tfdatasets, and keras, the current CRAN version is 2.7.0, reflecting the corresponding Python version. The synchrony of versioning between the two Kerases, R and Python, seems to indicate that their fates had developed in similar ways. Nothing could be less true, and knowing this can be helpful.

In R, between present-from-the-outset packages tensorflow and keras, responsibilities have always been distributed the way they are now: tensorflow providing indispensable basics, but often, remaining completely transparent to the user; keras being the thing you use in your code. In fact, it is possible to train a Keras model without ever consciously using tensorflow.

On the Python side, things have been undergoing significant changes, ones where, in some sense, the latter development has been inverting the first. In the beginning, TensorFlow and Keras were separate libraries, with TensorFlow providing a backend – one among several – for Keras to make use of. At some point, Keras code got incorporated into the TensorFlow codebase. Finally (as of today), following an extended period of slight confusion, Keras got moved out again, and has started to – again – considerably grow in features.

It is just that rapid growth that has created, on the R side, the need for extensive low-level refactoring and enhancements. (Of course, the user-facing new functionality itself also had to be implemented!)

Before we get to the promised highlights, a word on how we think about Keras.

Have your cake and eat it, too: A philosophy of (R) Keras

If you’ve used Keras in the past, you know what it has always been intended to be: a high-level library, making it easy (as far as such a thing can be easy) to train neural networks in R. Actually, it’s not just about ease. Keras enables users to write natural-feeling, idiomatic-looking code. This, to a high degree, is achieved by its allowing for object composition through the pipe operator; it is also a consequence of its abundant wrappers, convenience functions, and functional (stateless) semantics.

However, due to the way TensorFlow and Keras have developed on the Python side – referring to the big architectural and semantic changes between versions 1.x and 2.x, first comprehensively characterized on this blog here – it has become more challenging to provide all of the functionality available on the Python side to the R user. In addition, maintaining compatibility with several versions of Python TensorFlow – something R Keras has always done – by necessity gets more and more challenging, the more wrappers and convenience functions you add.

So this is where we complement the above “make it R-like and natural, where possible” with “make it easy to port from Python, where necessary”. With the new low-level functionality, you won’t have to wait for R wrappers to make use of Python-defined objects. Instead, Python objects may be subclassed directly from R; and any additional functionality you’d like to add to the subclass is defined in a Python-like syntax. What this means, concretely, is that translating Python code to R has become a lot easier. We’ll catch a glimpse of this in the second of our three highlights.

New in Keras 2.6/7: Three highlights

Among the many new capabilities added in Keras 2.6 and 2.7, we quickly introduce three of the most important.

  • Pre-processing layers significantly help to streamline the training workflow, integrating data manipulation and data augmentation.

  • The ability to subclass Python objects (already alluded to several times) is the new low-level magic available to the keras user, and it powers many user-facing enhancements below.

  • Recurrent neural network (RNN) layers gain a new cell-level API.

Of these, the first two definitely deserve some deeper treatment; more detailed posts will follow.

Pre-processing layers

Before the advent of these dedicated layers, pre-processing used to be done as part of the tfdatasets pipeline. You would chain operations as required; maybe, integrating random transformations to be applied while training. Depending on what you wanted to achieve, significant programming effort may have ensued.

This is one area where the new capabilities can help. Pre-processing layers exist for several types of data, allowing for the usual “data wrangling”, as well as data augmentation and feature engineering (as in, hashing categorical data, or vectorizing text).

The mention of text vectorization leads to a second advantage. Unlike, say, a random distortion, vectorization is not something that may be forgotten about once done. We don’t want to lose the original information, namely, the words. The same happens, for numerical data, with normalization. We need to keep the summary statistics. This means there are two kinds of pre-processing layers: stateless and stateful ones. The former are part of the training process; the latter are called in advance.
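To make the stateful case concrete, here is a minimal sketch; layer_normalization() and the made-up data matrix are illustrative choices, with adapt() serving as the “called in advance” step:

library(keras)

# Hypothetical numeric training data.
x_train <- matrix(rnorm(1000), ncol = 10)

# A stateful pre-processing layer: it has to see the data once, up front.
normalizer <- layer_normalization()
adapt(normalizer, x_train)

# From here on, calling the layer applies the stored mean and variance.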

Stateless layers, on the other hand, can appear in two places in the training workflow: as part of the tfdatasets pipeline, or as part of the model.

This is, schematically, how the former would look.

library(tfdatasets)
dataset <- ... # define dataset
dataset <- dataset %>%
  dataset_map(function(x, y) list(preprocessing_layer(x), y))

While here, the pre-processing layer is the first in a larger model:

input <- layer_input(shape = input_shape)
output <- input %>%
  preprocessing_layer() %>%
  rest_of_the_model()
model <- keras_model(input, output)

We’ll talk about which way is preferable when, as well as showcase a few specialized layers, in a future post. Until then, please feel free to consult the detailed and example-rich vignette.

Subclassing Python

Imagine you wanted to port a Python model that made use of the following constraint:
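Suppose, for illustration – the concrete constraint is an assumption here – that it zeroes out negative weights: in Python, a small subclass of tf.keras.constraints.Constraint overriding __call__. With the new %py_class% API, the analogous object can be defined directly from R; a minimal sketch:

library(keras)

# Hypothetical constraint that zeroes out negative weights, subclassing
# the Python base class directly from R via %py_class%.
NonNegative(keras$constraints$Constraint) %py_class% {
  "__call__" <- function(w) {
    w * k_cast(w >= 0, k_floatx())
  }
}

# Used like any built-in constraint:
layer <- layer_dense(units = 1, kernel_constraint = NonNegative())

The NonNegative name and the backend helpers k_cast()/k_floatx() are illustrative choices; the same pattern applies to layers, metrics, and other Python-defined objects.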

See the vignette for numerous examples, syntactic sugar, and low-level details.

RNN cell API

Our third point is at least half as much a shout-out to excellent documentation as an alert to a new feature. The piece of documentation in question is a new vignette on RNNs. The vignette gives a useful overview of how RNNs work in Keras, addressing the usual questions that tend to come up once you haven’t been using them in a while: What exactly are states vs. outputs, and when does a layer return what? How do I initialize the state in an application-dependent way? What’s the difference between stateful and stateless RNNs?

In addition, the vignette covers more advanced questions: How do I pass nested data to an RNN? How do I write custom cells?

In fact, this latter question brings us to the new feature we wanted to call out: the new cell-level API. Conceptually, with RNNs, there are always two things involved: the logic of what happens at a single timestep; and the threading of state across timesteps. So-called “simple RNNs” are concerned with the latter (recursion) aspect only; they tend to exhibit the classic vanishing-gradients problem. Gated architectures, such as the LSTM and the GRU, have specifically been designed to avoid these problems; both can be easily integrated into a model using the respective layer_x() constructors. What if you’d like, not a GRU, but something like a GRU (using some fancy new activation method, say)?

With Keras 2.7, you can now create a single-timestep RNN cell (using the above-described %py_class% API), and obtain a recursive version – a complete layer – using layer_rnn():

rnn <- layer_rnn(cell = cell)
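As a minimal sketch of how this slots into a model – assuming the built-in GRU cell constructor rnn_cell_gru() as the cell; a custom %py_class% cell would be dropped in the same way:

library(keras)

# Wrap a single-timestep cell into a full recurrent layer:
# 10 timesteps of 8 features each, feeding a single-unit output.
model <- keras_model_sequential() %>%
  layer_rnn(cell = rnn_cell_gru(units = 32), input_shape = c(10, 8)) %>%
  layer_dense(units = 1)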

If you’re interested, check out the vignette for an extended example.

With that, we end our news from Keras, for today. Thanks for reading, and stay tuned for more!

Photo by Hans-Jurgen Mager on Unsplash



Altered image found in face swap ad


Apple caught between rock and hard place dealing with AI porn

Apple seems unable to stop the influx of so-called “dual use” apps that look innocent on the surface but help users create deepfake porn — at a steep cost.

Apple takes pride in regulating the App Store, and part of that control is preventing pornographic apps altogether. However, there are limits to this control, given that some apps can offer features that users can easily abuse — seemingly without Apple being aware.

According to a report from 404 Media, Apple struggles with a “dual use” problem found in apps that offer features like face swapping. While the feature is innocent enough at first glance, users are swapping faces onto pornography, sometimes using minors’ faces.

The issue became apparent when a reporter came across a paid ad on Reddit for a face swap app. Face swapping tends to be easily found and often free, so such an app would need a business model that allows paid ad placement.

What they found was an app offering users the ability to swap any face onto video from their “favorite website,” with an image suggesting Porn Hub as an option. Apple doesn’t allow porn-related apps on the App Store, but some apps relying on user content often feature such images and videos as a kind of loophole.

When Apple was alerted to the dual-use case of the advertised app, it was pulled. However, it appeared Apple wasn’t aware of the issue at all, and the app link had to be shared.

This isn’t the first time innocent-looking apps have gotten through app review and offered a service that violates Apple’s guidelines. While it isn’t as blatant a violation as changing a children’s app into a casino, the ability to generate nonconsensual intimate imagery (NCII) was clearly not something on Apple’s radar.

Face swap apps are a popular category on the App Store

Artificial intelligence features in apps can create highly realistic deepfakes, and it is important for companies like Apple to get ahead of these problems. While Apple may not be able to stop such use cases from existing, it can at least implement a policy that can be enforced in app review — clear guidelines and rules around pornographic image generation. It already stopped deepfake AI websites from using Sign in with Apple.

For example, no app should be able to source video from Porn Hub. Apple could have specific rules in place for potential dual-use apps, like zero-tolerance bans for apps discovered trying to create such content.

Apple has taken great care to ensure Apple Intelligence won’t make nude images, but that shouldn’t be the end of its oversight. Given that Apple argues it is the best arbiter of the App Store, it needs to take charge of issues like NCII generation being promoted in ads.

Face-swapping apps aren’t the only apps with a problem. Even apps that blatantly promote infidelity, intimate video chat, adult chat, or other euphemisms get through app review.

Reports have long suggested that app review is broken, and regulators are tired of platitudes. Apple needs to get a handle on the App Store or risk losing control.