
Better dplyr interface, more sdf_* functions, and RDS-based serialization routines




We are thrilled to announce that sparklyr 1.5 is now available on CRAN!

To install sparklyr 1.5 from CRAN, run
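The install command itself appears to have been dropped from this copy of the post; the standard CRAN install (presumably what was shown) is:

```r
install.packages("sparklyr")
```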

In this blog post, we will highlight the following aspects of sparklyr 1.5:

Better dplyr interface

A large fraction of pull requests that went into the sparklyr 1.5 release were focused on making
Spark dataframes work with various dplyr verbs in the same way that R dataframes do.
The full list of dplyr-related bugs and feature requests that were resolved in
sparklyr 1.5 can be found here.

In this section, we will showcase three new dplyr functionalities that were shipped with sparklyr 1.5.

Stratified sampling

Stratified sampling on an R dataframe can be accomplished with a combination of dplyr::group_by() followed by
dplyr::sample_n() or dplyr::sample_frac(), where the grouping variables specified in the dplyr::group_by()
step are the ones that define each stratum. For instance, the following query will group mtcars by number
of cylinders and return a weighted random sample of size two from each group, without replacement, and weighted by
the mpg column:
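The snippet producing the output below was dropped from this copy; a sketch of the likely code, using plain dplyr on mtcars (rows will vary, since the sample is random):

```r
library(dplyr)

mtcars %>%
  group_by(cyl) %>%
  sample_n(size = 2, weight = mpg, replace = FALSE) %>%
  print()
```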

## # A tibble: 6 x 11
## # Groups:   cyl [3]
##     mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
##   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1  33.9     4  71.1    65  4.22  1.84  19.9     1     1     4     1
## 2  22.8     4 108      93  3.85  2.32  18.6     1     1     4     1
## 3  21.4     6 258     110  3.08  3.22  19.4     1     0     3     1
## 4  21       6 160     110  3.9   2.62  16.5     0     1     4     4
## 5  15.5     8 318     150  2.76  3.52  16.9     0     0     3     2
## 6  19.2     8 400     175  3.08  3.84  17.0     0     0     3     2

Starting from sparklyr 1.5, the same can also be done for Spark dataframes with Spark 3.0 or above, e.g.:

library(sparklyr)

sc <- spark_connect(master = "local", version = "3.0.0")
mtcars_sdf <- copy_to(sc, mtcars, replace = TRUE, repartition = 3)

mtcars_sdf %>%
  dplyr::group_by(cyl) %>%
  dplyr::sample_n(size = 2, weight = mpg, replace = FALSE) %>%
  print()
# Source: spark<?> [?? x 11]
# Groups: cyl
    mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
  <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1  21       6 160     110  3.9   2.62  16.5     0     1     4     4
2  21.4     6 258     110  3.08  3.22  19.4     1     0     3     1
3  27.3     4  79      66  4.08  1.94  18.9     1     1     4     1
4  32.4     4  78.7    66  4.08  2.2   19.5     1     1     4     1
5  16.4     8 276.    180  3.07  4.07  17.4     0     0     3     3
6  18.7     8 360     175  3.15  3.44  17.0     0     0     3     2

or
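The code for this variant is missing from this copy; presumably it sampled a fixed fraction per group with dplyr::sample_frac(). The exact fraction used is not recoverable from the output, so 0.25 below is only illustrative:

```r
mtcars_sdf %>%
  dplyr::group_by(cyl) %>%
  dplyr::sample_frac(size = 0.25, weight = mpg, replace = FALSE) %>%
  print()
```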

## # Source: spark<?> [?? x 11]
## # Groups: cyl
##     mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
##   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1  21       6 160     110  3.9   2.62  16.5     0     1     4     4
## 2  21.4     6 258     110  3.08  3.22  19.4     1     0     3     1
## 3  22.8     4 141.     95  3.92  3.15  22.9     1     0     4     2
## 4  33.9     4  71.1    65  4.22  1.84  19.9     1     1     4     1
## 5  30.4     4  95.1   113  3.77  1.51  16.9     1     1     5     2
## 6  15.5     8 318     150  2.76  3.52  16.9     0     0     3     2
## 7  18.7     8 360     175  3.15  3.44  17.0     0     0     3     2
## 8  16.4     8 276.    180  3.07  4.07  17.4     0     0     3     3

Row sums

The rowSums() functionality offered by dplyr is handy when one needs to sum up
a large number of columns within an R dataframe that would be impractical to enumerate
individually.
For example, here we have a six-column dataframe of random real numbers, where the
partial_sum column in the result contains the sum of columns b through e within
each row:
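The R snippet producing this output is missing here; a sketch of the likely code (the values will differ, since the data is random):

```r
library(dplyr)

df <- tibble::tibble(
  a = runif(5), b = runif(5), c = runif(5),
  d = runif(5), e = runif(5), f = runif(5)
)
df %>% mutate(partial_sum = rowSums(.[2:5])) %>% print()
```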

## # A tibble: 5 x 7
##         a     b     c      d     e      f partial_sum
##     <dbl> <dbl> <dbl>  <dbl> <dbl>  <dbl>       <dbl>
## 1 0.781   0.801 0.157 0.0293 0.169 0.0978        1.16
## 2 0.696   0.412 0.221 0.941  0.697 0.675         2.27
## 3 0.802   0.410 0.516 0.923  0.190 0.904         2.04
## 4 0.200   0.590 0.755 0.494  0.273 0.807         2.11
## 5 0.00149 0.711 0.286 0.297  0.107 0.425         1.40

Starting with sparklyr 1.5, the same operation can be performed with Spark dataframes:
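The Spark snippet was presumably the direct analogue, assuming the sc connection from earlier and the df dataframe from the R example above:

```r
sdf <- copy_to(sc, df, overwrite = TRUE)
sdf %>%
  dplyr::mutate(partial_sum = rowSums(.[2:5])) %>%
  print()
```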

## # Source: spark<?> [?? x 7]
##         a     b     c      d     e      f partial_sum
##     <dbl> <dbl> <dbl>  <dbl> <dbl>  <dbl>       <dbl>
## 1 0.781   0.801 0.157 0.0293 0.169 0.0978        1.16
## 2 0.696   0.412 0.221 0.941  0.697 0.675         2.27
## 3 0.802   0.410 0.516 0.923  0.190 0.904         2.04
## 4 0.200   0.590 0.755 0.494  0.273 0.807         2.11
## 5 0.00149 0.711 0.286 0.297  0.107 0.425         1.40

As a bonus from implementing the rowSums feature for Spark dataframes,
sparklyr 1.5 now also offers limited support for the column-subsetting
operator on Spark dataframes.
For example, all code snippets below will return some subset of columns from
the dataframe named sdf:

# select columns `b` through `e`
sdf[2:5]
# select columns `b` and `c`
sdf[c("b", "c")]
# drop the first and third columns and return the rest
sdf[c(-1, -3)]

Weighted-mean summarizer

Similar to the two dplyr functions mentioned above, the weighted.mean() summarizer is another
useful function that has become part of the dplyr interface for Spark dataframes in sparklyr 1.5.
One can see it in action by, for example, comparing the output from the following
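The Spark-side snippet is not preserved in this copy; judging from the output below, it presumably computed the mean of mpg weighted by the wt column:

```r
mtcars_sdf %>%
  dplyr::group_by(cyl) %>%
  dplyr::summarize(mpg_wm = weighted.mean(mpg, wt)) %>%
  print()
```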

with the output from the equivalent operation on mtcars in R:
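The R-side equivalent (again assuming weighting by the wt column, which matches the numbers shown below) would be:

```r
library(dplyr)

mtcars %>%
  group_by(cyl) %>%
  summarize(mpg_wm = weighted.mean(mpg, wt)) %>%
  print()
```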

Both of them should evaluate to the following:

##     cyl mpg_wm
##   <dbl>  <dbl>
## 1     4   25.9
## 2     6   19.6
## 3     8   14.8

New additions to the sdf_* family of functions

sparklyr provides a large number of convenience functions for working with Spark dataframes,
and all of them have names starting with the sdf_ prefix.

In this section we will briefly mention four new additions
and show some example scenarios in which those functions are useful.

sdf_expand_grid()

As the name suggests, sdf_expand_grid() is simply the Spark equivalent of expand.grid().
Rather than running expand.grid() in R and importing the resulting R dataframe to Spark, one
can now run sdf_expand_grid(), which accepts both R vectors and Spark dataframes and supports
hints for broadcast hash joins. The example below shows sdf_expand_grid() creating a
100-by-100-by-10-by-10 grid in Spark over 1000 Spark partitions, with broadcast hash join hints
on variables with small cardinalities:

library(sparklyr)

sc <- spark_connect(master = "local")

grid_sdf <- sdf_expand_grid(
  sc,
  var1 = seq(100),
  var2 = seq(100),
  var3 = seq(10),
  var4 = seq(10),
  broadcast_vars = c(var3, var4),
  repartition = 1000
)

grid_sdf %>% sdf_nrow() %>% print()
## [1] 1e+06

sdf_partition_sizes()

As sparklyr user @sbottelli suggested here,
one thing that would be great to have in sparklyr is an efficient way to query partition sizes of a Spark dataframe.
In sparklyr 1.5, sdf_partition_sizes() does exactly that:

library(sparklyr)

sc <- spark_connect(master = "local")

sdf_len(sc, 1000, repartition = 5) %>%
  sdf_partition_sizes() %>%
  print(row.names = FALSE)
##  partition_index partition_size
##                0            200
##                1            200
##                2            200
##                3            200
##                4            200

sdf_unnest_longer() and sdf_unnest_wider()

sdf_unnest_longer() and sdf_unnest_wider() are the equivalents of
tidyr::unnest_longer() and tidyr::unnest_wider() for Spark dataframes.
sdf_unnest_longer() expands all elements in a struct column into multiple rows, and
sdf_unnest_wider() expands them into multiple columns. As illustrated with an example
dataframe below,

library(sparklyr)

sc <- spark_connect(master = "local")
sdf <- copy_to(
  sc,
  tibble::tibble(
    id = seq(3),
    attribute = list(
      list(name = "Alice", grade = "A"),
      list(name = "Bob", grade = "B"),
      list(name = "Carol", grade = "C")
    )
  )
)
sdf %>%
  sdf_unnest_longer(col = attribute, indices_to = "key", values_to = "value") %>%
  print()

evaluates to

## # Source: spark<?> [?? x 3]
##      id value key
##   <int> <chr> <chr>
## 1     1 A     grade
## 2     1 Alice name
## 3     2 B     grade
## 4     2 Bob   name
## 5     3 C     grade
## 6     3 Carol name

whereas

sdf %>%
  sdf_unnest_wider(col = attribute) %>%
  print()

evaluates to

## # Source: spark<?> [?? x 3]
##      id grade name
##   <int> <chr> <chr>
## 1     1 A     Alice
## 2     2 B     Bob
## 3     3 C     Carol

RDS-based serialization routines

Some readers must be wondering why a brand-new serialization format would need to be implemented in sparklyr at all.
Long story short, the reason is that RDS serialization is a strictly better replacement for its CSV predecessor.
It possesses all desirable attributes the CSV format has,
while avoiding a number of disadvantages that are common among text-based data formats.

In this section, we will briefly outline why sparklyr should support at least one serialization format other than arrow,
deep-dive into issues with CSV-based serialization,
and then show how the new RDS-based serialization is free from those issues.

Why isn't arrow for everyone?

To transfer data between Spark and R correctly and efficiently, sparklyr must rely on some data serialization
format that is well-supported by both Spark and R.
Unfortunately, not many serialization formats satisfy this requirement,
and among the ones that do are text-based formats such as CSV and JSON,
and binary formats such as Apache Arrow, Protobuf, and as of recent, a small subset of RDS version 2.
Further complicating the matter is the additional consideration that
sparklyr should support at least one serialization format whose implementation can be fully self-contained within the sparklyr code base,
i.e., such serialization should not depend on any external R package or system library,
so that it can accommodate users who want to use sparklyr but who do not necessarily have the required C++ compiler tool chain and
other system dependencies for setting up R packages such as arrow or
protolite.
Prior to sparklyr 1.5, CSV-based serialization was the default alternative to fall back to when users do not have the arrow package installed or
when the type of data being transported from R to Spark is unsupported by the version of arrow available.

Why is the CSV format not ideal?

There are at least three reasons to believe the CSV format is not the best choice when it comes to exporting data from R to Spark.

One reason is efficiency. For example, a double-precision floating point number such as .Machine$double.eps needs to
be expressed as "2.22044604925031e-16" in CSV format in order not to incur any loss of precision, thus taking up 20 bytes
rather than 8 bytes.

But more important than efficiency are correctness concerns. In an R dataframe, one can store both NA_real_ and
NaN in a column of floating point numbers. NA_real_ should ideally translate to null within a Spark dataframe, whereas
NaN should continue to be NaN when transported from R to Spark. Unfortunately, NA_real_ in R becomes indistinguishable
from NaN once serialized in CSV format, as evident from a quick demo shown below:
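The setup code for the demo is missing from this copy; it was presumably along these lines, producing the first output block below:

```r
library(dplyr)

original_df <- data.frame(x = c(NA_real_, NaN))
original_df %>% mutate(is_nan = is.nan(x)) %>% print()
```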

##     x is_nan
## 1  NA  FALSE
## 2 NaN   TRUE
csv_file <- "/tmp/data.csv"
write.csv(original_df, file = csv_file, row.names = FALSE)
deserialized_df <- read.csv(csv_file)
deserialized_df %>% dplyr::mutate(is_nan = is.nan(x)) %>% print()
##    x is_nan
## 1 NA  FALSE
## 2 NA  FALSE

Another correctness issue very much similar to the one above was the fact that
"NA" and NA within a string column of an R dataframe become indistinguishable
once serialized in CSV format, as correctly pointed out in
this Github issue
by @caewok and others.

RDS to the rescue!

The RDS format is one of the most widely used binary formats for serializing R objects.
It is described in some detail in chapter 1, section 8 of
this document.
Among the advantages of the RDS format are efficiency and accuracy: it has a reasonably
efficient implementation in base R, and supports all R data types.

Also worth noticing is the fact that when an R dataframe containing only data types
with sensible equivalents in Apache Spark (e.g., RAWSXP, LGLSXP, CHARSXP, REALSXP, etc.)
is saved using RDS version 2,
(e.g., serialize(mtcars, connection = NULL, version = 2L, xdr = TRUE)),
only a tiny subset of the RDS format will be involved in the serialization process,
and implementing deserialization routines in Scala capable of decoding such a restricted
subset of RDS constructs is in fact a reasonably simple and straightforward task
(as shown here).

Last but not least, because RDS is a binary format, it allows NA_character_, "NA",
NA_real_, and NaN to all be encoded in an unambiguous manner, hence allowing sparklyr
1.5 to avoid all correctness issues detailed above in non-arrow serialization use cases.

Other benefits of RDS serialization

In addition to correctness guarantees, the RDS format also offers quite a few other advantages.

One advantage is of course performance: for example, importing a non-trivially-sized dataset
such as nycflights13::flights from R to Spark using the RDS format in sparklyr 1.5 is
roughly 40%-50% faster compared with CSV-based serialization in sparklyr 1.4. The
current RDS-based implementation is still nowhere as fast as arrow-based serialization
though (arrow is about 3-4x faster), so for performance-sensitive tasks involving
heavy serialization, arrow should still be the top choice.

Another advantage is that with RDS serialization, sparklyr can import R dataframes containing
raw columns directly into binary columns in Spark. Thus, use cases such as the one below
will work in sparklyr 1.5
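The example itself is not preserved in this copy; a sketch of what it plausibly looked like (raw vectors produced by serialize() stored in a list column, assuming an existing connection sc):

```r
sdf <- copy_to(
  sc,
  tibble::tibble(
    x = list(serialize("sparklyr", NULL), serialize(c(123456, 789), NULL))
  )
)
```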

While most sparklyr users probably won't find this capability of importing binary columns
to Spark immediately useful in their typical sparklyr::copy_to() or sparklyr::collect()
usages, it does play a crucial role in reducing serialization overheads in the Spark-based
foreach parallel backend that
was first introduced in sparklyr 1.2.
This is because Spark workers can directly fetch the serialized R closures to be computed
from a binary Spark column instead of extracting those serialized bytes from intermediate
representations such as base64-encoded strings.
Similarly, the R results from executing worker closures will be directly available in RDS
format, which can be efficiently deserialized in R, rather than being delivered in other
less efficient formats.

Acknowledgement

In chronological order, we would like to thank the following contributors for making their pull
requests part of sparklyr 1.5:

We would also like to express our gratitude for the numerous bug reports and feature requests for
sparklyr from a fantastic open-source community.

Finally, the author of this blog post is indebted to
@javierluraschi,
@batpigandme,
and @skeydan for their valuable editorial inputs.

If you wish to learn more about sparklyr, check out sparklyr.ai,
spark.rstudio.com, and some of the previous release posts, such as
sparklyr 1.4 and
sparklyr 1.3.

Thanks for reading!

New Deal On Chevron Refinery Brings Millions To City Of Richmond, California





“Don’t raise the bridge, lower the river,” is an expression that celebrates creative thinking. The city of Richmond, California, has had a fraught relationship with Chevron for more than a century. On one hand, the Chevron oil refinery in Richmond has been providing jobs for local residents and tax revenue for the city for more than 100 years. On the other hand, the pollution from that refinery has had a negative impact on the quality of life in the city.

According to NBC News in San Francisco, just hours before a deadline to remove ballot initiatives from the list of items voters will consider in November, the Richmond City Council approved a settlement with Chevron that will bring the city more than a half billion dollars over 10 years. Approved by the council on June 18, the ballot proposal would have asked voters to decide whether the city should impose a $1 per barrel tax on feedstock (oil or other raw material used in the industrial process) refined at Chevron’s Richmond refinery for 50 years.

The city’s finance director estimated the per barrel tax would have added $60 to $90 million per year to the city’s coffers. Chevron offered to pay $300 million over 10 years, which was increased to $550 million if the city dropped the ballot measure. Just before the unanimous vote, Richmond Mayor Eduardo Martinez called it a historic moment and said the fight over Richmond’s environment isn’t over. “The community of Richmond has created a movement that will echo across the country. What’s happened here has demonstrated to the community, to the country, to the world, that when we as people pull together, we can create change. And that by standing up for environmental justice just as we have, other communities may also require that their polluters do the right thing, either by measure or by negotiation.”

Chevron & Richmond Strike A Deal

The city had already been sued by a local resident and a nonprofit over the language in the ballot proposal, over which Chevron also indicated it would sue. More than one council member referred to a similar ballot measure passed by the Southern California city of Carson taxing a local refinery in 2017, which is still being held up by the courts. Council member Doria Robinson, who grew up in Richmond’s Iron Triangle area, said Carson hasn’t seen “a dime” of that tax money. “If we were fighting over the kind of pollution that was going into our air, I would be in it to the bitter end because it’s about the principle, but this has always been about money, (it’s) about paying their fair share. I think that a half a billion dollars for this community is something that this community needs.”

Brian Hubinger, senior manager of public and government affairs at Chevron, told the council the settlement found common ground. “This agreement will allow Chevron Richmond to continue to employ thousands of Bay Area residents and remain focused on providing the affordable, reliable and ever-cleaner energy this region demands every day, while also supplying the city with much needed additional funding to support our community’s needs,” he said.

Chevron will pay the city $50 million annually for the first five years and $60 million annually for the remaining five years of the deal. The money will go into Richmond’s general fund, and Chevron has agreed not to take credit for how the city spends it. The settlement won’t affect any of the other taxes Chevron already pays the city, including property taxes, the city’s Measure U business license tax and a utility user tax. The city and Chevron struck a similar deal in 2010, for less money, that expires next year.

How The Deal With Chevron Got Done

In response to that settlement, Politico reports the local activists and city council who initiated the ballot initiative process had not set out to win a compromise. And yet they discovered a new source of leverage that can be used to win concessions from large corporations with little political bloodshed. In doing so, they may have inadvertently created a playbook for other local governments by proposing a 50-year tax that would have left Chevron’s business future to the whim of voters.

In the weeks since the settlement was ratified, other California communities have reached out to Richmond to learn more about how this deal got done. Representatives from Healthy Martinez, a watchdog group in a nearby Bay Area city that is home to the Martinez Refining Company, say they are exploring their own tax initiative. Organizers who helped launch Richmond’s measure say at least two other California cities with major refineries have contacted them for guidance.

Chevron officials downplay the Richmond deal as merely the latest “historic wrinkle” in a series of tax agreements with the city, and one that sets no new precedent. But the speed with which the company succumbed to political pressure has activists asking whether the strategy of forcing environmentally damaging industries to defend their business practices before voters might be exported across the state, and even the nation. “All of these refinery communities should be considering this sort of effort,” said Healthy Martinez member Heidi Taylor.

This Deal Is Not For Every Community

There will be challenges in exporting Richmond’s process to other jurisdictions, Politico says. It’s not entirely unusual for cities to try to pull money from the large corporations within their jurisdiction. Ironically, California law is supposed to make it harder to enact a special tax by requiring voter approval, but that provision may have given Richmond a unique source of leverage over Chevron through the threat of an all-consuming ballot fight. According to David Hackett, the chair of the board of consulting firm Stillwater Associates, there is no guarantee that other communities have the same level of antagonism toward Chevron that Richmond has. Plus, a less profitable refinery might simply fold in the face of a $50 million annual tax.

The success of the initiative in Richmond was a result of decades of organizing and research by a grassroots base that became formidable enough to scare Chevron. The Healthy Martinez coalition, which formed in response to a different high-profile refinery disaster in 2022, is in its infancy compared to what has been built in Richmond. “The Richmond refinery is a unique asset and the Bay Area is a unique community and political environment,” said Ross Allen, a spokesperson for the company. “Chevron maintains continuous dialogue with the jurisdictions where we have assets. The settlement was fruit of that conversation and reflects our long-term dialogue with city officials.”

The organizers of the Richmond campaign insist from the outset they had more in their sights than just taxing Chevron. Their initiative, they point out, would have been called Polluters Pay. “It was always our intention to set a precedent,” said Megan Zapata, a local environmental activist.

The Takeaway

Holding polluters accountable is hard work. Cities, states, and national governments are reluctant to say no to corporations that provide jobs and pay local taxes. Wealthy corporations have little trouble finding compliant government officials who are willing to treat them favorably in exchange for certain “considerations.” A million dollars spread around among politicians can lead to billions in profits, so why not do it? One hand washes the other, so where’s the harm? Besides, we need fossil fuels to live a life of comfort and convenience. You can’t make an omelet without breaking a few eggs, right?

The obverse of that conventional thinking is that a person can’t dump their waste on their neighbor’s property without penalty, so why should corporations be allowed to dump their waste products into the atmosphere, into the soil, and into the rivers for free? For excellent insights into how business games the system to make it a “heads we win, tails you lose” situation, read John Grisham’s The Appeal. It’s fiction, of course, but then again, is it really?

The city of Richmond may have discovered a new lever it can pull to protect its residents from pollution, and that’s good news. The world needs as many arrows in its quiver as it can get to fight fossil fuel pollution, but this one is only applicable in specific instances in which the offending industry is located within certain geographic confines. Still, it represents a small but important victory in the fight for climate justice.



ios – The same video is displayed on both layers



I am creating two layers for a video and trying to display my own video URL on each separate layer. What am I doing wrong? Only the first video is displayed in the final result, on both the video layer and the overlay layer.
Issue: In the final output video, I only see the video from the first URL (videoURL) on both the video layer and the overlay layer. The overlay layer does not display the video from overlayVideoURL as expected.
What I've Tried:
I checked the CALayer frames and confirmed that they are set correctly.
Verified that AVVideoCompositionCoreAnimationTool is used correctly.
Confirmed that AVMutableComposition has both tracks inserted.
Questions:
Is there a mistake in how I'm setting up the video layers?
How can I ensure that each layer displays its respective video correctly?
Additional Details:
The videos are correctly loaded and inserted into the AVMutableComposition.
The final composition only displays the video from videoURL on both layers.

func makeVideo(videoURL: URL, overlayVideoURL: URL, forName name: String, onComplete: @escaping (URL?) -> Void) {
    let asset = AVURLAsset(url: videoURL)
    let overlayAsset = AVURLAsset(url: overlayVideoURL)
    let composition = AVMutableComposition()
    guard
        let compositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid),
        let assetTrack = asset.tracks(withMediaType: .video).first,
        let overlayCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid),
        let overlayAssetTrack = overlayAsset.tracks(withMediaType: .video).first
    else {
        print("Something is wrong with the assets.")
        onComplete(nil)
        return
    }
    do {
        let timeRange = CMTimeRange(start: .zero, duration: asset.duration)
        try compositionTrack.insertTimeRange(timeRange, of: assetTrack, at: .zero)
        try overlayCompositionTrack.insertTimeRange(timeRange, of: overlayAssetTrack, at: .zero)
    } catch {
        print(error)
        onComplete(nil)
        return
    }
    let videoInfo = orientation(from: assetTrack.preferredTransform)
    let videoSize = videoInfo.isPortrait ? CGSize(width: assetTrack.naturalSize.height, height: assetTrack.naturalSize.width) : assetTrack.naturalSize
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = videoSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    let layerInstruction = compositionLayerInstruction(for: compositionTrack, assetTrack: assetTrack)
    let overlayLayerInstruction = compositionLayerInstruction(for: overlayCompositionTrack, assetTrack: overlayAssetTrack)
    instruction.layerInstructions = [layerInstruction, overlayLayerInstruction]
    videoComposition.instructions = [instruction]
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: videoSize)
    let overlayVideoLayer = CALayer()
    overlayVideoLayer.frame = CGRect(x: videoSize.width / 4, y: videoSize.height / 4, width: videoSize.width / 2, height: videoSize.height / 2)
    let outputLayer = CALayer()
    outputLayer.frame = CGRect(origin: .zero, size: videoSize)
    outputLayer.addSublayer(videoLayer)
    outputLayer.addSublayer(overlayVideoLayer)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer, overlayVideoLayer], in: outputLayer)
    guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
        print("Cannot create export session.")
        onComplete(nil)
        return
    }
    let videoName = UUID().uuidString
    let exportURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(videoName).appendingPathExtension("mov")
    export.videoComposition = videoComposition
    export.outputFileType = .mov
    export.outputURL = exportURL
    export.exportAsynchronously {
        DispatchQueue.main.async {
            switch export.status {
            case .completed:
                onComplete(exportURL)
            default:
                print("Something went wrong during export.")
                print(export.error ?? "unknown error")
                onComplete(nil)
            }
        }
    }
}

Anyone Could Evade Airport Security Via SQL Injection Attack



Researchers highlighted a serious security threat posed to airports and flight cockpits due to a vulnerability in the security system. Specifically, they found an SQL injection flaw that attackers could exploit to bypass airport security checks and fraudulently enter unauthorized areas like cockpits.

Researchers Demo How An SQL Injection Could Bypass Airport Security

Two researchers, Ian Carroll and Sam Curry, recently shared insights about a serious yet trivially exploitable threat to airport security. Specifically, they noticed how an adversary could bypass airport security checks via SQL injection attacks on the FlyCASS cockpit security system.

FlyCASS is a dedicated web-based cockpit access security system that helps airlines verify crew members’ jumpseat eligibility. This software typically serves small airlines, letting them satisfy the Known Crewmember (KCM) program and Cockpit Access Security System (CASS) – a crew verification and pilot authorization initiative from the Transportation Security Administration (TSA).

As explained in their post, the researchers observed the SQL injection vulnerability affecting the FlyCASS login page. An adversary could inject malicious SQL queries into the crew members’ database. At this point, the researchers noticed there were no further authentication checks for adding new employees to the database. To be sure of the problem, they added a “Test” user account, which received immediate authorization for KCM and CASS use.

Consequently, an adversary may add any consumer within the KCM and CASS database to evade the standard airport screening practices.

The Vulnerability Mounted(?)

Following this discovery, the researchers responsibly disclosed the matter to the Division of Homeland Safety (DHS). The DHS acknowledged their bug report, assuring needed enter within the matter. Consequently, the researchers discovered FlyCASS disabled from the KCM/CASS till the flaw was remedied.

Nonetheless, after the FlyCASS repair, the researchers had an ironic expertise as they didn’t hear farther from the DHS in regards to the vulnerability disclosure. Furthermore, additionally they obtained a press release from TSA denying the precise exploit. In line with Bleeping Laptop, right here’s how TSA’s assertion reads,

In April, TSA turned conscious of a report {that a} vulnerability in a 3rd get together’s database containing airline crewmember info was found and that by way of testing of the vulnerability, an unverified identify was added to an inventory of crewmembers within the database. No authorities information or methods had been compromised and there aren’t any transportation safety impacts associated to the actions.
TSA doesn’t solely depend on this database to confirm the id of crewmembers. TSA has procedures in place to confirm the id of crewmembers and solely verified crewmembers are permitted entry to the safe space in airports. TSA labored with stakeholders to mitigate towards any ​recognized cyber vulnerabilities.

Nonetheless, the researchers stand by their findings, alongside hinting at different assault possibilities threatening the KCM/CASS checks.

Tell us your ideas within the feedback.

VMware Workstation Download – 17.6.0


Run Windows, Linux and BSD virtual machines on a Windows or Linux desktop with VMware Workstation Pro.

Build, test and demo software across an array of different devices, platforms and clouds. IT professionals, developers and businesses rely on Workstation Pro every day to support their projects and customers. Workstation Pro makes it easy to run complex local virtual environments to simulate operating systems, platforms and clouds, all from the same desktop PC.

What are the different editions of VMware Workstation?

The VMware Workstation product line consists of two products: Workstation Pro and Workstation Player. Together they are referred to as ‘VMware Workstation,’ and where there are differences, they are called out.

What are the differences between Workstation Pro and Workstation Player?

Workstation Pro and Workstation Player share the same hypervisor technology, but with distinct user interfaces for different types of use.

Workstation Player is designed for single graphical VM operation, or for command-line operation with ‘vmrun’, for running a different operating system in a secure, isolated sandbox on a PC.

Workstation Pro is designed for IT professionals and developers, providing a more fully featured interface with capabilities for running multiple virtual machines at the same time. Workstation Pro can provide and configure virtual networking, create clones, connect to vSphere, show multiple VMs at the same time in a tabbed UI and more.

Do I need to dual boot or repartition the disk?

No. VMware Workstation uses your computer’s file system and creates files that map to a virtual machine’s disk drives, so there is no need to create a partition for each operating system. If you already have another OS installed as a dual boot on your computer, you can use VMware Workstation to run that other OS in a virtual machine on your host operating system.

Instead of dual booting, you can run both operating systems simultaneously and seamlessly switch from one to the other with a click of your mouse.

Features

Run VMs and Containers on a Single PC

Deploy local OCI containers and Kubernetes clusters with VM isolation, virtual networking and virtual resource options through the new vctl CLI tool.

Develop and Test for Any Platform

Fix more bugs and ship quality code on time using Workstation Pro in development and testing. Virtualize nearly any x86 operating system available today on a desktop PC.

Enjoy Baked-In vSphere and ESXi Support

Run ESXi and vCenter as virtual machines on the desktop, and connect to a remote vSphere environment for quick VM access and basic administration tasks.

Run Secure and Isolated Desktops

Run a secure second desktop with different privacy settings, tools and networking configurations for online safety, or take “snapshots” that can be restored later.

Expand the Power of Your PC with Virtualization

Discover the true power and flexibility of your desktop or laptop computer with VMware Workstation. Reduce hardware costs by 50% or more by running multiple operating systems simultaneously on a single PC. Automate and streamline tasks to save time and improve productivity. Join the millions worldwide who use Workstation to:

  • Host legacy applications and overcome platform migration issues
  • Configure & test new software or patches in an isolated environment
  • Automate tasks for software development and testing
  • Demonstrate multi-tier configurations on a single PC

Use Multiple Operating Systems Concurrently on the Same PC

VMware Workstation makes it simple to create and run multiple virtual machines on your desktop or laptop computer. You can convert an existing physical PC into a VMware virtual machine, or create a new virtual machine from scratch. Each virtual machine represents a complete PC, including the processor, memory, network connections and peripheral ports.

VMware Workstation lets you use your virtual machines to run Windows, Linux and a host of other operating systems side by side on the same computer. You can switch between operating systems instantly with a click of a mouse, share files between virtual machines with drag-and-drop functionality and access all the peripheral devices you rely on.

Take Snapshots & Videos of Your Virtual Machines

With Workstation, you can take a “snapshot” that preserves the state of a virtual machine so you can return to it at any time. Snapshots are useful when you need to revert your virtual machine to a prior, stable system state. Workstation displays thumbnails of all your snapshots on a single screen, making it easy for you to track and revert to a previously saved snapshot.

You can even use Workstation 6 to record and play video files that capture all changes to a virtual machine over a period of time. This function is exclusive to VMware Workstation and is extremely useful for software debugging, Help Desk forensics, sales demonstrations and training.

Run an Entire Multi-tier System on a Single Host Computer

Run multi-tier enterprise applications on a single piece of hardware by managing network-connected virtual machines with the Teams feature of Workstation 6. Teams let you create virtual network environments that include client, server and database virtual machines.

With Workstation Teams, you can turn an entire multi-tier environment on and off with a single click of the mouse button. Workstation displays live thumbnails of all connected virtual machines, enabling you to easily identify and switch between the virtual machines associated with a team.

Clone Your Virtual Machines to Accelerate Deployment

Installing operating systems and applications can be time consuming. With clones, you can make many copies of a virtual machine from a single installation and configuration process. This capability makes it fast and simple to distribute standardized computing environments to employees and students, or to create a baseline configuration for testing.

Take Your Virtual Machines With You

New to Workstation is the ability to create and deploy secure virtual machines (called ACEs) with the ACE Option Pack. Mobility is one of the primary benefits of this option pack, as it enables users to securely take their desktops with them on portable media devices such as USB thumb drives.

What’s New

Introducing vmcli

vmcli is a command-line tool included with VMware Workstation Pro, enabling users to interact with the hypervisor directly from a Linux or macOS terminal, or the Windows command prompt. With vmcli, you can perform a variety of operations such as creating new virtual machines, generating VM templates, powering on VMs, and modifying various VM settings. Additionally, you can also create scripts to run multiple commands sequentially. For more information, see Using VMware Workstation Pro.
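
As a rough illustration of scripting sequential operations around a CLI like this, the Python sketch below builds vmcli command lines and would dispatch them via subprocess. The invocation shape, the `/vms/dev.vmx` path and the `Power start` / `Snapshot take` subcommands are assumptions for illustration only – consult Using VMware Workstation Pro for the real syntax on your version:

```python
import shlex
import subprocess

def vmcli(vmx_path, *args, dry_run=False):
    # Hypothetical invocation shape: vmcli <vm.vmx> <module> <operation> ...
    cmd = ["vmcli", vmx_path, *args]
    if dry_run:
        # Return the command line instead of executing it.
        return " ".join(shlex.quote(part) for part in cmd)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Run a sequence of operations against one VM, as the release notes describe.
steps = [("Power", "start"), ("Snapshot", "take", "baseline")]
for step in steps:
    print(vmcli("/vms/dev.vmx", *step, dry_run=True))
```

The dry-run mode makes the sequencing visible without requiring a Workstation Pro install; dropping `dry_run=True` would execute each step in order and raise on the first failure thanks to `check=True`.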

New Guest Operating System Support

  • Windows Server 2025
  • Windows 11 Version 23H2
  • Ubuntu 24.04
  • Fedora 40

New Host Operating System Support

  • Windows Server 2025
  • Windows 11 Version 23H2
  • Ubuntu 24.04
  • Fedora 40

Resolved Issues

  • VMware KVM crashes while running the ‘vmware-kvm --preferences’ command
  • VMware KVM crashes when you try to open its Preferences dialog by using the ‘vmware-kvm --preferences’ command. This issue is resolved.
  • Virtual machines run unusually slowly on Windows hosts
  • Running virtual machines on Windows hosts as a non-administrator user might result in high host CPU usage and poor guest performance. This issue is resolved.
  • Workstation installation fails on Linux hosts with a compilation error
  • If you try to install Workstation on Linux hosts with kernel version 6.8, you receive a compilation error. For details, see https://knowledge.broadcom.com/external/article?legacyId=80807. This issue is resolved.

Security Issues

  • OpenSSL has been updated to 3.0.14
  • Expat has been updated to 2.6.2
  • 7zip has been updated to 23.01

Known Issues

  • CentOS 9 Stream with kernel versions later than 5.14.0-432 is not supported as a host on Workstation Pro 17.6
  • Workaround: None
  • Inline product upgrade from versions earlier than 17.6 will not be automatic for Workstation Pro on Windows hosts
  • Due to the migration from VMware services to Broadcom, Workstation Pro cannot automatically apply this upgrade.
  • Workaround: Refer to the KB368734 article for downloading and installing Workstation Pro from the Broadcom support portal.
  • The multi-monitor feature might not work correctly in specific topologies
  • In specific situations, depending on hardware and topology, the multi-monitor feature does not work as expected. You might see issues such as the topology reverting to a single screen, or failure to cycle through screens.
  • Workaround: None.