
ios – Xcode Builds Successfully and Then Throws Error Stating Missing Files


Re: iOS app.

When I install pods via the CLI to my project for the first time, launch Xcode, and then run the app, everything works fine – no build errors.

But after a few instances of running the project on my device, build errors suddenly appear, like:

/Pods/FirebaseCrashlytics/Crashlytics/Crashlytics/Settings/Models/FIRCLSApplicationIdentifierModel.m:19:9 ‘Crashlytics/Shared/FIRCLSByteUtility.h’ file not found

/Pods/PostHog/vendor/libwebp/ph_sharpyuv_csp.h /Pods/PostHog/vendor/libwebp/ph_sharpyuv_csp.h: No such file or directory

And I don’t know if it’s due to my Podfile or any Build Settings/Phases/Rules, but this keeps happening repeatedly and it’s impossible to develop anything with this.

I’ve tried a string of commands such as pod deintegrate, pod cache clean --all, removing Podfile.lock and running pod install again, removing derived data, and cleaning the build folder.
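Spelled out, that reset sequence looks roughly like this (a sketch of the steps just listed; the DerivedData path assumes Xcode’s default location):

# Run from the directory that contains the Podfile.
pod deintegrate                               # remove the Pods integration from the Xcode project
pod cache clean --all                         # clear the local CocoaPods cache
rm -f Podfile.lock                            # force a fresh dependency resolution
rm -rf ~/Library/Developer/Xcode/DerivedData  # remove derived data
pod install                                   # reinstall the pods from scratch

Cleaning the build folder is then done from Xcode itself (Product > Clean Build Folder).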

I still keep running into the same build error, and it always happens after a few builds; nothing is missing beforehand, when the project builds successfully.

Here is my Podfile for reference:

# Uncomment the next line to define a global platform for your project
platform :ios, '17.0'

def google_utilities
  pod 'GoogleUtilities/AppDelegateSwizzler'
  pod 'GoogleUtilities/Environment'
  pod 'GoogleUtilities/ISASwizzler'
  pod 'GoogleUtilities/Logger'
  pod 'GoogleUtilities/MethodSwizzler'
  pod 'GoogleUtilities/NSData+zlib'
  pod 'GoogleUtilities/Network'
  pod 'GoogleUtilities/Reachability'
  pod 'GoogleUtilities/UserDefaults'
end


target 'SE' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for SE
  pod 'Firebase/Core'
  pod 'Firebase/Firestore'
  pod 'Firebase/Auth'

  google_utilities
end

target 'NSE' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for NSE
  pod 'Firebase/Messaging'

  google_utilities
end

target 'targetApp' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for targetApp
  pod 'Firebase/Core'
  pod 'Firebase/Crashlytics'
  pod 'Firebase/Messaging'
  pod 'Firebase/Firestore'
  pod 'Firebase/Storage'
  pod 'Firebase/Functions'
  pod 'PromiseKit', '~> 6.0'
  pod 'lottie-ios'
  pod 'GooglePlaces'
  pod 'JWTDecode', '~> 2.4'
  pod 'PostHog'
  pod 'Kingfisher', '~> 8.0'
  pod 'PhoneNumberKit'

  google_utilities
end

post_install do |installer|

  installer.aggregate_targets.each do |target|
    target.xcconfigs.each do |variant, xcconfig|
      xcconfig_path = target.client_root + target.xcconfig_relative_path(variant)
      IO.write(xcconfig_path, IO.read(xcconfig_path).gsub("DT_TOOLCHAIN_DIR", "TOOLCHAIN_DIR"))
    end
  end
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      if config.base_configuration_reference.is_a? Xcodeproj::Project::Object::PBXFileReference
        xcconfig_path = config.base_configuration_reference.real_path
        IO.write(xcconfig_path, IO.read(xcconfig_path).gsub("DT_TOOLCHAIN_DIR", "TOOLCHAIN_DIR"))
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '17.0'
      end
    end
  end
  installer.pods_project.targets.each do |target|
    if target.name == 'BoringSSL-GRPC'
      target.source_build_phase.files.each do |file|
        if file.settings && file.settings['COMPILER_FLAGS']
          flags = file.settings['COMPILER_FLAGS'].split
          # The reject! block was truncated in the original post; the usual
          # BoringSSL-GRPC workaround removes this unsupported flag:
          flags.reject! { |flag| flag == '-GCC_WARN_INHIBIT_ALL_WARNINGS' }
          file.settings['COMPILER_FLAGS'] = flags.join(' ')
        end
      end
    end
  end
end

And here is my only “Run Script” in Build Phases:

"${PODS_ROOT}/FirebaseCrashlytics/upload-symbols" \
  -gsp "${PROJECT_DIR}/targetApp/GoogleService-Info.plist" \
  -p ios \
  "${DWARF_DSYM_FOLDER_PATH}/${DWARF_DSYM_FILE_NAME}"

This Week’s Awesome Tech Stories From Around the Web (Through April 26)



Artificial Intelligence

Google’s AI Overviews Now Reach More Than 1.5 Billion People Every Month
Jay Peters | The Verge

“Google started to broadly roll out AI Overviews last May. Despite some awkward suggestions found shortly after their launch, the company has continued to expand upon the tool with updates, showing AI Overviews for more types of queries, and even officially adding ads as it aims to compete with other AI-powered search tools like ChatGPT Search and Perplexity.”

Future

Waymo Might Be Willing to Sell You a Self-Driving Car, Says Sundar Pichai
Umar Shakir | The Verge

“Pichai was asked about the long-term business model for Waymo, and he responded that it includes expanding partnerships like it has with Moove in Miami and Uber in Austin and, soon, Atlanta, but also mentioned ‘future optionality around personal ownership.'”

Robotics

Stumbling and Overheating, Most Humanoid Robots Fail to Finish Half-Marathon in Beijing
Zeyi Yang | Wired

“Only six of the 21 robots in the race crossed the finish line, highlighting just how far humanoids are from keeping up with their real human counterparts. …The fastest robot, Tiangong Ultra, developed by Chinese robotics company UBTech in collaboration with the Beijing Humanoid Robot Innovation Center, finished the race in 2 hours and 40 minutes after assistants changed its batteries three times and it fell down once.”

Tech

OpenAI Forecasts Revenue Topping $125 Billion in 2029 as Agents, New Products Gain
Sri Muppidi | The Information

“For two years, ChatGPT has been OpenAI’s cash cow. But by the end of the decade, the company has told some potential and current investors it expects combined sales from agents and other new products to exceed its popular chatbot, lifting total sales to $125 billion in 2029 and $174 billion the following year, according to documents seen by The Information.”

Computing

Meta Is Bringing Smart Glasses Live Translation and AI to More People
Will Shanklin | Engadget

“Live translation, previously available in early access, is now rolling out in every region where Ray-Ban Meta glasses are available. Useful for trips abroad or chats with locals who speak a different language, the AI-powered feature speaks a translation in your preferred language in real time. You can also view a translated transcript on your phone.”

Tech

The Hottest AI Job of 2023 Is Already Obsolete
Isabelle Bousquette | The Wall Street Journal

“Prompt engineering jobs, once buzzy and high-paying, are becoming obsolete due to AI advancements. AI models now intuit user intent, negating the need for specialized prompt engineers. Companies are training existing employees in AI prompting, further reducing the demand for dedicated roles.”

Future

Slate Truck Is a $20,000 American-Made Electric Pickup With No Paint, No Stereo, and No Touchscreen
Tim Stevens | The Verge

“Slate is presenting its truck as minimalist design with DIY purpose, an attempt not just to go cheap but to create a new class of vehicle with a huge focus on personalization. That design also enables a low-cost approach to manufacturing that has caught the eye of major investors, reportedly including Jeff Bezos.”

Computing

TSMC Shows Off 1.4nm Chip Tech That Will Appear in Future iPhones and Other Devices
Steve Dent | Engadget

“The technology promises a 15 percent performance boost, plus a 30 percent reduction in power draw compared to 2nm processors set to enter production later in 2025, TSMC said. The 1.4nm tech is likely to be used in processors for Apple, Intel, and AMD.”

Artificial Intelligence

Generative AI Is Reshaping South Korea’s Webcomics Industry
Michelle Kim | MIT Technology Review

“The digital clone of Lee would generate new comics with his artistic intuition, perceiving its environment and making creative choices as he would—perhaps even publishing a series far in the future starring Kkachi as a post-human protagonist. ‘Fifty years from now, what kinds of comics would Lee Hyun-se create if he saw the world then?’ Lee asks. ‘The question fascinates me.'”

Tech

Microsoft Made an Ad With Generative AI and Nobody Noticed
Dominic Preston | The Verge

“Knowing that AI was involved, it’s easy enough to guess where—shots of meeting notes that clearly weren’t hand-written, a Mason jar that’s suspiciously large, the telling AI sheen to it all—but without knowing to look for it, it’s clear that plenty of viewers couldn’t spot the difference. The ad’s quick cuts help hide the AI output’s flaws, but suggest that in the right hands, AI tools are now powerful enough to go unnoticed.”

Future

XPrize in Carbon Removal Goes to Enhanced Rock Weathering
Emily Waltz | IEEE Spectrum

“The company spreads crushed basalt on small farms in India and Africa. The silica-rich volcanic rock improves the quality of the soil for the crops but also helps remove carbon dioxide from the air. It does this by reacting with dissolved CO2 in the soil’s water, turning it into bicarbonate ions and preventing it from returning to the atmosphere.”

The ‘Download More Labels!’ Illusion in AI Research



A common view in current machine learning research is that machine learning itself can be used to improve the quality of AI dataset annotations – particularly image captions intended for use in vision-language models (VLMs). This line of thinking is driven by the high cost of human annotation, and the added burden of supervising annotator performance.

Arguably this is the AI equivalent of the early 2000s ‘download more RAM’ meme, which satirized the notion that a hardware limitation could be resolved with a software-based fix.

It’s also an under-regarded concern; while new AI models attract widespread attention in both public and commercial spheres, annotation often appears to be a trivial detail in machine learning pipelines, overshadowed by the excitement surrounding broader frameworks.

In fact, the capacity of machine learning systems to recognize and reproduce patterns (the central use case of nearly all AI systems) depends on the quality and consistency of real-world annotations – labels and phrases which are created or adjudicated by real people, often making subjective judgments about individual data points in non-ideal circumstances.

Inevitably, systems which seek to observe and reproduce patterns in annotator behavior (and thereby replace human annotators and facilitate accurate labeling at scale) cannot hope to perform well on data not contained in the examples taken from human observers. Nothing ‘similar’ is quite the same, and cross-domain equivalency remains a problematic pursuit in computer vision.

The ‘upstream data buck’ has to stop somewhere, and in this case, that is exactly where it stops – with a human cerebellum making some kind of subjective distinction in order to codify data for an artificial system.

The RAG Trade

Until recently, the inaccuracies arising from under-curated dataset annotations were, perhaps, seen as acceptable collateral damage in the context of the imperfect but still-marketable results obtained from generative AI systems.

Indeed, only this year a study from Singapore concluded that hallucinations – i.e., the occasions when AI systems invent things that undermine our intentions – are inevitable, and bound in with the conceptual architecture of such systems.

To counter this, RAG-based agents – which can ‘verify’ facts through internet searches – are becoming popular in research and applied commercial solutions. However, they add to the resource cost and to the latency of queries; furthermore, novel information applied to a trained model cannot compete with the more intricate and deeply-intertwined connections that characterize the native layers of a trained model.

It would therefore be better if the annotation data that informs these models was significantly less flawed in the first place, even if it cannot be perfect (not least because this activity encroaches into the realm of human subjectivity).

RePOPE

A new paper from Germany highlights the problems that arise from relying on older, widely used datasets, focusing particularly on the accuracy and reliability of their image captions. The researchers’ findings suggest that label errors in benchmarks can mask or misrepresent hallucination in vision-language models.

From the new paper, some examples where the original captions failed to correctly identify objects in the MSCOCO dataset of images. The researchers’ manual revision of the POPE benchmark dataset addresses these shortcomings, demonstrating the cost of saving money on annotation curation. Source: https://arxiv.org/pdf/2504.15707

Imagine a model is shown a picture of a street scene and asked whether there is a bicycle in it. The model answers yes. If the benchmark dataset says there is no bicycle, the model is marked wrong. But if a bicycle is clearly visible in the image, and was simply missed during annotation, then the model’s answer was correct, and the benchmark has failed. Errors like this can accumulate across a dataset, giving a distorted picture of which models are accurate and which are prone to hallucination.

Thus, when incorrect or ambiguous annotations are treated as ground truth, models may appear to hallucinate when they are correct, or else seem accurate when they are not, distorting both the measurement of hallucination and the ranking of model performance, and making it harder to diagnose or address the problem with certainty.

The new paper revisits a widely used benchmark called Polling-based Object Probing Evaluation (POPE), which tests whether vision-language models can correctly say what is or is not in an image.

POPE is based on labels from the influential Microsoft COCO: Common Objects in Context (MSCOCO) dataset, a collection of annotated images which has long been treated as offering a good level of annotation accuracy.

POPE evaluates object hallucination in large vision-language models by reframing the problem as a binary classification task. Rather than parsing generated captions, the system poses simple yes/no questions to the model about whether specific objects are present in an image, using templates such as ‘Is there a <object> in the image?’

Examples of object hallucination in vision-language models. Bolded labels indicate objects marked as present in the original annotations, while red labels show objects hallucinated by the models. The left example reflects a traditional instruction-based evaluation, while the three examples on the right are drawn from different POPE benchmark variants. Source: https://aclanthology.org/2023.emnlp-main.20.pdf

Ground-truth objects (answer: Yes) are paired with sampled non-existent objects (answer: No), chosen through random, frequent (‘popular’), or co-occurrence-based (‘adversarial’) strategies. This setup allows for a more stable, prompt-insensitive evaluation of hallucination without relying on complex rule-based caption analysis.

The authors of the new paper – titled RePOPE: Impact of Annotation Errors on the POPE Benchmark – challenge the assumed accuracy of POPE by rechecking the labels on the benchmark’s images (i.e., MSCOCO) – and finding that a surprising number are incorrect or unclear.

Examples from the 2014 MSCOCO dataset. Source: https://arxiv.org/pdf/1405.0312

These errors change the way models are ranked, with some that originally performed well falling behind when judged against corrected labels.

In tests, the authors evaluated a range of open-weight vision-language models on both the original POPE benchmark and their re-labeled RePOPE version.

According to the paper, the corrected annotations led to notable changes in model rankings, particularly in F1 scores, with several models that performed well under POPE dropping in position under RePOPE.

The authors contend that this shift illustrates the extent to which annotation errors can obscure the actual hallucination behavior of models, and they present RePOPE as a more reliable tool for assessing hallucination vulnerability.

In another example from the new paper, we see how the original POPE captions fail to discern subtle objects, such as a person sitting beside the cabin of a tram in the rightmost photo, or the chair obscured by the tennis player in the second photo from the left.

Methodology and Tests

The researchers re-labeled all the annotations in the original MSCOCO dataset, with two human labelers assigned to each data instance. Where ambiguity as to the quality of the original labels arose (as in the examples below), those results were set aside from the testing round.

Ambiguous cases, where labeling inconsistencies in POPE reflect unclear category boundaries. For instance, a teddy bear labeled as a bear, a motorcycle as a bicycle, or airport vehicles as cars. These cases are excluded from RePOPE due to the subjective nature of such classifications, as well as the inconsistencies in MSCOCO’s original labels.

The paper states:

‘The original annotators missed people in the background or behind glass, the tennis player occludes the ‘chairs’ in the background and the cole slaw contains only a small visible stripe of a carrot.

‘For some objects, the COCO annotations are highly inconsistent, likely due to differing definitions of those objects used by the original annotators. The classification of a ‘teddy bear’ as a ‘bear’, a motorcycle as a motorized ‘bicycle’, or an airport vehicle as a ‘car’ depends on specific definitions, leading to inconsistencies in POPE ground truth annotations. Therefore, we annotate the corresponding image-question pairs as ‘ambiguous’.’

Results of the re-annotation: the positive questions are shared across all three POPE variants. Among those labeled ‘Yes’ in POPE, 9.3 percent were found to be incorrect and 13.8 percent were classified as ambiguous. For the ‘No’ questions, 1.7 percent were mislabeled and 4.3 percent were ambiguous.

The authors evaluated a range of open-weight models on POPE and on RePOPE, across various architectures and model sizes. The models chosen included some of the leading architectures on the OpenVLM leaderboard: InternVL2.5 (8B/26B/38B/78B and 8B-MPO/26B-MPO); LLaVA-NeXT; Vicuna; Mistral 7B; Llama; LLaVA-OneVision; Ovis2 (1B/2B/4B/8B); PaliGemma-3B; and PaliGemma2 (3B/10B).

Initial results: the high error rate in the original positive labels leads to a sharp drop in true positives across all models. False positives vary across subsets, nearly doubling on the random subset, remaining largely unchanged on the popular subset, and showing a slight decrease on the adversarial subset. The relabeling has a major effect on F1-based rankings. Models like Ovis2-4B and Ovis2-8B, which performed well on the popular and adversarial splits in POPE, also rise to the top on the random subset under RePOPE. Please refer to the source PDF for better resolution.

The results graphs above illustrate how the number of true positives and false positives changes after correcting the labels in the benchmark.

True positives fell across all models, showing that they were often credited for correct answers when those answers were only correct under faulty labels, while false positives followed a more varied pattern.

On the ‘random’ version of POPE, false positives nearly doubled for many models, indicating that a significant number of objects flagged as hallucinations were actually present in the images but had been missed in the original annotations. In this case, many supposed model errors were in fact dataset labeling errors.

For the ‘adversarial’ version of POPE, where questions were based on objects that frequently co-occur, false positives decreased. This likely reflects a higher probability that the supposedly absent object was actually in the image but left unlabeled.

Although these shifts affected precision and recall, model rankings stayed relatively stable for both metrics.

The F1 score – POPE’s main evaluation measure – was far more sensitive to the label corrections. On the random subset, models that ranked near the top under the original labels, such as InternVL2.5-8B and -26B, dropped to the bottom when scored with RePOPE. Others, such as Ovis2-4B and -8B, rose to the top.
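For reference, F1 is the harmonic mean of precision and recall, both of which depend on the true-positive and false-positive counts that the relabeling changed:

\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}

Because each corrected label shifts the TP and FP tallies simultaneously, even modest corrections can reorder models on F1 while rankings based on precision or recall alone barely move.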

A similar pattern emerged in the accuracy scores, though the authors note that these may now be biased, since the corrected dataset contains an uneven number of positive and negative examples.

The authors argue that the strong impact of annotation errors on benchmark outcomes underscores the need for high-quality data. To support more reliable evaluation of object hallucination, they have released the corrected labels on GitHub.

However, they note that this re-labeling does not fully address the benchmark’s saturation, since many models still achieve true positive and true negative rates above 90%. They suggest that additional benchmarks, such as DASH-B, which uses a more challenging set of negative examples, should be used alongside RePOPE.

Conclusion

This particular experiment was possible because of the very small scale of the dataset involved. Proving the same hypothesis on hyperscale datasets would involve working on very limited fragments of the data; in highly diverse large datasets, it might prove near-impossible to isolate statistically representative and semantically coherent groupings – potentially skewing the results.

Even if it were possible, what remedy would there be under the current state of the art? The argument moves back inevitably towards the need for better and more copious human annotation.

In this regard, ‘better’ and ‘more copious’ exist as separate problems in their own right, since one can obtain a greater volume of annotations through race-to-the-bottom economies such as Amazon Mechanical Turk (AMT). Clearly, this potentially exploitative sub-economy frequently leads to inferior results.

Alternatively, one could farm out annotation tasks to economic regions where the same expenditure would yield a larger quantity of annotations. However, the further removed the annotator is from the intended use case of the model their labels will shape, the less likely it is that the resulting model will align with the needs or expectations of the target domain.

This therefore remains one of the most persistent and unresolved challenges in the economics of machine learning development.

 

First published Wednesday, April 23, 2025

ios – Error with Flutter 3.3.2 flutter run on Mac M4 with CocoaPods 1.15.0


I have a project on Flutter 3.3.2 and CocoaPods 1.16.2 on a Mac M4.
I can run the project on Android, but I can’t run it on the iOS simulator.
I tried installing CocoaPods 1.15.0 with Ruby 2.7.7,
but I still get the same error when I run flutter run:

flutter run
Changing current working directory to:
/Users/abdulrazzak/Downloads/projectsapp/flutter_application_2
Launching lib/main.dart on iPhone 16 Pro Max in debug mode…
Running Xcode build…
Xcode build done. 34.7s
Failed to build iOS app
Error output from Xcode build:

-- xcodebuild: WARNING: Using the first of multiple matching destinations:
{ platform:iOS Simulator, id:C802A772-DD95-47BE-B5C0-3AC4B1A75A51, OS:18.2,
name:iPhone 16 Pro Max }
{ platform:iOS Simulator, id:C802A772-DD95-47BE-B5C0-3AC4B1A75A51, OS:18.2,
name:iPhone 16 Pro Max }
** BUILD FAILED **

Xcode’s output:

Writing result bundle at path:
/var/folders/mb/rk_7njyd4ydf2jc9y101fsnc0000gn/T/flutter_tools.Dq7der/flutter_ios_build_temp_dirqUNOwt/temporary_xcresult_bundle

Oops; flutter has exited unexpectedly: "FileSystemException: Cannot delete
file, path="/Users/abdulrazzak/Documents/flutter_application_2/build/ios/Debug-iphonesimulator/Flutter.framework/Flutter" (OS Error: Operation not permitted, errno = 1)".
A crash report has been written to
/Users/abdulrazzak/Downloads/projectsapp/flutter_application_2/flutter_01.log.
Failed to package
/Users/abdulrazzak/Downloads/projectsapp/flutter_application_2.
Command PhaseScriptExecution failed with a nonzero exit code
/Users/abdulrazzak/Downloads/projectsapp/flutter_application_2/ios/Runner.xcodeproj: warning: The iOS Simulator deployment target
'IPHONEOS_DEPLOYMENT_TARGET' is set to 11.0, but the range of supported
deployment target versions is 12.0 to 18.2.99. (in target 'Runner' from
project 'Runner')
warning: Run script build phase 'Run Script' will be run during every build
because it does not specify any outputs. To address this issue, either add
output dependencies to the script phase, or configure it to run in every
build by unchecking "Based on dependency analysis" in the script phase. (in
target 'Runner' from project 'Runner')
warning: Run script build phase 'Thin Binary' will be run during every build
because it does not specify any outputs. To address this issue, either add
output dependencies to the script phase, or configure it to run in every
build by unchecking "Based on dependency analysis" in the script phase. (in
target 'Runner' from project 'Runner')

Couldn’t build the application for the simulator.
Error launching application on iPhone 16 Pro Max.

How Agile Development Implements the One Source of Truth Philosophy


In modern software development, agility isn’t a matter of speed — it’s about being adaptable, open, and consistent. While much of the attention in Agile development is focused on iterations, sprints, and continuous delivery, documentation often falls behind.

But what if the documentation could keep up with the code? That’s exactly what Documentation as Code is about.

At its heart, this approach treats documentation like software code: it’s versioned, reviewed, tested, and deployed using the same tools and workflows.

This concept is in close alignment with the “Single Source of Truth” philosophy, which attempts to bring information together in a project and solidify it so it doesn’t create confusion, redundancy, and miscommunication.

Let’s dive into what Documentation as Code (Docs as Code) is and how it enables this philosophy in Agile settings.

What Is Documentation as Code?

Documentation as Code refers to the practice of writing and maintaining documentation using the same processes and tools that are used for writing code.


Instead of storing technical content in external systems such as Word documents or wikis, Docs as Code stores everything in a version control system, usually Git.

This means that documentation:

  • Lives in the same repositories as the source code.
  • Is written in lightweight markup languages such as Markdown or AsciiDoc.
  • Follows the same branching, pull request, and code review workflows.
  • Is integrated into the CI/CD pipeline, where documentation can be automatically linted, tested, and deployed (see the sketch after this list).
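As a minimal sketch of that last point, a docs job in the CI pipeline can boil down to two shell commands (the tools named here, markdownlint and MkDocs, are illustrative choices rather than a prescription):

# Hypothetical CI step: lint the Markdown sources, then build the site.
npx markdownlint-cli "docs/**/*.md"   # fail on Markdown style and syntax issues
mkdocs build --strict                 # treat warnings, such as broken internal links, as errors

If either command fails, the pull request that touched the docs is blocked, exactly as a failing unit test would block a code change.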

The key principles of the Documentation as Code philosophy work together to keep documentation precise, up-to-date, and easy to manage.

First of all, by using version control, every change to the documentation is tracked and can be rolled back if needed, just like with code.

Automation, in turn, helps simplify the process — builds, previews, and error checks happen automatically, which means less manual work and quicker delivery.

Further, because the same toolsets are used as when creating custom software, collaboration is much easier. Developers, product managers, and technical writers can all contribute according to established workflows.

The One Source of Truth Philosophy

One Source of Truth means keeping information in one single location that everyone on the team can refer to.

It’s easy in Agile development for documentation to get unruly — some of it ends up on wikis, some on shared drives, and some buried in old email threads.

With Documentation as Code, in turn, the Single Source of Truth becomes the codebase itself. The documentation lives alongside the code, in the same version control repository, and follows the same workflow.

Put simply, any alteration to the code can be accompanied by the matching documentation update, and everyone automatically knows about it.

By linking code and documentation together, teams avoid duplication, reduce errors, and facilitate communication. This way, it’s much easier to trust the documentation — because it gets updated just like the code, and by the same people.

Benefits of Documentation as Code in Agile Software Development

Overall, Documentation as Code offers some compelling benefits in Agile development, helping teams work faster, smarter, and with fewer misunderstandings.


First, it keeps documentation up to date. Because it is stored and maintained with the code, any changes can be made and reviewed at once. No waiting for someone to revise a separate document or wiki down the line.

Second, it improves teamwork. All contributors, from developers and testers to writers and product managers, can work on the documentation using the same tools they use to code.

Third, it affects code quality. Writing and validating technical documentation concurrently with code forces developers to pay more attention to how their code behaves, which tends to lead to better design with fewer bugs.

Fourth, developers can add automated checks and tests to the CI/CD pipeline. They can automatically find broken links, misspellings, or obsolete references in the docs, the same way they would find them in code.

Finally, numerous successful companies are already practicing this approach. The GitLab and Kubernetes projects have shown that folding documentation into the development process leads to more stable, useful, and easier-to-use documentation.

How to Implement Documentation as Code

Getting Documentation as Code live isn’t that hard, but it will require you to change the way your team operates. The best advice here is to start small and gradually move toward making Docs as Code part of your process.

1. Start Small

To begin with, choose a small project or a specific module to document using Docs as Code. This step allows your team to get acquainted with the process without feeling overwhelmed.

Then write documentation for one or two features or components and store it in the same Git repository as your code. This way, you’ll get a feel for how documentation can be treated like code.

2. Use the Right Tools

Next, make sure you have the right tools at hand. For version control, you’ll want to use a system like Git (with GitHub, GitLab, or Bitbucket).

For writing the documentation itself, you can use simple markup languages, such as Markdown, AsciiDoc, or reStructuredText.

Further, consider applying static site generators like MkDocs, Docusaurus, or Hugo to turn your Markdown files into a professional, user-oriented documentation website.

Finally, integrate your documentation into your CI/CD pipeline (e.g., GitHub Actions, GitLab CI, or CircleCI) to automate formatting checks, spelling checks, and deployment.
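As an illustration, with MkDocs (one of the generators mentioned above) the whole loop from Markdown source to a browsable site takes only a few commands. This is a sketch, assuming Python and pip are installed; the project name is a placeholder:

pip install mkdocs      # install the static site generator
mkdocs new my-project   # scaffold mkdocs.yml and a docs/ folder with an index.md
cd my-project
mkdocs serve            # preview the site locally with live reload
mkdocs build            # render the static site into the site/ directory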


3. Incorporate Documentation into Your Workflow

Once you have the tools set up, it’s time to merge documentation into your development workflow. Put simply, this means putting documentation changes through the same pipeline as code changes.

If a developer writes code, they must also update or add the corresponding documentation in the same commit. To keep all the parts organized, it’s essential to create pull requests for documentation changes and review them just like you would review code changes.
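In Git terms, that workflow might look like the following sketch (the branch and file names are hypothetical):

git checkout -b feature/rate-limits        # one branch for the code and its docs
# ...edit src/rate_limiter.py and docs/rate-limits.md together...
git add src/rate_limiter.py docs/rate-limits.md
git commit -m "Add rate limiting and document its configuration"
git push origin feature/rate-limits        # then open one pull request covering both

A reviewer then sees the code change and its documentation side by side in a single diff.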

4. Educate Your Team

It’s also important to educate your team on how Docs as Code works. This means explaining the tools you’ll use, the workflows, and the benefits of having documentation in version control.

Encourage developers, product managers, and technical writers to play active roles in contributing to the documentation. Collective ownership will make documentation a team effort, not the sole responsibility of one person.

5. Avoid Common Pitfalls

At the same time, be careful not to fall into common traps when going Docs as Code. One mistake to avoid is over-engineering the documentation process at the outset. Instead, keep things simple first and then add increasingly more advanced tools or processes.

Another mistake is forgetting documentation after the initial setup is complete. To keep your docs up to date, include them in the CI/CD pipeline and encourage regular review. A quick reminder: if you keep documentation a low priority, it will simply fall behind.

6. Adopt Continuous Improvement

Finally, remember that Docs as Code is a continuous process. The more your team gets used to the workflow, the more you will be able to streamline and refine it further.

You can, for instance, add even better automation, improve the style, or even add user feedback to further enhance the documentation. What is essential is to treat it as a long-term investment that pays off in the form of better collaboration and a better development process.

Challenges and Considerations of Creating Documentation in Software Development

One of the biggest challenges is keeping the documentation up to date while the code is being edited. To solve this, the documentation needs to be updated together with the code.


Another problem is duplication. Programmers like to comment in the code how things are done, and often this repeats the documentation.

While code comments are important, they should focus on technical details in the code, and the documentation should provide clear, high-level information for users.

Adopting Docs as Code also requires transforming how the team works, which can be difficult for people used to traditional documentation. Some team members might resist at first, but with time, once they see the benefits, it becomes painless.

Although automation helps, it can’t do everything. Tools can search for small errors, but they can’t tell whether the documentation is unambiguous or useful. That takes human involvement. Reviewing the documentation regularly ensures that it’s worthwhile and accurate.

Finally, as your project progresses, the system documentation can become outdated or disconnected from where the product is. To prevent this, it’s important to have regular reviews and updates.

Having a person responsible for keeping an eye on specific areas of the documentation can also keep everything accurate and true to life.

Challenge | Solution
Keeping Documentation Up to Date | Update code and documentation together.
Duplication of Information | Keep code comments technical; documentation should focus on high-level information.
Adopting Docs as Code | Transition gradually, showing benefits over time.
Automation Limitations | Regular human reviews to ensure clarity and accuracy.
Outdated Documentation | Regular reviews and updates to align with the latest product version.
Lack of Ownership | Assign team members to oversee specific areas of documentation.

Real-World Use Cases of Leveraging Software Documentation

To get an idea of how Documentation as Code operates in real environments, let’s analyze some use cases of companies and projects that have successfully implemented this strategy.

The case studies below illustrate how Docs as Code helps enhance collaboration, maintain relevant documentation, and make the development process more efficient.

1. GitLab: A Leader in Docs as Code

GitLab, a well-known DevOps and CI/CD tool, is a great example of a company that has embraced Documentation as Code.

Its docs are stored in the same Git repository as the code, and the entire team works together to keep them up to the minute.

The company uses GitLab’s own CI/CD pipeline to automatically build and deploy the documentation every time a change is pushed to the codebase.

Thanks to this arrangement, all project members can easily access and leverage the documentation. And since it is part of the development process, GitLab sidesteps the classic problem of documentation growing outdated or forgotten.

It also unites the entire team—developers, technical writers, and everyone else can contribute to the documentation in the same way they contribute to the code.

2. Kubernetes: Open Source Success

Kubernetes is another great example of how Documentation as Code is practiced.

All user documentation and API references for Kubernetes are kept in the same Git repository as the code. This means that whenever the software is changed, it is easy to update the documentation too.

They also employ automation to check for problems like broken links or obsolete code examples.
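A toy version of that kind of link check can be built from plain shell tools. This sketch (not the Kubernetes project’s actual tooling) extracts the URLs from a docs folder and probes each one:

# Collect the unique http(s) URLs from the Markdown sources, then probe each.
grep -hoE 'https?://[^) ]+' docs/*.md | sort -u | while read -r url; do
  curl -fsS -o /dev/null "$url" || echo "BROKEN: $url"
done

Real projects typically reach for a dedicated link checker, but the principle is the same: the docs are machine-checkable because they live in the repository as plain text.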

Overall, thanks to this process, numerous contributors continuously refine the code and documentation at regular intervals, ensuring that every piece of work is current and reliable.

3. Stripe: Keeping Documentation in Sync with Code

Stripe, an established payment company, also uses Documentation as Code to keep its API documentation up to date.

As their team adjusts their API, they change their documentation along with the code within the same Git repository. This way, all edits are instantly reflected in the official docs.

By including documentation in the same process as coding, Stripe avoids the inconvenience of maintaining separate documentation and provides users with the latest and best information at all times.

The Role of Technical Writers in Docs as Code

In the Docs as Code approach, technical writers hold an extremely important place. While the developers write down the technical information, technical writers make sure that the information is easy to read, adequately structured, and understandable for everyone.


Technical writers transform complicated technical language into material anyone can understand, and this is especially vital in Agile projects where both software and documentation get developed hand-in-hand.

Technical writers also work with developers, using the same Git repository so that the documentation is always up to date with the latest code.

Their knowledge makes the documentation accurate, well-organized, and useful. They review updates, collect suggestions, and use automation tools to catch small errors.

Often, technical writers bridge the gap between the technical side of development and the needs of users, making the whole documentation process a success.

FAQ

What is “Documentation as Code”?

Documentation as Code is the practice of writing and maintaining documentation in the same way developers write and maintain code — through automation, Git, and version control. It keeps the documentation up-to-date, reviewed, and versioned just like the software itself.

Do developers write all the documentation in Docs as Code?

Not necessarily. While developers will contribute, technical writers are still necessary. They make the information well organized, readable, and clear to users, even when it’s written in a code repository.

Why is it essential to keep documentation in the same Git repository as code?

Keeping code and documentation together means that both of them can be updated at the same time. It prevents teams from relying on stale information and ensures users get the latest and most accurate documentation all the time.