
Hacker faces 81-month prison sentence for faking his death to avoid child support payments

Facepalm: A federal court has sentenced a deadbeat dad to 81 months in prison for hacking government systems to fake his death. A grand jury indicted the man in July 2023 on multiple charges of computer fraud, aggravated identity theft, and bank fraud. He admitted to accessing government systems and registering a fraudulent death certificate to avoid paying his back child support.

The US Attorney's Office for the Eastern District of Kentucky prosecuted Somerset resident Jesse Kipf for cybercrimes targeting private and governmental networks. It began when he manipulated death records in Hawaii's death registry system, falsely listing himself as deceased using a forged digital signature to evade paying child support. He also sold access to these networks on the dark web. The US Attorney said Kipf's crimes resulted in nearly $200,000 in damages, disrupted critical operations, and compromised personal information.

“This scheme was a cynical and destructive effort, based in part on the inexcusable goal of avoiding his child support obligations,” said United States Attorney Carlton S. Shier, IV. “Fortunately, through the excellent work of our law enforcement partners, this case will serve as a warning to other cyber criminals, and he will face the consequences of his disgraceful conduct.”

An FBI investigation revealed that Kipf accessed and attempted to sell data from networks that included a private company and a state government registry. Kipf planned to use the stolen data, which included personal identifying information, for further identity theft and fraud. His scheme's magnitude and sophistication indicated he has significant technical skills.

Authorities also charged Kipf with aggravated identity theft, which carries severe penalties because of the far-reaching consequences for victims. The indictment additionally alleged that Kipf attempted to use the stolen identities to fill out multiple bank loan applications. According to a US Attorney's Office press release, his actions posed significant risks to individual victims and to the integrity of financial and governmental networks.

“Working in collaboration with our law enforcement partners, this defendant, who hacked a variety of computer systems and maliciously stole the identity of others for his own personal gain, will now pay the price,” said FBI Special Agent in Charge Michael E. Stansbury of the Louisville Field Office. “Victims of identity theft face lifelong impact and for that reason, the FBI will pursue anyone foolish enough to engage in this cowardly behavior.”

The court sentenced Kipf to 81 months in federal prison, adding that he must serve at least 85 percent of his time before becoming eligible for release. Following his incarceration, Kipf faces three years of closely supervised release. His penalties could have been much higher had he not copped a plea.

Assistant US Attorney Kate K. Smith reiterated her office's commitment to pursuing justice in cybercrime and identity theft cases, highlighting the importance of safeguarding digital and personal information in an increasingly interconnected world.


New Malware PG_MEM Targets PostgreSQL Databases for Crypto Mining


Aug 22, 2024 | Ravie Lakshmanan | Database Security / Cryptocurrency


Cybersecurity researchers have unpacked a new malware strain dubbed PG_MEM that is designed to mine cryptocurrency after brute-forcing its way into PostgreSQL database instances.

“Brute-force attacks on Postgres involve repeatedly attempting to guess the database credentials until access is gained, exploiting weak passwords,” Aqua security researcher Assaf Morag said in a technical report.

“Once accessed, attackers can leverage the COPY … FROM PROGRAM SQL command to execute arbitrary shell commands on the host, allowing them to perform malicious activities such as data theft or deploying malware.”


The attack chain observed by the cloud security firm involves targeting misconfigured PostgreSQL databases to create an administrator role in Postgres and exploiting a feature called PROGRAM to run shell commands.

In addition, a successful brute-force attack is followed by the threat actor conducting initial reconnaissance and executing commands to strip the "postgres" user of superuser permissions, thereby restricting the privileges of other threat actors who might gain access via the same method.

The shell commands are responsible for dropping two payloads from a remote server ("128.199.77[.]96"), namely PG_MEM and PG_CORE, which are capable of terminating competing processes (e.g., Kinsing), setting up persistence on the host, and ultimately deploying the Monero cryptocurrency miner.

This is accomplished by making use of a PostgreSQL command called COPY, which allows for copying data between a file and a database table. It specifically weaponizes a parameter known as PROGRAM, which makes the server run the passed command and write the program's execution results to the table.
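To make the abused primitive concrete, here is a minimal, lab-only sketch of how a client with sufficiently privileged (stolen) credentials could invoke COPY ... FROM PROGRAM through an ordinary Python client library. The connection details and table name are hypothetical and the command shown ("id") is harmless; the point is simply that any role permitted to run this statement can execute shell commands as the database's OS user.

    import psycopg2  # assumes the psycopg2 client library is installed

    # Hypothetical connection details for an isolated lab instance.
    conn = psycopg2.connect(host="127.0.0.1", dbname="postgres",
                            user="postgres", password="guessed-password")
    conn.autocommit = True

    with conn.cursor() as cur:
        # A scratch table to capture the command's output, mirroring how
        # PG_MEM reads results back after running a program on the host.
        cur.execute("CREATE TEMP TABLE cmd_out (line text)")
        # COPY ... FROM PROGRAM runs the passed string as a shell command
        # under the postgres OS account and loads its stdout into the table.
        cur.execute("COPY cmd_out FROM PROGRAM 'id'")
        cur.execute("SELECT line FROM cmd_out")
        for (line,) in cur.fetchall():
            print(line)

    conn.close()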

“While [cryptocurrency mining] is the main impact, at this point the attacker can also run commands, view data, and control the server,” Morag said.

“This campaign is exploiting internet-facing Postgres databases with weak passwords. Many organizations connect their databases to the internet; a weak password is the result of a misconfiguration and a lack of proper identity controls.”
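Since the campaign hinges on weak credentials and over-privileged roles, a quick audit along the following lines may help defenders spot exposure. This is a minimal sketch, assuming a psycopg2 connection with sufficient catalog privileges; the pg_hba_file_rules view requires PostgreSQL 10 or later, and the pg_execute_server_program role exists from PostgreSQL 11 onward.

    import psycopg2

    conn = psycopg2.connect(host="127.0.0.1", dbname="postgres", user="postgres")

    with conn.cursor() as cur:
        # Roles with superuser rights: each one can reach COPY ... FROM PROGRAM.
        cur.execute("SELECT rolname FROM pg_roles WHERE rolsuper")
        print("superuser roles:", [r[0] for r in cur.fetchall()])

        # Roles granted pg_execute_server_program, which also permits
        # running host commands without full superuser rights.
        cur.execute("""
            SELECT m.rolname
            FROM pg_auth_members am
            JOIN pg_roles g ON g.oid = am.roleid
            JOIN pg_roles m ON m.oid = am.member
            WHERE g.rolname = 'pg_execute_server_program'
        """)
        print("can run server programs:", [r[0] for r in cur.fetchall()])

        # Authentication rules that skip password checks entirely.
        cur.execute("SELECT line_number, type, auth_method FROM pg_hba_file_rules "
                    "WHERE auth_method = 'trust'")
        for row in cur.fetchall():
            print("trust rule:", row)

    conn.close()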




Postman: An Active Metadata Pioneer – Atlan



Unlocking Fast, Confident, Data-driven Decisions with Atlan

The Active Metadata Pioneers series features Atlan customers who have completed an extensive evaluation of the Active Metadata Management market. Paying forward what you have learned to the next data leader is the true spirit of the Atlan community! So they're here to share their hard-earned perspective on an evolving market, what makes up their modern data stack, innovative use cases for metadata, and more.

In this installment of the series, we meet Prudhvi Vasa, Analytics Leader at Postman, who shares the history of Data & Analytics at Postman, how Atlan demystifies their modern data stack, and best practices for measuring and communicating the impact of data teams.

This interview has been edited for brevity and clarity.


Would you mind introducing yourself, and telling us how you came to work in Data & Analytics?

My analytics journey started right out of college. My first job was at Mu Sigma. At the time, it was the world's largest pure-play Business Analytics Services company. I worked there for two years supporting a leading US retailer, where projects varied from general reporting to prediction models. Then, I went for my higher studies here in India, graduated from IIM Calcutta with my MBA, then worked for a year with one of the largest companies in India.

As soon as I finished one year, I got an opportunity with an e-commerce company. I was interviewing for a product role with them and they said, "Hey, I think you have a data background. Why don't you come and lead Analytics?" My heart was always in data, so for the next five years I was handling Data & Analytics for a company called MySmartPrice, a price comparison website.

Five years is a long time, and that's when my time with Postman began. I knew the founder from college and he reached out to say, "We're growing, and we want to build our data team." It sounded like a very exciting opportunity, as I had never worked in a core technology company until then. I thought this would be a great challenge, and that's how I joined Postman.

COVID hit before I joined, and we were all discovering remote work and how to adjust to the new normal, but it worked out well in the end. It's been three and a half years now, and we have since grown the team from four or five people to almost 25 members.

Back in the beginning, we were running somewhat of a service model. Now we're properly embedded across the organization, and we have a great data engineering team that owns the end-to-end movement of data from ingestion and transformations to reverse ETL. Most of it is done in-house. We don't rely on a lot of tooling for the sake of it. Then, once the engineers provide the data support and the tooling, the analysts take over.

The mission for our team is to enable every function with the power of data and insights, quickly and with confidence. Wherever somebody needs data, we're there, and whatever we build, we try to make it last forever. We don't want to run the same query again. We don't want to answer the same question again. That's our biggest motto, and that's why, even though the company scales much faster than our team, we're able to support the company without scaling linearly along with it.

It's been almost 12 years for me in this industry, and I'm still excited to make things better every single day.

Could you describe Postman, and how your team supports the organization and mission?

Postman is a B2B SaaS company. We're the complete API Development Platform. Software developers and their teams use us to build their APIs, collaborate on building their APIs, test their APIs, and mock their APIs. People can discover APIs and share APIs. With anything related to APIs, we want people to come to Postman. We've been around since 2012, starting as a side project, and there was no looking back after that.

As for the data team, from the start, our founders had a neat idea of how they wanted to use data. At every point in the company's journey, I'm proud to say data played a very pivotal role, answering important questions about our target market, the size of our target market, and how many people we could reach. Data helped us value the company, and when we launched new products, we used data to understand the right usage limits for each of the products. There isn't a single place I can think of where data hasn't made an impact.

For example, we used to have paid plans where, in the event that somebody didn't pay, we would wait for one year before we wrote it off. But when we looked at the data, we realized that after six months, nobody returned to the product. So we were waiting six extra months before writing them off, and we decided to set it to six months.

Or, let's say we have a pricing update. We use data to answer questions about how many people will be happy or unhappy about it, and what the total impact would be.

The most impactful thing for our product is that we have analytics built around GitHub, and we can understand what people are asking us to build and where people are facing problems. Every day, Product Managers get a report that tells them where people are facing problems, which tells them what to build, what to solve, and what to respond to.

When it comes to how data has been used at Postman, I'd say that if you can think of a way to use it, we've done it.

The important thing behind all this is that we always ask about the purpose of a request. If you come to us and say, "Hey, can I get this data?" then nobody is going to respond to you. We first need to understand the analysis impact of a request, and what people are going to do with the data once we've given it to them. That helps us actually answer the question, and helps them answer it better, too. They might even realize they're not asking the right question.

So, we want people to think before they come to us, and we encourage that a lot. If we just build a model and give it to someone without knowing what's going to happen with it, a lot of analysts will be disheartened to see their work go nowhere. Impact-driven Analytics is at the heart of everything we do.

What does your stack look like?

Our data stack starts with ingestion, where we have an in-house tool called Fulcrum built on top of AWS. We also have a tool called Hevo for third-party data. If we want data from LinkedIn, Twitter, or Facebook, or from Salesforce or Google, we use Hevo, because we can't keep up with updating our APIs to read from 50 separate tools.

We follow ELT, so we ingest all raw data into Redshift, which is our data warehouse, and once the data is there, we use dbt as a transformation layer. So analysts come and write their transformation logic inside dbt.

After transformations, we have Looker, which is our BI tool where people can build dashboards and query. In parallel to Looker, we also have Redash as another querying tool, so if engineers or people outside the team want to do some ad-hoc analysis, we support that, too.

We also have reverse ETL, which is again home-grown on top of Fulcrum. We send data back into places like Salesforce or email marketing campaign tools. We also send a lot of data back to the product, covering a lot of recommendation engines and the search engine within the product.

On top of all that, we have Atlan for data cataloging and data lineage.
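As a rough illustration of the reverse-ETL step described above: the sketch below reads a modeled table from the warehouse and pushes rows to a downstream API. Everything here is hypothetical (Fulcrum is home-grown and not public, and the table and endpoint names are invented); Redshift is queried via psycopg2, which works because Redshift speaks the PostgreSQL wire protocol.

    import psycopg2
    import requests

    # Hypothetical warehouse connection and downstream endpoint.
    WAREHOUSE_DSN = "host=redshift.example.com port=5439 dbname=analytics user=etl"
    CRM_ENDPOINT = "https://crm.example.com/api/accounts/enrich"

    def sync_account_scores() -> None:
        """Copy modeled account scores from the warehouse to a downstream tool."""
        conn = psycopg2.connect(WAREHOUSE_DSN)
        try:
            with conn.cursor() as cur:
                # A dbt-modeled table is the usual source for reverse ETL.
                cur.execute("SELECT account_id, usage_score FROM marts.account_scores")
                for account_id, usage_score in cur.fetchall():
                    requests.post(
                        CRM_ENDPOINT,
                        json={"account_id": account_id, "usage_score": usage_score},
                        timeout=10,
                    ).raise_for_status()
        finally:
            conn.close()

    if __name__ == "__main__":
        sync_account_scores()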

Could you describe Postman's journey with Atlan, and who's getting value from using it?

As Postman was growing, the most frequent questions we received were "Where is this data?" or "What does this data mean?" and it was taking a lot of our analysts' time to answer them. That is the reason Atlan exists. Starting with onboarding, we began by putting all of our definitions in Atlan. It was a one-stop solution where we could go to understand what our data means.

Later on, we started using data lineage, so if we learned something was broken in our ingestion or transformation pipelines, we could use Atlan to figure out what assets were impacted. We're also using lineage to discover all the personally identifiable information in our warehouse and determine whether we're masking it appropriately or not.

As far as personas, there are two that use Atlan heavily: Data Analysts, who use it to discover assets and keep definitions up-to-date, and Data Engineers, who use it for lineage and taking care of PII. The third persona that we could see benefitting are all the Software Engineers who query with Redash, and we're working on moving people from Redash over to Atlan for that.

What's next for you and the team? Anything you're excited about building in the coming year?

I was at dbt Coalesce a few months back, and I was thinking about this. We have an important pillar of our team called DataOps, and we get daily reports on how our ingestions are going.

We can understand if there are anomalies, like our volume of data growing, the time to ingest data, and whether our transformation models are taking longer than expected. We can also understand if we have any broken content in our dashboards. All of this is built in-house, and I saw a lot of new tools coming up to address it. So on one hand, I was proud we did that, and on the other, I was excited to try some new tools.

We've also introduced a caching layer, because we were finding Looker's UI to be a little non-performant and we wanted to improve dashboard loading times. This caching layer pre-loads a lot of dashboards, so whenever someone opens one, it's readily available to them. I'm really excited to keep bringing down dashboard load times every week, every month.

There are also a lot of LLMs that have arrived. To me, the biggest problem in data is still discovery. A lot of us are trying to solve it, not just at an asset level, but at an answer or insight level. In the future, what I hope for is a bot that can answer questions across the organization, like "Why is my number going down?". We're trying out two new tools for this, but we're also building something internally.

It's still very nascent, and we don't know whether it will be successful or not, but we want to improve users' experience with the data team by introducing something automated. A human may not always be able to answer, but if I can train something to answer when I'm not there, that would be great.

Your team seems to know its impact very well. What advice would you give your peer teams to do the same?

That's a very tough question. I'll divide this into two pieces: Data Engineering and Analytics.

The success of Data Engineering is more easily measurable. I have quality, availability, process performance, and performance metrics.

Quality metrics measure the "correctness" of your data, and how you measure it depends on whether you follow processes. If you have Jira, you have bugs and incidents, and you track how fast you're closing bugs or fixing incidents. Over time, it's important to define a quality metric and see whether your score improves or not.

Availability is similar. Whenever people ask for a dashboard or for a query, are your resources available to them? If they're not, then measure and track this, seeing whether you're improving over time.

Process performance addresses the time to resolution when somebody asks you a question. That's a very important one, because it's direct feedback. If you're late, people will say the data team isn't doing a good job, and this is always fresh in their minds if you're not answering.

Last is performance. Your dashboard might be wonderful, but it doesn't matter if it can't help someone when they need it. If somebody opens a dashboard and it doesn't load, they walk away, and it doesn't matter how good your work was. So for me, performance means how quickly a dashboard loads. I'd measure the time a dashboard takes to load, and let's say I have a target of 10 seconds. I'll see whether everything loads in that time, and which parts of it are loading.
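As a rough sketch of that last metric, here is a minimal load-time probe. It assumes the requests library and a hypothetical list of dashboard URLs; real BI tools render asynchronously, so a browser-based check (e.g., Playwright) would be closer to what a user actually experiences.

    import statistics
    import time

    import requests  # assumed available; any HTTP client would do

    # Hypothetical dashboard endpoints to probe.
    DASHBOARDS = [
        "https://bi.example.com/dashboards/revenue",
        "https://bi.example.com/dashboards/signups",
    ]
    TARGET_SECONDS = 10.0  # the load-time budget discussed above

    def probe(url: str) -> float:
        """Return seconds until the dashboard endpoint fully responds."""
        start = time.monotonic()
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        return time.monotonic() - start

    timings = {url: probe(url) for url in DASHBOARDS}
    for url, seconds in timings.items():
        status = "ok" if seconds <= TARGET_SECONDS else "over budget"
        print(f"{seconds:6.2f}s  {status}  {url}")

    print("median load time:", round(statistics.median(timings.values()), 2), "s")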

On the Analytics side, an easy way to measure is to send out an NPS form and see whether people are happy with your work or not. But the other way requires you to be very process-oriented, and to use tickets.

Once every quarter, we go back to all the analytics tickets we've solved and determine the impact they've created. I like to see how many product changes happened because of our analysis, and how many business decisions were made based on our data.

For insight generation, we could then say we were part of the decision-making process for two sales decisions, two business operations decisions, and three product decisions. How you measure this is up to you, but it's important that you measure it.

If you're working in an organization that's new, or hasn't had data teams for a long time, what happens is that, more often than not, you do 10 analyses but only one of them impacts the business. Your hypotheses will be proven wrong more often than they're right. You can't just say "I did this one thing last quarter," so documenting and having a process helps. You need to be able to say "I tried 10 hypotheses, and one worked," versus saying "I think we just had one hypothesis that worked."

Try to measure your work, and document it well. You and your team can be proud of yourselves, at the very least, but you can also communicate everything you tried and contributed to.

Photo by Caspar Camille Rubin on Unsplash

Cost-Effective AI Infrastructure: 5 Lessons Learned


As organizations across sectors grapple with the opportunities and challenges presented by using large language models (LLMs), the infrastructure needed to build, train, test, and deploy LLMs presents its own unique challenges. As part of the SEI's recent investigation into use cases for LLMs within the Intelligence Community (IC), we needed to deploy compliant, cost-effective infrastructure for research and development. In this post, we describe current challenges and the state of the art of cost-effective AI infrastructure, and we share five lessons learned from our own experiences standing up an LLM for a specialized use case.

The Challenge of Architecting MLOps Pipelines

Architecting machine learning operations (MLOps) pipelines is a difficult process with many moving parts, including data sets, workspace, logging, compute resources, and networking, all of which must be considered during the design phase. Compliant, on-premises infrastructure requires advance planning, which is often a luxury in rapidly advancing disciplines such as AI. By splitting duties between an infrastructure team and a development team who work closely together, project requirements for conducting ML training and deploying the resources needed to make the ML system succeed can be addressed in parallel. Splitting the duties also encourages collaboration on the project and reduces project strain, such as time constraints.

Approaches to Scaling an Infrastructure

The current state of the art is a multi-user, horizontally scalable environment located on an organization's premises or in a cloud ecosystem. Experiments are containerized or stored in a way that makes them easy to replicate or migrate across environments. Data is stored in individual components and migrated or integrated when necessary. As ML models become more complex and as the amount of data they use grows, AI teams may need to increase their infrastructure's capabilities to maintain performance and reliability. Specific approaches to scaling can dramatically affect infrastructure costs.

When deciding how to scale an environment, an engineer must consider cost, the speed of a given backbone, whether a given project can leverage certain deployment schemes, and overall integration goals. Horizontal scaling is the use of multiple machines in tandem to distribute workloads across all available infrastructure. Vertical scaling provides more storage, memory, graphics processing units (GPUs), and so on to improve system productivity while reducing cost. This type of scaling has specific application to environments that have already scaled horizontally or that see a lack of workload volume but require better performance.

In general, both vertical and horizontal scaling can be cost effective, with a horizontally scaled system having a more granular level of control. In either case it is possible, and highly recommended, to identify a trigger function for activating and deactivating costly computing resources and to implement a system under that function that creates and destroys computing resources as needed, minimizing the overall time of operation. This strategy helps reduce costs by avoiding overburn and idle resources, which you are otherwise still paying for, or by allocating those resources to other jobs. Adopting robust orchestration and horizontal scaling mechanisms such as containers provides granular control, which allows for clear resource utilization while reducing operating costs, particularly in a cloud environment.
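The trigger function described above can be as simple as a periodic check of queue depth against capacity. The following is a minimal sketch under assumed interfaces: pending_jobs(), running_workers(), and the provision/teardown calls stand in for whatever queue and cloud APIs an organization actually uses, and the thresholds are illustrative.

    import time

    # Hypothetical thresholds; tune these to the workload and billing model.
    MAX_IDLE_CYCLES = 3      # consecutive idle checks before tearing down
    JOBS_PER_WORKER = 2      # queued jobs each worker is expected to absorb

    def autoscale_loop(queue, cloud, interval_s: int = 60) -> None:
        """Create costly workers only while work exists; destroy them when idle."""
        idle_cycles = 0
        while True:
            pending = queue.pending_jobs()      # assumed queue API
            workers = cloud.running_workers()   # assumed cloud API

            if pending > workers * JOBS_PER_WORKER:
                # Trigger: demand exceeds capacity, so scale out.
                cloud.provision(count=1)
                idle_cycles = 0
            elif pending == 0 and workers > 0:
                # Trigger: sustained idleness, so scale in to stop paying.
                idle_cycles += 1
                if idle_cycles >= MAX_IDLE_CYCLES:
                    cloud.teardown(count=workers)
                    idle_cycles = 0
            else:
                idle_cycles = 0

            time.sleep(interval_s)

    # autoscale_loop(queue=JobQueue(), cloud=CloudPool())  # wire up real implementations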

Lessons Learned from Project Mayflower

From May to September 2023, the SEI conducted the Mayflower Project to explore how the Intelligence Community might set up an LLM, customize LLMs for specific use cases, and evaluate the trustworthiness of LLMs across use cases. You can read more about Mayflower in our report, A Retrospective in Engineering Large Language Models for National Security. Our team found that the ability to rapidly deploy compute environments based on project needs, data security, and ensuring system availability contributed directly to the success of our project. We share the following lessons learned to help others build AI infrastructures that meet their needs for cost, speed, and quality.

1. Account for your assets and estimate your needs up front.

Consider every piece of the environment an asset: data, compute resources for training, and evaluation tools are just a few examples of the assets that require consideration when planning. When these components are identified and properly orchestrated, they can work together efficiently as a system to deliver results and capabilities to end users. Identifying your assets begins with evaluating the data and framework the teams will be working with. The process of identifying each component of your environment requires expertise from, and ideally cross-training and collaboration between, both ML engineers and infrastructure engineers to accomplish efficiently.

[Figure: memory usage estimate graphic]

2. Build in time for evaluating toolkits.

Some toolkits will work better than others, and evaluating them can be a lengthy process that needs to be accounted for early on. If your organization has become used to tools developed internally, then external tools may not align with what your team members are accustomed to. Platform as a service (PaaS) providers for ML development offer a viable path to get started, but they may not integrate well with tools your organization has developed in-house. During planning, account for the time to evaluate or adapt either tool set, and compare these tools against one another when deciding which platform to leverage. Cost and usability are the primary factors you should consider in this comparison; the importance of these factors will vary depending on your organization's resources and priorities.

3. Design for flexibility.

Implement segmented storage resources for flexibility when attaching storage components to a compute resource. Design your pipeline such that your data, results, and models can be passed from one place to another easily. This approach allows resources to be placed on a common backbone, ensuring fast transfer and the ability to attach, detach, or mount modularly. A common backbone provides a place to store and call on large data sets and the results of experiments while maintaining good data hygiene.

A practice that can support flexibility is providing a standard "springboard" for experiments: flexible pieces of hardware that are independently powerful enough to run experiments. The springboard is similar to a sandbox in that it supports rapid prototyping, and you can reconfigure the hardware for each experiment.

For the Mayflower Project, we implemented separate container workflows in isolated development environments and integrated them using compose scripts. This strategy allows multiple GPUs to be called during the run of a job based on the available advertised resources of joined machines. The cluster provides multi-node training capabilities within a job submission format for better end-user productivity.

4. Isolate your data and protect your gold standards.

Properly isolating data can solve a variety of problems. When working collaboratively, it's easy to exhaust storage with redundant data sets. By communicating clearly with your team and defining a standard, common data set source, you can avoid this pitfall. This means that a primary data set must be highly accessible and provisioned with the level of use (that is, the amount of data and the speed and frequency at which team members need access) that your team expects at the time the system is designed. The source should be able to support the expected reads from however many team members need to use the data at any given time to perform their duties. Any output or transformed data must not be injected back into the same area in which the source data is stored, but should instead be moved into another working directory or a designated output location. This approach maintains the integrity of a source data set while minimizing unnecessary storage use, and it enables replication of an environment more easily than if the data set and working environment were not isolated.
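One lightweight way to enforce that separation is to guard it in the experiment harness itself. The sketch below is illustrative only: the directory names are hypothetical, and the check simply refuses to write transformed outputs anywhere inside the read-only source tree.

    from pathlib import Path

    # Hypothetical layout: a protected gold-standard source and a separate
    # per-experiment working area for all derived artifacts.
    SOURCE_ROOT = Path("/data/gold/corpus-v1").resolve()
    OUTPUT_ROOT = Path("/data/experiments/exp-042/outputs").resolve()

    def safe_output_path(name: str) -> Path:
        """Return a path for derived data, refusing anything under the source tree."""
        target = (OUTPUT_ROOT / name).resolve()
        if target == SOURCE_ROOT or SOURCE_ROOT in target.parents:
            raise ValueError(f"refusing to write {target} into the gold data set")
        target.parent.mkdir(parents=True, exist_ok=True)
        return target

    def transform(line: str) -> str:
        """Stand-in transformation; real preprocessing would go here."""
        return line.strip().lower()

    # Read from the source tree, write only to the isolated output area.
    with open(SOURCE_ROOT / "documents.txt", encoding="utf-8") as src, \
         open(safe_output_path("documents.normalized.txt"), "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(transform(line) + "\n")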

5. Save costs when working with cloud resources.


Government cloud resources have different availability than commercial resources, which often requires additional compensations or compromises. Using an existing on-premises resource can help reduce the costs of cloud operations. Specifically, consider using local resources as a springboard in preparation for scaling up. This practice limits overall compute time on expensive resources that, depending on your use case, may be far more powerful than required to perform initial testing and evaluation.


Figure 1: In this table from our report A Retrospective in Engineering Large Language Models for National Security, we provide information on performance benchmark tests for training LLaMA models of different parameter sizes on our custom 500-document set. For the estimates in the rightmost column, we define a practical experiment as LLaMA with 10k training documents for three epochs with GovCloud at $39.33/hour, LoRA (r=1, α=2, dropout = 0.05), and DeepSpeed. At the time of the report, Top Secret rates were $79.0533/hour.

Looking Ahead

Infrastructure is a major consideration as organizations look to build, deploy, and use LLMs and other AI tools. More work is needed, especially to meet challenges in unconventional environments, such as those at the edge.

As the SEI works to advance the discipline of AI engineering, a strong infrastructure base can support the scalability and robustness of AI systems. In particular, designing for flexibility allows developers to scale an AI solution up or down depending on system and use case needs. By protecting data and gold standards, teams can ensure the integrity of experiment results and support their replicability.

As the Department of Defense increasingly incorporates AI into mission solutions, the infrastructure practices outlined in this post can provide cost savings and a shorter runway to fielding AI capabilities. Specific practices like establishing a springboard platform can save time and costs in the long run.