
ios – Flutter scheduled (time-based) notifications don’t fire on Android — immediate notifications work


I’m building my first Flutter app for Android and iOS and I’m stuck with scheduled notifications on Android.

  • What works: immediate notifications (show(…)) display fine after the user grants notification permission.
  • What doesn’t: time-based (scheduled/weekly) notifications never fire on the selected weekday & time.
  • Permissions: On Android, the app asks for notification permission and I’ve also enabled Alarms & reminders (Schedule exact alarms) in system settings. Both are ON.

Environment

  • flutter_local_notifications: ^19.4.0
  • timezone: ^0.10.1
  • flutter_timezone: ^4.1.1
  • Device(s): Android Studio emulator (Medium Phone, API 36), Samsung Galaxy S23 (Android 15)

What I expect
When the user picks a weekday and time (e.g., Monday at 09:00), a notification should appear then, and repeat weekly.

What actually happens
Nothing fires at the scheduled time. No crashes. Immediate notifications using the same channel work reliably.

Code:
Settings_Screen

import 'package:my_app/config/notifications.dart';

 // --- Notifications ---
          const Divider(height: 32),
          SwitchListTile(
            title: const Text('Benachrichtigungen aktivieren'),
            value: _notifEnabled,
            onChanged: _toggleNotifications,
          ),

          if (_notifEnabled) ...[
            _buildExactStatusTile(), // new

            const SizedBox(height: 8),
            const Text('Wochentag', style: TextStyle(fontWeight: FontWeight.bold)),
            Wrap(
              spacing: 8,
              children: List.generate(7, (i) {
                final int day = i + 1; // 1=Mo ... 7=So
                const labels = ['Mo','Di','Mi','Do','Fr','Sa','So'];
                return ChoiceChip(
                  label: Text(labels[i]),
                  selected: _notifWeekday == day,
                  onSelected: (_) async {
                    setState(() => _notifWeekday = day);
                    await _persistNotificationPrefs();
                    await _applyNotificationSchedule();
                  },
                );
              }),
            ),

            const SizedBox(height: 16),
            ListTile(
              leading: const Icon(Icons.access_time),
              title: Text(
                _notifTime == null
                    ? 'Uhrzeit auswählen'
                    : 'Uhrzeit: ${_notifTime!.format(context)}',
              ),
              onTap: _pickNotificationTime,
            ),

            if (_notifWeekday == null || _notifTime == null)
              const Padding(
                padding: EdgeInsets.only(top: 8),
                child: Text(
                  'Hinweis: Bitte Wochentag und Uhrzeit wählen, damit die Erinnerung geplant wird.',
                  style: TextStyle(color: Colors.grey),
                ),
              ),
          ],
        ],
      ),

notifications

import 'package:flutter_local_notifications/flutter_local_notifications.dart';
import 'package:timezone/timezone.dart' as tz;
import 'package:flutter_timezone/flutter_timezone.dart';
import 'package:timezone/data/latest_all.dart' as tzdata;

static tz.TZDateTime _nextWeekdayTime({
    required int weekday,
    required int hour,
    required int minute,
    Duration grace = const Duration(seconds: 30),
  }) {
    final now = tz.TZDateTime.now(tz.local);
    final todayAt = tz.TZDateTime(tz.local, now.year, now.month, now.day, hour, minute);
    final daysUntil = (weekday - now.weekday) % 7;
    var scheduled = todayAt.add(Duration(days: daysUntil));
    if (scheduled.isBefore(now.add(grace))) {
      scheduled = scheduled.add(const Duration(days: 7));
    }
    return scheduled;
  }

  static Future scheduleWeekly({
    required int id,
    required String title,
    required String body,
    required int weekday,
    required int hour,
    required int minute,
    String? payload,
  }) async {
    final when = _nextWeekdayTime(weekday: weekday, hour: hour, minute: minute);
    final mode = await _pickAndroidScheduleMode();
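The excerpt ends just before the actual scheduling call. Assuming the rest follows the usual flutter_local_notifications pattern, the missing piece is a `zonedSchedule` call; note that weekly repetition requires `matchDateTimeComponents: DateTimeComponents.dayOfWeekAndTime`, and omitting it (or scheduling with an inexact mode on a device that restricts alarms) is a common reason time-based notifications never fire or never repeat. A sketch under those assumptions (the `_plugin` instance and the channel id/name are illustrative, not taken from the original code):

```dart
    await _plugin.zonedSchedule(
      id,
      title,
      body,
      when,
      const NotificationDetails(
        android: AndroidNotificationDetails(
          'weekly_channel',   // illustrative channel id
          'Weekly reminders', // illustrative channel name
          importance: Importance.max,
          priority: Priority.high,
        ),
      ),
      // e.g. AndroidScheduleMode.exactAllowWhileIdle when exact alarms are permitted
      androidScheduleMode: mode,
      // Without this line the schedule is one-shot; with it, the notification
      // repeats weekly on the matching weekday and time.
      matchDateTimeComponents: DateTimeComponents.dayOfWeekAndTime,
      payload: payload,
    );
  }
```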

main

import 'package:timezone/data/latest_all.dart' as tz;
import 'package:timezone/timezone.dart' as tz;
import 'package:flutter_timezone/flutter_timezone.dart';
import 'package:my_app/config/notifications.dart' show NotificationsService;

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  tz.initializeTimeZones();
  final name = await FlutterTimezone.getLocalTimezone(); // "Europe/Berlin" etc.
  tz.setLocalLocation(tz.getLocation(name));
  await NotificationsService.init();

AndroidManifest

    
    
    
    
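The manifest entries above were lost in formatting. For reference, the flutter_local_notifications README calls for entries along these lines when using scheduled notifications (adjust to your app; `USE_EXACT_ALARM` is reserved for alarm- and calendar-style apps, so most apps should request `SCHEDULE_EXACT_ALARM` instead):

```xml
<uses-permission android:name="android.permission.POST_NOTIFICATIONS" />
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<uses-permission android:name="android.permission.SCHEDULE_EXACT_ALARM" />

<!-- Inside <application>: receivers the plugin uses to deliver scheduled
     notifications and restore them after a reboot -->
<receiver android:exported="false"
    android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationReceiver" />
<receiver android:exported="false"
    android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationBootReceiver">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED" />
        <action android:name="android.intent.action.MY_PACKAGE_REPLACED" />
        <action android:name="android.intent.action.QUICKBOOT_POWERON" />
        <action android:name="com.htc.intent.action.QUICKBOOT_POWERON" />
    </intent-filter>
</receiver>
```

If the boot receiver is missing, schedules are silently dropped on reboot, which can look like notifications "never firing."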

What I’ve tried

  • Confirmed Notifications permission is granted (Android 13+).

  • Enabled Alarms & reminders (Schedule exact alarms) in system settings (Android 12+).

  • Tested with androidAllowWhileIdle: true.

  • Verified the channel is not blocked and importance is max.

  • Disabled battery optimizations for the app.

  • Tested on emulator and physical device.

  • Double-checked that the computed scheduled time is in the future and has the correct weekday.


An AI ‘Nerd Knob’ Every Network Engineer Should Know


Alright, my friends, I’m back with another post based on my learnings and exploration of AI and how it’ll fit into our work as network engineers. In today’s post, I want to share the first (of what will likely be many) “nerd knobs” that I think we all should be aware of and how they will affect our use of AI and AI tools. I can already sense the excitement in the room. After all, there’s not much a network engineer likes more than tweaking a nerd knob in the network to fine-tune performance. And that’s exactly what we’ll be doing here: fine-tuning our AI tools to help us be more effective.

First up, the requisite disclaimer or two.

  1. There are SO MANY nerd knobs in AI. (Shocker, I know.) So, if you all like this kind of blog post, I’d be happy to return in other posts where we look at other “knobs” and settings in AI and how they work. Well, I’d be happy to return once I understand them, at least. 🙂
  2. Changing any of the settings in your AI tools can have dramatic effects on results. This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth and explore and experiment. But do so in a safe lab environment.

For today’s experiment, I’m once again using LMStudio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LMStudio, check out my last blog, Creating a NetAI Playground for Agentic AI Experimentation.

Enough of the setup, let’s get into it!

The impact of working memory size, a.k.a. “context”

Let me set a scene for you.

You’re in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you’ve been assigned the happy task of getting to the bottom of it. You captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you’ve also been using AI tools to be more productive, 10x your work, impress your boss, you know, all the things that are happening right now.

So, you decide to see if AI can help you work through the data faster and get to the root of the issue.

You fire up your local AI assistant. (Yes, local, because who knows what’s in the debug messages? Best to keep it all safe on your laptop.)

You tell it what you’re up to, and paste in the log messages.

Asking an AI assistant to help debug a network issue

After getting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. But before you can take a sip of that iced tea and lemonade goodness, you see this has suddenly popped up on the screen:

AI Failure! “The AI has nothing to say”

Oh my.

“The AI has nothing to say.”!?! How could that be?

Did you find a question so difficult that AI can’t handle it?

No, that’s not the problem. Check out the helpful error message that LMStudio has kicked back:

“Trying to keep the first 4994 tokens when the context overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide shorter input.”

And we’ve gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much “working memory” it has. The technical term for this working memory is “context length.” If you try to send more data to an AI tool than can fit into the context length, you’ll hit this error, or something like it.

The error message indicates that the model was “loaded with context length of only 4096 tokens.” What’s a “token,” you wonder? Answering that would be a topic for an entirely different blog post, but for now, just know that “tokens” are the unit of size for the context length. And the first thing that happens when you send a prompt to an AI tool is that the prompt is converted into “tokens”.

So what can we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it isn’t a big deal to provide shorter input. But other times, like when we are dealing with large log files, that option isn’t practical; all the data is important.
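One way to anticipate the overflow before pasting, rather than discovering it from the error message, is to estimate the token count up front. A common rule of thumb for English text is roughly four characters per token; this is only an approximation (real tokenizers vary by model), but it is enough to tell whether a paste is anywhere near the limit. A quick sketch:

```dart
// Rough heuristic: English text averages about four characters per token.
// Real tokenizers (BPE, SentencePiece, etc.) vary, so treat this as a ballpark.
int estimateTokens(String text) => (text.length / 4).ceil();

void main() {
  // Roughly 120 log lines of ~160 characters each, as in the story above.
  final logs = List.filled(120, 'x' * 160).join('\n');
  final estimated = estimateTokens(logs);
  print('Estimated tokens: $estimated');
  print('Fits in a 4096-token context: ${estimated <= 4096}');
}
```

By this estimate, 120 lines of logs land near 4,800 tokens, past the 4096-token default but comfortably inside an 8192-token window, which lines up with the "first 4994 tokens" figure in the error above.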

Time to turn the knob!

It’s that first option, to load the model with a larger context length, that’s our nerd knob. Let’s turn it.

From within LMStudio, head over to “My Models” and click to open up the configuration settings interface for the model.

Accessing Model Settings

You’ll get a chance to view all the knobs that AI models have. And as I mentioned, there are several of them.

Default configuration settings

But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens. But it supports up to 8192 tokens. Let’s max it out!

Maxing out the Context Length

LMStudio provides a helpful warning and a likely reason why the model doesn’t default to the max. The context length takes memory and resources, and raising it to “a high value” can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I often use has that high of a max), you might not want to just max it out immediately. Instead, increase it a bit at a time to find the sweet spot: a context length big enough for the job, but not oversized.

As network engineers, we’re used to fine-tuning knobs for timers, frame sizes, and so many other things. This is right up our alley!

Once you’ve updated your context length, you’ll need to “Eject” and “Reload” the model for the setting to take effect. But once that’s done, it’s time to enjoy the change we’ve made!

With the extra context length, the AI assistant fully analyzes the logs

And look at that: with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about what they show.

I particularly like the shade it threw my way: “…consider seeking assistance from … a qualified network engineer.” Well played, AI. Well played.

But bruised ego aside, we can continue the AI-assisted troubleshooting with something like this.

The AI Assistant puts a timeline of the problem together

And we’re off to the races. We’ve been able to leverage our AI assistant to:

  1. Process a significant amount of log and debug data to identify potential issues
  2. Develop a timeline of the problem (which will be super useful in the help desk ticket and root cause analysis documents)
  3. Identify some next steps we can take in our troubleshooting efforts.

All stories must end…

And there you have it, our first AI Nerd Knob: Context Length. Let’s review what we learned:

  1. AI models have a “working memory” that’s called “context length.”
  2. Context length is measured in “tokens.”
  3. Oftentimes an AI model will support a larger context length than the default setting.
  4. Increasing the context length will require more resources, so make changes slowly; don’t just max it out completely.

Now, depending on what AI tool you’re using, you may NOT be able to adjust the context length. If you’re using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and models you have access to. However, there most definitely IS a context length that will factor into how much “working memory” the AI tool has. And being aware of that fact, and its impact on how you can use AI, is important. Even if the knob in question is behind lock and key. 🙂

If you enjoyed this look under the hood of AI and want to learn about more options, please let me know in the comments: Do you have a favorite “knob” you like to turn? Share it with all of us. Until next time!

PS… If you’d like to learn more about using LMStudio, my friend Jason Belk put a free tutorial together called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. Check it out!

 

Sign up for Cisco U. | Join the Cisco Learning Network today for free.

Learn with Cisco

X | Threads | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.

Read next:

Creating a NetAI Playground for Agentic AI Experimentation

Take an AI Break and Let the Agent Heal the Network




Perspectives on Generative AI in Software Engineering and Acquisition


In the realm of software engineering and software acquisition, generative AI promises to improve developer productivity and the rate of production of related artifacts, and in some cases their quality. It is essential, however, that software and acquisition professionals learn to apply AI-augmented methods and tools in their workflows effectively. SEI researchers addressed this topic in a webcast that focused on the future of software engineering and acquisition using generative AI technologies, such as ChatGPT, DALL·E, and Copilot. This blog post excerpts and lightly edits portions of that webcast to explore the expert views of applying generative AI in software engineering and acquisition. It is the latest in a series of blog posts on these topics.

Moderating the webcast was SEI Fellow Anita Carleton, director of the SEI Software Solutions Division. Participating in the webcast were a group of SEI thought leaders on AI and software, including James Ivers, principal engineer; Ipek Ozkaya, technical director of the Engineering Intelligent Software Systems group; John Robert, deputy director of the Software Solutions Division; Douglas Schmidt, who was the Director of Operational Test and Evaluation at the Department of Defense (DoD) and is now the inaugural dean of the School of Computing, Data Sciences, and Physics at William & Mary; and Shen Zhang, a senior engineer.

Anita: What are the gaps, risks, and challenges that you all see in using generative AI that need to be addressed to make it more effective for software engineering and software acquisition?

Shen: I’ll focus on two in particular. One that is critical to the DoD is explainability. Explainable AI is essential because it allows practitioners to gain an understanding of the results output from generative AI tools, especially when we use them for mission- and safety-critical applications. There is a lot of research on this topic. Progress is slow, however, and not all approaches apply to generative AI, especially when it comes to identifying and understanding incorrect output. On the other hand, it is helpful to use prompting techniques like chain-of-thought reasoning, which decomposes a complex task into a sequence of smaller subtasks. These smaller subtasks can more easily be reviewed incrementally, reducing the risk of acting on incorrect outputs.

The second area is security and disclosure, which is especially critical for the DoD and other high-stakes domains such as health care, finance, and aviation. For many of the SEI’s DoD sponsors and partners, we work at impact levels of IL5 and beyond. In such a setting, users can’t just take that information, be it text, code, or any kind of input, and pass it into a commercial service, such as ChatGPT, Claude, or Gemini, that doesn’t provide sufficient controls on how the data are transmitted, used, and stored.

Commercial IL5 offerings can mitigate concerns about data handling, as they can make use of local LLMs air-gapped from the internet. There are, however, trade-offs between the use of powerful commercial LLMs that tap into resources around the web and the more limited capabilities of local models. Balancing capability, security, and disclosure of sensitive data is key.

John: A key challenge in applying generative AI to the development of software and its acquisition is ensuring proper human oversight, which is required regardless of which LLM is used. It is not our intent to replace people with LLMs or other forms of generative AI. Instead, our goal is to help people bring these new tools into their software engineering and acquisition processes, interact with them reliably and responsibly, and ensure the accuracy and fairness of their results.

I also want to mention a concern about overhyped expectations. Many claims made today about what generative AI can do are overhyped. At the same time, however, generative AI is providing many opportunities and benefits. For example, we have found that applying LLMs to some work at the SEI and elsewhere significantly improves productivity in many software engineering activities, though we are also painfully aware that generative AI won’t solve every problem every time. For example, using generative AI to synthesize software test cases can accelerate software testing, as mentioned in recent studies, such as Automated Unit Test Improvement using Large Language Models at Meta. We are also exploring using generative AI to help engineers learn testing and analyze data to find strengths and weaknesses in software assurance data, such as issues or defects related to safety or security as described in the paper Using LLMs to Adjudicate Static-Analysis Alerts.

I’d also like to mention two recent SEI articles that further cover the challenges that generative AI needs to address to make it more effective for software engineering and software acquisition:

Anita: Ipek, how about some gaps, challenges, and risks from your perspective?

Ipek: I think it’s important to discuss the scale of acquisition systems as well as their evolvability and sustainability aspects. We are at a stage in the evolution of generative-AI-based software engineering and acquisition tools where we still don’t know what we don’t know. In particular, the software development tasks where generative AI has been applied so far are fairly narrow in scope, for example, interacting with a relatively small number of methods and classes in popular programming languages and platforms.

In contrast, the kinds of software-reliant acquisition systems we deal with at the SEI are significantly larger and more complex, containing millions of lines of code and thousands of modules and using a range of legacy programming languages and platforms. Moreover, these systems will be developed, operated, and sustained over decades. We therefore don’t know yet how well generative AI will work with the overall structure, behavior, and architecture of these software-reliant systems.

For example, if a team applying LLMs to develop and sustain portions of an acquisition system makes changes in one particular module, how consistently will those changes propagate to other, similar modules? Likewise, how will the rapid evolution of LLM versions affect generated code dependencies and technical debt? These are very challenging problems, and while there are emerging approaches to address some of them, we shouldn’t assume that all of these problems have been, or will be, addressed soon.

Anita: What are some opportunities for generative AI as we think about software engineering and software acquisition?

James: I tend to think of these opportunities from several perspectives. One is, what’s a natural problem for generative AI, where it’s a really good fit, but where I as a developer am less facile or don’t want to commit time to? For example, generative AI is often good at automating highly repetitive and common tasks, such as generating scaffolding for a web application that gives me the structure to get started. Then I can come in and really flesh out that scaffolding with my domain-specific information.

When most of us were just starting out in the computing field, we had mentors who gave us good advice along the way. Likewise, there are opportunities now to ask generative AI to provide advice, for example, what elements I should include in a proposal for my supervisor or how I should approach a testing strategy. A generative AI tool may not always provide deep domain- or program-specific advice. However, for developers who are learning these tools, it’s like having a mentor who gives you pretty good advice most of the time. Of course, you can’t trust everything these tools tell you, but we didn’t always trust everything our mentors told us either!

Doug: I’d like to riff off of what James was just saying. Generative AI holds significant promise to transform and modernize the static, document-heavy processes common in large-scale software acquisition programs. By automating the curation and summarization of large numbers of documents, these technologies can mitigate the chaos often encountered in managing extensive archives of PDFs and Word files. This automation reduces the burden on the technical staff, who often spend considerable time trying to regain an understanding of existing documentation. By enabling quicker retrieval and summarization of relevant documents, AI can enhance productivity and reduce redundancy, which is essential when modernizing the acquisition process.

In practical terms, the application of generative AI in software acquisition can streamline workflows by providing dynamic, information-centric systems. For instance, LLMs can sift through vast data repositories to identify and extract pertinent information, thereby simplifying the task of managing large volumes of documentation. This capability is particularly useful for keeping up to date with the evolving requirements, architecture, and test plans in a project, ensuring all team members have timely access to the most relevant information.

However, while generative AI can improve efficiency dramatically, it’s crucial to maintain the human oversight John mentioned earlier to ensure the accuracy and relevancy of the information extracted. Human expertise remains essential in interpreting AI outputs, particularly in nuanced or critical decision-making areas. Ensuring these AI systems are audited regularly, and that their outputs can be (and are) verified, helps safeguard against errors and ensures that integrating AI into software acquisition processes augments human expertise rather than replaces it.

Anita: What are some of the key challenges you foresee in curating data for building a trusted LLM for acquisition in the DoD domain? Do any of you have insights from working with DoD programs here?

Shen: In the acquisition domain, as part of the contract, a number of customer templates and standard deliverables are imposed on vendors. These contracts often place a substantial burden on government teams to review deliverables from contractors to ensure they adhere to those standards. As Doug mentioned, here’s where generative AI can help by scaling and efficiently validating that vendor deliverables meet those government standards.

More importantly, generative AI offers an objective review of the data being analyzed, which is key to improving impartiality in the acquisition process. When dealing with multiple vendors, for example in reviewing responses to a broad agency announcement (BAA), it’s critical that there’s objectivity in assessing submitted proposals. Generative AI can really help here, especially when instructed with appropriate prompt engineering and prompt patterns. Of course, generative AI has its own biases, which circles back to John’s admonition to keep informed and cognizant humans in the loop to help mitigate risks with LLM hallucinations.

Anita: John, I know you have worked a great deal with Navy programs and thought you might have some insights here as well.

John: As we develop AI models to enhance and modernize software acquisition activities in the DoD domain, certain domains present early opportunities, such as the standardization of government policies for ensuring safety in aircraft or ships. These extensive regulatory documents often span several hundred pages and dictate a range of activities that acquisition program offices require developers to undertake to ensure safety and compliance within these areas. Safety standards in these domains are frequently managed by specialized government teams who engage with multiple programs, have access to relevant datasets, and possess experienced personnel.

In these specialized acquisition contexts, there are opportunities to either develop dedicated LLMs or fine-tune existing models to meet specific needs. LLMs can serve as valuable resources to augment the capabilities of these teams, improving their efficiency and effectiveness in maintaining safety standards. For example, by synthesizing and interpreting complex regulatory texts, LLMs can help teams by providing insights and automated compliance checks, thereby streamlining the often lengthy and complicated process of meeting governmental safety regulations.

These domain-specific applications represent some near-term opportunities for LLMs because their scope of usage is bounded in terms of the kinds of needed data. Likewise, government organizations already collect, organize, and analyze data specific to their area of governance. For example, government vehicle safety organizations have years of data related to software safety to inform regulatory policy and standards. Gathering and analyzing vast amounts of data for many potential uses is a significant challenge in the DoD for various reasons, some of which Doug mentioned earlier. I therefore think we should focus on building trusted LLMs for specific domains first, prove their effectiveness, and then extend their data and uses more broadly after that.

James: With respect to your question about building trusted LLMs, we should remember that we don’t have to put all our trust in the AI itself. We need to think about workflows and processes. In particular, if we put other safeguards, be they humans, static analysis tools, or whatever, in place, then we don’t always need absolute trust in the AI to have confidence in the outcome, as long as they are comprehensive and complementary perspectives. It is therefore essential to take a step back and think about the workflow as a whole. Do we trust the workflow, the process, and the people in the loop? may be a better question than simply Do we trust the AI?

Future Work to Address Generative AI Challenges in Acquisition and Software Engineering

While generative AI holds great promise, several gaps must be closed so that software engineering and acquisition organizations can use generative AI more widely and consistently. Specific examples include:

  • Accuracy and trust: Generative AI can produce hallucinations, which may not be obvious to less experienced users and can create significant issues. Some of these errors can be partially mitigated with effective prompt engineering, consistent testing, and human oversight. Organizations should adopt governance standards that continuously monitor generative AI performance and ensure human accountability throughout the process.
  • Data security and privacy: Generative AI operates on large sets of data, including data that is private or must be controlled. Generative AI online services are primarily intended for public data, so sharing sensitive or proprietary information with these public services can be problematic. Organizations can address these issues by creating secure generative AI deployment configurations, such as private cloud infrastructure, air-gapped systems, or data privacy vaults.
  • Business processes and cost: Organizations deploying any new service, including generative AI services, must always consider changes to business processes and financial commitments beyond initial deployment. Generative AI costs can include infrastructure investments, model fine-tuning, security monitoring, upgrades to new and improved models, and training programs for proper use and use cases. These up-front costs are balanced by improvements in development and analysis productivity and, potentially, quality.
  • Ethical and legal risks: Generative AI systems can introduce ethical and legal challenges, including bias, fairness, and intellectual property rights. Biases in training data may lead to unfair outcomes, making it essential to include human review of fairness as a mitigation. Organizations should establish guidelines for ethical use of generative AI, and consider leveraging resources such as the NIST AI Risk Management Framework to guide responsible use.

Generative AI presents exciting possibilities for software engineering and software acquisition. However, it is a fast-evolving technology with interaction styles and input-output assumptions different from those familiar to software and acquisition professionals. In a recent IEEE Software article, Anita Carleton and her coauthors emphasized that software engineering and acquisition professionals need training to manage and collaborate with AI systems effectively and ensure operational efficiency.

In addition, John and Doug participated in a recent webinar, Generative Artificial Intelligence in the DoD Acquisition Lifecycle, with other government leaders who further emphasized the importance of ensuring generative AI is fit for use in high-stakes domains such as defense, healthcare, and litigation. Organizations can only benefit from generative AI by understanding how it works, recognizing its risks, and taking steps to mitigate them.

Detection and Repair: The Cost of Remediation


Bringing an existing codebase into compliance with the SEI CERT Coding Standard requires a cost of time and effort. The typical way of assessing this cost is to run a static analysis tool on the codebase (noting that installing and maintaining the static analysis tool may incur its own costs). A simple metric for estimating this cost is therefore to count the number of static analysis alerts that report a violation of the CERT guidelines. (This assumes that fixing any one alert typically has no impact on other alerts, though occasionally a single issue may trigger multiple alerts.) But those who are familiar with static analysis tools know that the alerts are not always reliable: there are false positives that must be detected and disregarded. Some guidelines are inherently easier than others for detecting violations.

This year, we plan on making some exciting updates to the SEI CERT C Coding Standard. This blog post is about one of our ideas for improving the standard. This change would update the standard to better harmonize with the current state of the art for static analysis tools, as well as simplify the process of source code security auditing.

For this post, we are asking our readers and users to provide us with feedback. Would the changes that we propose to our Risk Assessment metric disrupt your work? How much effort would they impose on you, our readers? If you would like to comment, please send an email to info@sei.cmu.edu.

The premise for our changes is that some violations are easier to repair than others. In the SEI CERT Coding Standard, we assign each guideline a Remediation Cost metric, which is defined with the following text:

Remediation Cost: How expensive is it to comply with the rule?

Value   Meaning   Detection   Correction
1       High      Manual      Manual
2       Medium    Automatic   Manual
3       Low       Automatic   Automatic

Additionally, each guideline also has a Priority metric, which is the product of the Remediation Cost and two other metrics that assess severity (how consequential is it to not comply with the rule?) and likelihood (how likely is it that violating the rule leads to an exploitable vulnerability?). All three metrics can be represented as numbers ranging from 1 to 3, which can produce a product between 1 and 27 (that is, 3*3*3), where low numbers imply higher cost.

The above table could alternatively be represented this way:

Is Automatically…   Not Repairable   Repairable
Not Detectable      1 (High)         1 (High)
Detectable          2 (Medium)       3 (Low)

This Remediation Cost metric was conceived back in 2006 when the SEI CERT C Coding Standard was first created. We did not use more precise definitions of detectable or repairable at the time. But we did assume that some guidelines would be automatically detectable while others would not. Likewise, we assumed that some guidelines would be repairable while others would not. Finally, a guideline that was repairable but not detectable was assigned a High cost on the grounds that it was not worthwhile to repair code if we could not detect whether or not it complied with a guideline.

We also reasoned that the questions of detectability and repairability should be considered in theory. That is, is a satisfactory detection or repair heuristic possible? When considering whether such a heuristic exists, you may ignore whether a commercial or open source product claims to implement it.

Today, the situation has changed, and therefore we need to update our definitions of detectable and repairable.

Detectability

A recent major change has been to add an Automated Detection section to every CERT guideline. This section identifies the analysis tools that claim to detect (and repair) violations of the guideline. For example, Parasoft claims to detect violations of every rule and recommendation in the SEI CERT C Coding Standard. If a guideline's Remediation Cost is High, indicating that the guideline is non-detectable, does that create an incompatibility with all the tools listed in the Automated Detection section?

The answer is that the tools for such a guideline may be subject to false positives (that is, providing alerts on code that actually complies with the guideline), or false negatives (that is, failing to report some truly noncompliant code), or both. It is easy to construct an analyzer with no false positives (simply never report any alerts) or no false negatives (simply report that every line of code is noncompliant). But for many guidelines, detection with no false positives and no false negatives is, in theory, undecidable. Some attributes are easier to analyze, but in general practical analyses are approximate, suffering from false positives, false negatives, or both. (A sound analysis is one that has no false negatives, though it may have false positives. Most practical tools, however, have both false negatives and false positives.) For example, EXP34-C, the C rule that forbids dereferencing null pointers, is not automatically detectable by this stricter definition. As a counterexample, violations of rule EXP45-C (do not perform assignments in selection statements) can be detected reliably.

A suitable definition of detectable is: can a static analysis tool determine whether code violates the guideline with both a low false positive rate and a low false negative rate? We do not require that there can never be false positives or false negatives, but we do require that both rates be small, meaning that a tool's alerts are complete and accurate for practical purposes.

Most guidelines, including EXP34-C, will, by this definition, be undetectable using the current crop of tools. This does not mean that tools cannot report violations of EXP34-C; it just means that any such report might be a false positive, the tool might miss some violations, or both.

Repairability

Our notion of what is repairable has been shaped by recent advances in Automated Program Repair (APR) research and technology, such as the Redemption project. In particular, the Redemption project and tool consider a static analysis alert repairable regardless of whether it is a false positive. Repairing a false positive should, in theory, not alter the code's behavior. Furthermore, in Redemption, a single repair should be limited to a local region and not distributed throughout the code. For instance, changing the number or types of a function's parameters requires modifying every call to that function, and function calls can be distributed throughout the code. Such a change would therefore not be local.

With that said, our definition of repairable can be expressed as: code is repairable if an alert can be reliably fixed by an APR tool, and the only modifications to the code are near the site of the alert. Furthermore, repairing a false positive alert must not break the code. For example, the null-pointer-dereference rule (EXP34-C) is repairable because a pointer dereference can be preceded by an automatically inserted null check. In contrast, CERT rule MEM31-C requires that all dynamic memory be freed exactly once. An alert complaining that some pointer goes out of scope without being freed seems repairable by inserting a call to free(pointer). However, if the alert is a false positive and the pointer's pointed-to memory was already freed, then the APR tool has just created a double-free vulnerability, in essence converting working code into vulnerable code. Therefore, rule MEM31-C is not, with current capabilities, (automatically) repairable.

The New Remediation Cost

While the previous Remediation Cost metric did treat detectability and repairability as interrelated, we now believe they are independent and interesting metrics in their own right. A rule that was neither detectable nor repairable was given the same remediation cost as one that was repairable but not detectable, and we now believe these differences should be reflected in our metrics. We are therefore considering replacing the old Remediation Cost metric with two metrics: Detectable and Repairable. Both are simple yes/no questions.

There is still the question of how to generate the Priority metric. As noted above, this was the product of the Remediation Cost, expressed as an integer from 1 to 3, with two other integers from 1 to 3. We can therefore derive a new Remediation Cost metric from the Detectable and Repairable metrics. The most obvious solution would be to assign a 1 to each no and a 2 to each yes, and take the product. Thus, we have created a metric similar to the old Remediation Cost using the following table:

Is Automatically…   Not Repairable   Repairable
Not Detectable      1                2
Detectable          2                4

However, we decided that a value of 4 is problematic. First, the old Remediation Cost metric had a maximum of 3, and having a maximum of 4 skews our product. The highest priority would now be 3*3*4=36 instead of 27. This would also make the new remediation cost more significant than the other two metrics. We decided that replacing the 4 with a 3 solves these problems:

Is Automatically…   Not Repairable   Repairable
Not Detectable      1                2
Detectable          2                3

Next Steps

Next will come the task of examining each guideline to replace its Remediation Cost with the new Detectable and Repairable metrics. We must also update the Priority and Level metrics for guidelines where the Detectable and Repairable metrics disagree with the old Remediation Cost.

Tools and processes that incorporate the CERT metrics will need to update their metrics to reflect CERT's new Detectable and Repairable metrics. For example, CERT's own SCALe project provides software security audits ranked by Priority, and future rankings of the CERT C rules will change.

Here are the old and new metrics for the C Integer Rules:

Rule      Detectable   Repairable   New REM   Old REM   Title
INT30-C   No           Yes          2         3         Ensure that unsigned integer operations do not wrap
INT31-C   No           Yes          2         3         Ensure that integer conversions do not result in lost or misinterpreted data
INT32-C   No           Yes          2         3         Ensure that operations on signed integers do not result in overflow
INT33-C   No           Yes          2         2         Ensure that division and remainder operations do not result in divide-by-zero errors
INT34-C   No           Yes          2         2         Do not shift an expression by a negative number of bits or by greater than or equal to the number of bits that exist in the operand
INT35-C   No           No           1         2         Use correct integer precisions
INT36-C   Yes          No           2         3         Converting a pointer to integer or integer to pointer

In this table, New REM (Remediation Cost) is the metric we would produce from the Detectable and Repairable metrics, and Old REM is the current Remediation Cost metric. Clearly, only INT33-C and INT34-C have the same New REM values as Old REM values. This means that their Priority and Level metrics remain unchanged, while the other rules would have revised Priority and Level metrics.

Once we have computed the new Risk Assessment metrics for the CERT C Secure Coding Rules, we would next address the C recommendations, which also have Risk Assessment metrics. We would then proceed to update these metrics for the remaining CERT standards: C++, Java, Android, and Perl.

Auditing

The new Detectable and Repairable metrics also alter how source code security audits should be conducted.

Any alert from a guideline that is automatically repairable could, in fact, not be audited at all. Instead, it could be immediately repaired. If an automated repair tool is not available, it could instead be repaired manually by developers, who may not care whether or not it is a true positive. An organization may choose whether to apply all of the potential repairs or to review them; it might apply additional effort to review automated repairs, but this may only be necessary to satisfy its standards of software quality and its trust in the APR tool.

Any alert from a guideline that is automatically detectable should also, in fact, not be audited. It should be repaired automatically with an APR tool or sent to the developers for manual repair.

This raises a potential question: detectable guidelines should, in theory, almost never yield false positives. Is this actually true? An alert might still be false due to bugs in the static analysis tool or bugs in the mapping (between the tool and the CERT guideline). We could conduct a series of source code audits to confirm that a guideline actually is automatically detectable and revise guidelines that are not, in fact, automatically detectable.

Only guidelines that are neither automatically detectable nor automatically repairable should actually be manually audited.

Given the large number of static analysis alerts generated by most code in the DoD, any optimizations to the auditing process should result in more alerts being audited and repaired. This will reduce the effort required to address alerts. Many organizations do not address all alerts, and they consequently accept the risk of unresolved vulnerabilities in their code. So instead of reducing effort, this improved process reduces risk.

This improved process can be summed up by the following pseudocode:

  • For each alert:
    • If the alert is repairable:
      • If we have an APR tool to repair the alert:
        • Use the APR tool to repair the alert
      • else (no APR tool):
        • Send the alert to developers for manual repair
    • else (the alert is not repairable):
      • If the alert is detectable:
        • Send the alert to developers for manual repair
      • else (the alert is not detectable):
        • Audit the alert manually

Your Feedback Needed

We are publishing this specific plan to solicit feedback. Would these changes to our Risk Assessment metric disrupt your work? How much effort would they impose on you? If you would like to comment, please send an email to info@sei.cmu.edu.

How We Automate Complex Business Workflows with Camunda BPM and Spring Boot


Enterprise software, as shown by Statista, is developed to meet the needs of large organizations. As opposed to consumer software made for personal use, enterprise solutions must focus on scalability, integration with other systems, and handling large amounts of data.

In practice, this means the software must support sophisticated business processes, multiple user profiles, and changing business requirements while remaining adaptable, fast, and auditable.

But enterprise application development that takes all of this into account, from complex workflows and integration with existing systems to strict compliance requirements, can be enormously challenging and time-consuming.

Development teams often grapple with fragmented automation, limited insight into business processes, and difficulty keeping pace with growing business requirements.

That's where Camunda BPM proves its value. When Camunda BPM becomes part of Spring Boot development services, it provides a powerful tool for organizing and automating business processes. It helps teams clearly map out workflows, improve visibility, and adapt enterprise applications more easily as needs change.

What Is BPM?

BPM stands for Business Process Management. It is a toolset and a discipline that helps organizations design, automate, monitor, and optimize their business processes.

Rather than depending on manual processes or stand-alone software, BPM offers a systematic method to visually define workflows, run them automatically with software, track how they perform, and revise them as business conditions change.

A BPM system typically includes:

  • Process modeling: Using standard diagrams (such as BPMN, Business Process Model and Notation) to graphically depict each step in a process.
  • Process automation: Software that autonomously executes and manages these processes.
  • Monitoring and analytics: Functionality to observe process performance in real time and gather data for optimization.
  • Continuous improvement: Simple modification and optimization of processes based on data and evolving requirements.


How Business Process Automation Works Without BPM

Before adopting a BPM solution like Camunda, many enterprises struggle with process automation that is fragmented and hard to manage. This introduces several challenges:

Many Processes Running at Once

Large organizations have a great deal going on at the same time. Sometimes these processes seem separate, but often they overlap or depend on shared data and resources.

For example, when a company handles an order, the sales, inventory, billing, and shipping teams all have their own workflows that need to work together.

Without a clear process for coordinating these simultaneous activities, teams are bound to end up doing duplicate work or suffering delays when handovers across processes are poorly defined.

Outdated and Unclear Processes

Many processes have grown over the years without being clearly written down. Instead, they are often based on how people have done things for a long time.

For example, a simple expense approval may be accomplished by sending emails and working in spreadsheets instead of through a precise, automated process.

As a result, it becomes hard to understand or improve the workflow, and new employees may find it confusing.

Many Different People Involved

Business processes usually involve many people, each with different roles and access to information.

For instance, in a loan approval process, loan officers, risk managers, and compliance teams all perform different jobs and see different parts of the data. Managing who can do what, and who can see what, is problematic without the right tools.

Difficulties in Integration with External and Internal Systems

Enterprises rarely operate in isolation; their work often depends on different software systems, such as billing, customer management, or HR, that need to connect for processes to work as required.

For example, processing a customer order might require live inventory checks, payment authorization, and shipping label generation across different systems.

Without proper process management, these integrations can become fragile points prone to errors or delays.

Why Camunda BPM? Transparency, Optimization, and Full Control

Camunda is an open-source platform for automating workflows and business decisions. It helps teams model, run, and monitor complex processes using standard notations, such as BPMN 2.0 (for workflows), DMN (for decisions), and CMMN (for case management).


With Camunda, it is easier to connect systems, automate tasks, and keep full visibility into how business operations run.

One of Camunda's main strengths is its use of BPMN 2.0, which allows teams to describe business processes in a clear, visual way. This makes it easier to formalize workflows, spot flaws, and communicate processes across both technical and non-technical teams:

Before: Teams often automate individual steps without seeing the full picture. This makes it hard to plan the work or improve the process as a whole.

After: With Camunda, the full process is mapped out visually using BPMN 2.0. Teams can spot weak points, optimize the workflow, and then automate the steps that matter most.

Before: Process descriptions are stored in separate documents that quickly go stale.

After: The BPMN diagram lives inside the system and acts as the real-time source of truth. The system runs exactly as shown in the diagram, so the documentation is always up to date.

Before: Making changes to a process is risky and time-consuming because it is not clear how a change will affect everything else.

After: Changes can be made right in the BPMN diagram, making it easier to understand their impact and update the process safely.

Before: Tracking how processes are performing often requires building custom tools.

After: Camunda includes tools like Camunda Cockpit, which let teams monitor processes and gather statistics out of the box.

Before: It is hard to structure processes, define when certain steps should happen, or control who can see and do what.

After: Camunda makes it easy to set rules for step execution, user permissions, and data visibility, all in a clear and manageable way.

Before Camunda                      After Camunda Integration
Manual step-by-step automation      End-to-end process modeling and optimization
External and outdated documentation BPMN diagrams as live, executable documentation
Costly process changes              Visual change management within diagrams
Custom monitoring solutions         Built-in tools like Camunda Cockpit
Poor visibility into process roles  Defined access, decision-making, and data visibility per user group

Real Benefits: Before and After Camunda

How Camunda BPM Integration Works in Practice: The BM4A Approach

The BM4A approach provides a pre-built integration module that connects your application's business logic with Camunda.

This module acts as an intermediary between your main system and the Camunda workflow engine, handling data exchange, user task processing, and service orchestration.

With this setup, you no longer have to develop low-level integrations from scratch; Camunda becomes an organic extension of your application's architecture.
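As a rough sketch of that intermediary role (all names here are hypothetical illustrations, not the actual BM4A or Camunda API), application code can depend on a small gateway interface while a production implementation forwards each call to the Camunda engine:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical gateway the application codes against; a production
// implementation would delegate to the Camunda engine's services.
interface WorkflowGateway {
    String startProcess(String processKey, Map<String, Object> variables);
    void completeTask(String taskId, Map<String, Object> variables);
}

// In-memory stand-in, handy for tests when no engine is running.
class InMemoryWorkflowGateway implements WorkflowGateway {
    private final Map<String, Map<String, Object>> state = new HashMap<>();
    private int nextId = 0;

    @Override
    public String startProcess(String processKey, Map<String, Object> variables) {
        String id = processKey + ":" + (++nextId);
        state.put(id, new HashMap<>(variables));
        return id;
    }

    @Override
    public void completeTask(String taskId, Map<String, Object> variables) {
        state.computeIfAbsent(taskId, k -> new HashMap<>()).putAll(variables);
    }

    Map<String, Object> variablesOf(String id) {
        return state.getOrDefault(id, Map.of());
    }
}
```

The design point is that business code never touches engine internals, so the workflow engine can be swapped or mocked without rewriting the application.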


Let's look at the order of steps in which the integration process typically occurs with BM4A:

Requirements Collection

The process begins with the collection of both technical and business requirements. Key workflows targeted for automation are identified early, at the outset.

Specification and BPMN Modeling

A detailed specification is prepared, which includes business process diagrams modeled in BPMN 2.0. These diagrams provide a clear, visual representation of workflows and serve as a reference for both technical teams and business stakeholders.

Process Review and Optimization

The proposed workflows are reviewed collaboratively with stakeholders, who identify opportunities for optimization, define the levels of detail, and establish process priorities.

Initial System Deployment (within 2 days)

A basic application version, integrated with Camunda via BM4A, is deployed within two days of project initiation. This version includes a functioning interface and backend, providing early access to the system.

Workflow Implementation

BPMN diagrams are embedded into the system, and business logic is added to each step. Tasks can be configured to trigger specific code executions, system integrations, or human actions, depending on the diagram's structure.

Regular Feedback and Iteration

Frequent demonstrations and feedback sessions are conducted. Stakeholders can monitor process execution visually and propose changes without needing to review code. Adjustments to workflows or task priorities are implemented promptly.

MVP Launch and User Testing

An MVP (Minimum Viable Product) is launched, and initial user groups are onboarded. Real-world feedback is collected to validate process flows and identify critical improvements.

Ongoing Enhancements

Improvements are made based on user feedback. New logic or conditions can be integrated into existing workflows without altering the core architecture, thanks to Camunda's flexible engine.

Embedded Documentation and Training

BPMN diagrams within the system serve as living documentation. This simplifies training, onboarding, and maintenance by ensuring that operational processes are always aligned with actual system behavior.

Support and Expansion

Post-release, ongoing support is provided along with the addition of new features, processes, or integrations. The system remains scalable and adaptable to evolving business needs.

Key Benefits

In this way, using the BM4A + Camunda approach, organizations typically reap the following benefits:

  • Clear understanding of business processes and their deployment
  • Agile development schedules with early system access
  • The option to iterate on and refine workflows without affecting the architecture
  • Documentation embedded within the system for ease of use and maintenance
  • A flexible base for long-term process automation projects

Additionally, it is worth mentioning that this pattern is suitable for projects ranging from internal automation tooling to large-scale enterprise systems.

Conclusion

Using Camunda BPM together with BM4A makes enterprise software development faster, more adaptable, and far easier to manage. Thanks to ready-to-use BM4A modules, it is possible to cut both development time and costs.

The system's architecture remains flexible and scalable, which means it can grow and adapt as needed. It also becomes much simpler to estimate the time and effort required for new features.

Most importantly, the process becomes more transparent for everyone involved. Stakeholders stay in the loop, can clearly see how things are progressing, and help shape the outcome.

Overall, it is a practical way to build systems that are efficient now and ready for the future.