
Android Developers Blog: #WeArePlay | 4 stories of founders building apps for the LGBTQIA+ community




Posted by Robbie McLachlan, Developer Marketing


#WeArePlay celebrates the inspiring journeys of the people behind apps and games on Google Play. In honor of Pride Month, we're highlighting founders who have built tools to empower the LGBTQIA+ community. From dating apps to mental health tools to storytelling platforms, these founders are paving the way for more inclusive technology.

npckc is a game creator from Kanto, Japan whose stories portray the trans experience

npckc – Game Creator, Kanto, Japan

Born in Hong Kong and raised in Canada, npckc is a trilingual translator based in Japan. A self-taught programmer, they create games featuring stories and characters that are often from marginalized communities. One such game is "one night, hot springs", in which players follow Haru, a trans woman, as she embarks on a visit to the hot springs. Players have praised the game's realistic portrayal of trans experiences and the relaxing music composed by npckc's partner, sdhizumi. As a finalist in Google Play's Indie Games Festival in Japan, they hope to attend more gaming conventions to connect with fellow developers in person.

Anshul and Rohan from Mumbai, India built a mental health support app geared to the LGBTQIA+ community's needs

Anshul and Rohan – App Creators, Mumbai, India

After Anshul returned to India from London, he met Rohan and the pair bonded over their mental health struggles. Together they shared a dream: to create something in the wellness space. This became Evolve, an app with guided meditations, breathing exercises, and daily affirmations. When the pandemic hit, the pair saw first-hand how underserved the LGBTQIA+ community was in mental health support. For Rohan, who identifies as a gay man, this realization hit close to home. Together, Anshul and Rohan redeveloped Evolve around the LGBTQIA+ community's specific needs – building a safe space where users can share their experiences, seek mentorship, and build a supportive community.

BáiYù from Indiana, U.S. created a platform to publish authentic, queer visual novels and indie games

BáiYù – Game Creator, Indiana, USA

Queer developer BáiYù loves writing stories, and started making games at age 16. Part of a game-development community, BáiYù wanted an affordable way to help get their creations out. So they set up Project Ensō, publishing queer visual novels and narrative indie games. With 10 titles on Google Play, BáiYù helps other developers from under-represented groups share their own authentic stories on Project Ensō, even polishing their games before launch. The most popular title on Project Ensō is "Yearning: A Gay Story", in which gamers play a newly-out gay man navigating his freshman year of college. BáiYù's efforts have had a profound impact on players, with many sharing how these games have positively transformed their lives.

Alex and Jake from Nevada, U.S. built an inclusive dating app and social community for everyone

Alex and Jake – App Creators, Nevada, USA

Alex and Jake grew up in an environment that didn't accept the LGBTQIA+ community. They started building apps together after a mutual friend introduced them. When they realized that queer people were searching for a platform that offered support and meaningful connections, they created Taimi. Taimi is not just a dating app for LGBTQIA+ people; it is also a social network where they can bond, build community, and feel safe. Alex and Jake are also proud to partner with NGOs that provide mental health support for the community.

Discover more stories of app and game creators in #WeArePlay.





Navigating chaotic times: forecasting amid the pandemic | Blog | bol.com


An enormous amount of data

Originally from Poland, Eryk studied in Rotterdam, took a detour back to his home country, and once again returned to the Netherlands. "Before joining bol, I worked at a fintech startup. It was a great experience, but one where I was wearing many different hats. I wanted to focus seriously on machine learning and I also wanted to join a more mature organization. That's when bol caught my eye."

Eryk continues: "Besides the maturity and the scale, the possibilities in terms of tech and data at bol really fascinated me. There is an enormous amount of data available here, mostly well-documented, clean and well-maintained. With around 40 million items reaching millions of people in the Netherlands and Belgium, there is plenty for me to work with. My job is to translate this data into forecasts for several use cases, like customer needs, logistics and products. To give you a practical example: with this kind of information bol can plan the staffing in our warehouses precisely, even 20 weeks ahead."

Chaotic times

Joining bol in the fall of 2020, Eryk faced a particularly challenging period. He shares: "The pandemic completely disrupted bol's existing forecasting models. We could no longer rely on past events and had to deal with a new situation without any control or available historical data. It was an interesting situation to be part of and one I had, obviously, never experienced before."

Despite the challenging times, Eryk learned a great deal. "We organized regular brainstorming sessions, developed quick fixes, and pursued a long-term solution – all at the same time. In the end, we incorporated the impact of COVID-19 using a custom feature for our forecasting models, which successfully restored our accuracy levels. It was a truly unique time to be part of, and it made me grow a lot as a professional."

Understanding LoRA with a minimal example




LoRA (Low-Rank Adaptation) is a new technique for fine-tuning large-scale pre-trained
models. Such models are usually trained on general-domain data, so as to have
the maximum amount of data. In order to obtain better results in tasks like chatting
or question answering, these models can be further 'fine-tuned' or adapted on
domain-specific data.

It's possible to fine-tune a model just by initializing it with the pre-trained
weights and further training on the domain-specific data. With the increasing size of
pre-trained models, a full forward and backward cycle requires a large amount of computing
resources. Fine-tuning by simply continuing training also requires a full copy of all
parameters for each task/domain that the model is adapted to.

LoRA: Low-Rank Adaptation of Large Language Models
proposes a solution for both problems by using a low-rank matrix decomposition.
It can reduce the number of trainable weights by 10,000 times and GPU memory requirements
by 3 times.

Method

The problem of fine-tuning a neural network can be expressed as finding a \(\Delta \Theta\)
that minimizes \(L(X, y; \Theta_0 + \Delta \Theta)\), where \(L\) is a loss function, \(X\) and \(y\)
are the data, and \(\Theta_0\) are the weights of a pre-trained model.

We learn the parameters \(\Delta \Theta\) with dimension \(|\Delta \Theta|\)
equal to \(|\Theta_0|\). When \(|\Theta_0|\) is very large, such as in large-scale
pre-trained models, finding \(\Delta \Theta\) becomes computationally challenging.
Also, for each task you need to learn a new \(\Delta \Theta\) parameter set, making
it even more challenging to deploy fine-tuned models if you have more than a
few specific tasks.

LoRA proposes using an approximation \(\Delta \Phi \approx \Delta \Theta\) with \(|\Delta \Phi| \ll |\Delta \Theta|\).
The observation is that neural nets have many dense layers performing matrix multiplication,
and while these typically have full rank during pre-training, when adapting to a specific task
the weight updates will have a low "intrinsic dimension".

A simple matrix decomposition is applied to each weight matrix update \(\Delta \theta_i \in \Delta \Theta\).
Considering \(\Delta \theta_i \in \mathbb{R}^{d \times k}\), the update for the \(i\)th weight
in the network, LoRA approximates it with:

\[\Delta \theta_i \approx \Delta \phi_i = BA\]
where \(B \in \mathbb{R}^{d \times r}\), \(A \in \mathbb{R}^{r \times k}\) and the rank \(r \ll \min(d, k)\).
Thus, instead of learning \(d \times k\) parameters we now need to learn \((d + k) \times r\), which is easily
a lot smaller given the multiplicative aspect. In practice, \(\Delta \theta_i\) is scaled
by \(\frac{\alpha}{r}\) before being added to \(\theta_i\), which can be interpreted as a
'learning rate' for the LoRA update.

LoRA doesn't increase inference latency: once fine-tuning is done, you can simply
update the weights in \(\Theta\) by adding their respective \(\Delta \theta \approx \Delta \phi\).
It also makes it simpler to deploy multiple task-specific models on top of one large model,
as \(|\Delta \Phi|\) is much smaller than \(|\Delta \Theta|\).
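Before moving to code, the shapes and parameter counts can be made concrete. Here is a minimal sketch in Python with NumPy (the worked example below uses R torch; the variable names here are purely illustrative, and a rank of 2 is an arbitrary choice):

```python
import numpy as np

d, k, r = 1001, 1000, 2
alpha = 1.0

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))   # frozen pre-trained weight matrix
B = rng.standard_normal((d, r))   # trainable low-rank factor
A = rng.standard_normal((r, k))   # trainable low-rank factor

# effective weight after adaptation: W + (alpha / r) * B A
W_eff = W + (alpha / r) * (B @ A)

full_params = d * k               # d x k parameters for a full-rank update
lora_params = (d + k) * r         # (d + k) x r parameters for the rank-r update
print(W_eff.shape)                # (1001, 1000)
print(full_params)                # 1001000
print(lora_params)                # 4002
```

The adapted weight has the same shape as the original, but the trainable state is only the two small factors.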

Implementing it in torch

Now that we have an idea of how LoRA works, let's implement it in torch for a
minimal problem. Our plan is the following:

  1. Simulate training data using a simple \(y = X \theta\) model, with \(\theta \in \mathbb{R}^{1001 \times 1000}\).
  2. Train a full-rank linear model to estimate \(\theta\) – this will be our 'pre-trained' model.
  3. Simulate a different distribution by applying a transformation to \(\theta\).
  4. Train a low-rank model using the pre-trained weights.

Let's start by simulating the training data:

library(torch)

n <- 10000
d_in <- 1001
d_out <- 1000

thetas <- torch_randn(d_in, d_out)

X <- torch_randn(n, d_in)
y <- torch_matmul(X, thetas)

We now define our base model:

model <- nn_linear(d_in, d_out, bias = FALSE)

We also define a function for training a model, which we will reuse later.
The function runs the standard training loop in torch using the Adam optimizer.
The model weights are updated in place.

train <- function(model, X, y, batch_size = 128, epochs = 100) {
  opt <- optim_adam(model$parameters)

  for (epoch in 1:epochs) {
    for (i in seq_len(n / batch_size)) {
      idx <- sample.int(n, size = batch_size)
      loss <- nnf_mse_loss(model(X[idx, ]), y[idx])

      with_no_grad({
        opt$zero_grad()
        loss$backward()
        opt$step()
      })
    }

    if (epoch %% 10 == 0) {
      with_no_grad({
        loss <- nnf_mse_loss(model(X), y)
      })
      cat("[", epoch, "] Loss:", loss$item(), "\n")
    }
  }
}

The model is then trained:

train(model, X, y)
#> [ 10 ] Loss: 577.075 
#> [ 20 ] Loss: 312.2 
#> [ 30 ] Loss: 155.055 
#> [ 40 ] Loss: 68.49202 
#> [ 50 ] Loss: 25.68243 
#> [ 60 ] Loss: 7.620944 
#> [ 70 ] Loss: 1.607114 
#> [ 80 ] Loss: 0.2077137 
#> [ 90 ] Loss: 0.01392935 
#> [ 100 ] Loss: 0.0004785107

OK, so now we have our pre-trained base model. Let's suppose that we have data from
a slightly different distribution, which we simulate using:

thetas2 <- thetas + 1

X2 <- torch_randn(n, d_in)
y2 <- torch_matmul(X2, thetas2)

If we apply our base model to this distribution, we don't get a good performance:

nnf_mse_loss(model(X2), y2)
#> torch_tensor
#> 992.673
#> [ CPUFloatType{} ][ grad_fn =  ]

We now fine-tune our initial model. The distribution of the new data is only slightly
different from the initial one: it is just a uniform shift of the weights, obtained by adding 1
to all thetas. This means that the weight updates are not expected to be complex, and
we shouldn't need a full-rank update in order to get good results.
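This intuition can be checked directly (a quick sanity check in Python with NumPy, rather than this post's R): adding 1 to every entry of \(\theta\) means the true \(\Delta \theta\) is the all-ones matrix, which is exactly the outer product of two all-ones vectors and therefore has rank 1.

```python
import numpy as np

d_in, d_out = 1001, 1000

# thetas2 = thetas + 1, so the true weight update is a matrix of all ones
delta_theta = np.ones((d_in, d_out))

# a matrix of ones has exactly one nonzero singular value: rank 1
print(np.linalg.matrix_rank(delta_theta))   # 1

# it factors exactly as B A with r = 1
B = np.ones((d_in, 1))
A = np.ones((1, d_out))
print(np.array_equal(B @ A, delta_theta))   # True
```

So a rank-1 adapter can represent this particular update exactly, not just approximately.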

Let's define a new torch module that implements the LoRA logic:

lora_nn_linear <- nn_module(
  initialize = function(linear, r = 16, alpha = 1) {
    self$linear <- linear

    # parameters from the original linear module are frozen, so they are not
    # tracked by autograd. They are considered just constants.
    purrr::walk(self$linear$parameters, \(x) x$requires_grad_(FALSE))

    # the low-rank parameters that will be trained
    self$A <- nn_parameter(torch_randn(linear$in_features, r))
    self$B <- nn_parameter(torch_zeros(r, linear$out_features))

    # the scaling constant
    self$scaling <- alpha / r
  },
  forward = function(x) {
    # the modified forward pass: the base model's output plus the
    # scaled low-rank term x A B
    self$linear(x) + torch_matmul(x, torch_matmul(self$A, self$B) * self$scaling)
  }
)

We now initialize the LoRA model. We will use \(r = 1\), meaning that A and B will be just
vectors. The base model has 1001 × 1000 trainable parameters. The LoRA model that we
are going to fine-tune has just 1001 + 1000, which makes it 1/500 of the base model
parameters.
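The arithmetic behind that 1/500 figure is straightforward (a quick check in Python; the numbers match the R example in this post):

```python
base_params = 1001 * 1000        # trainable weights of the full linear model
lora_params = 1001 + 1000        # A and B together, with r = 1
print(base_params)                   # 1001000
print(lora_params)                   # 2001
print(base_params // lora_params)    # 500
```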

lora <- lora_nn_linear(model, r = 1)

Now let's train the LoRA model on the new distribution:

train(lora, X2, y2)
#> [ 10 ] Loss: 798.6073 
#> [ 20 ] Loss: 485.8804 
#> [ 30 ] Loss: 257.3518 
#> [ 40 ] Loss: 118.4895 
#> [ 50 ] Loss: 46.34769 
#> [ 60 ] Loss: 14.46207 
#> [ 70 ] Loss: 3.185689 
#> [ 80 ] Loss: 0.4264134 
#> [ 90 ] Loss: 0.02732975 
#> [ 100 ] Loss: 0.001300132 

If we look at \(\Delta \theta\) we will see a matrix filled with 1s, the exact transformation
that we applied to the weights:

delta_theta <- torch_matmul(lora$A, lora$B)*lora$scaling
delta_theta[1:5, 1:5]
#> torch_tensor
#>  1.0002  1.0001  1.0001  1.0001  1.0001
#>  1.0011  1.0010  1.0011  1.0011  1.0011
#>  0.9999  0.9999  0.9999  0.9999  0.9999
#>  1.0015  1.0014  1.0014  1.0014  1.0014
#>  1.0008  1.0008  1.0008  1.0008  1.0008
#> [ CPUFloatType{5,5} ][ grad_fn =  ]

To avoid the additional inference latency of computing the deltas separately,
we can modify the original model by adding the estimated deltas to its parameters.
We use the add_ method to modify the weight in place.

with_no_grad({
  model$weight$add_(delta_theta$t())
})

Now, applying the base model to data from the new distribution yields good performance,
so we can say the model is adapted for the new task.

nnf_mse_loss(model(X2), y2)
#> torch_tensor
#> 0.00130013
#> [ CPUFloatType{} ]

Concluding

Now that we have learned how LoRA works for this simple example, we can think about how it
could work on large pre-trained models.

It turns out that Transformer models are mostly clever arrangements of these matrix
multiplications, and applying LoRA only to those layers is enough to reduce the
fine-tuning cost by a large amount while still getting good performance. You can see
the experiments in the LoRA paper.

Of course, the idea of LoRA is simple enough that it can be applied not only to
linear layers. You can apply it to convolutions, embedding layers and, really, any other layer.

Image by Hu et al. from the LoRA paper

Robots-Blog | Epson releases the robot programming software RC+ 8.0



A single platform for easier automation

Epson, a global leader in robotics and automation technology, announces the launch of its new robot programming software RC+ 8.0. This powerful, intuitive platform was developed to extend the capabilities of Epson's entire robot product range and offers unmatched functionality and extensibility for system integrators and end users.


The RC+ 8.0 software sets new standards in robot programming and replaces the previous version, RC+ 7.0. It is a single, all-encompassing software platform that supports the full range of Epson robots, including SCARA and 6-axis robots as well as other specialized products. This unified platform simplifies the programming process and makes it more accessible and efficient for users across all industrial sectors.

Product features and benefits

Powerful and intuitive user interface:

The RC+ 8.0 software runs on a user-friendly Windows interface, extended by an open structure and integrated image processing. This makes programming applications easy even for people with limited robotics know-how. The intuitive editor for commands and syntax, with help functionality and color-coded checking, minimizes errors and simplifies program development.

Integrated 3D simulator:

Among the standout features of RC+ 8.0 is the integrated 3D simulator, which is included at no extra cost. This tool allows users to begin programming their applications before the hardware has even arrived, enabling feasibility analysis and validation of machine-design ideas. Users can also add CAD data of machines or other equipment to ensure comprehensive and accurate simulations.

Comprehensive project management and development:

Epson RC+ 8.0 was developed for building powerful robot automation solutions, supports a wide range of peripherals, and offers fully integrated options such as the image processing software Vision Guide, the part feeding assistant Part Feeding Guide, and the force sensing system Force Guide. This integration makes it possible to connect all components seamlessly within a single development environment.

Improved user experience:

The greatest emphasis in developing RC+ 8.0 was placed on user-friendliness, taking the latest operational experience into account. The new UI design, the improved editor, the simulator and the GUI features ensure that users can navigate and use the software easily, reducing adoption hurdles for new customers.

How we address customers' challenges

Epson recognizes the challenges customers face, including a lack of robotics expertise, the use of disparate software when building solutions, and the need to test and validate machine-design ideas without hardware. The RC+ 8.0 software addresses these challenges through:

– Breaking down technical barriers: the improved UI design and extended functionality make the software easier to understand and to use.

– Supporting comprehensive solutions: with RC+ 8.0, not only the robots but also the vision system, the feeding system and other peripherals can be programmed, simplifying the integration process.

– Feasibility analysis: the included 3D simulator allows users to validate their programs before investing in hardware.

With its intuitive and limitlessly extensible programming capabilities, the RC+ 8.0 software supports two programming approaches, to satisfy experts and beginners alike. It was developed to support application extensions and third-party solutions, ensuring that users can keep innovating and improving their automation processes.

Volker Spanier, Head of Automation at Epson Europe, comments:

"The launch of RC+ 8.0 marks an important milestone in our effort to offer industry-leading robotics solutions. This comprehensive software ensures that our customers can achieve seamless automation, break down technical barriers and increase their overall productivity. RC+ 8.0 demonstrates our commitment to innovation and user-centered design."

Availability

The Epson RC+ 8.0 software is included with every robot purchase, so all customers have access to this powerful and intuitive programming platform. For more information, see [ Roboter | Epson Deutschland] or contact your Epson dealer.



Clots and vaccines | Ferniglab Blog


Clots and vaccines

Blood clots, for example deep vein thromboses or pulmonary embolisms, are serious and we should rightly be concerned about them. With ~17 M doses of the AZ vaccine delivered into people, we have reports of 15 cases of deep vein thrombosis and 22 cases of pulmonary embolism. Deep vein thrombosis occurs at a rate of 0.1% (so 1 in 1,000) across all age groups, increasing with age. So every day that means around 47 cases in a population of 17 million – in fact it will be more, because those vaccinated are not representative of the population, but an older segment.

If we consider that vaccination has been proceeding at a high rate for a little over 2 months, then we expect at least 2,800 cases of deep vein thrombosis.

We can draw some limited conclusions concerning clots, the AZ and other vaccines.

  1. The reporting mechanism may be very, very poor, and only a fraction of patient outcomes are reported. This is in stark contrast to a clinical trial, where ALL outcomes are reported – recall that one poor individual on the Pfizer/BioNTech trial was killed by a lightning strike, and in the AZ trial there were 2 deaths in the vaccination group and 4 in the placebo group. These and other deaths were unrelated to the vaccine.

Selective reporting leads to fantasy.

  2. Reporting isn't so poor, so the correlation is that vaccination protects against thrombosis. Without knowing the quality of reporting of adverse effects and the likelihood that a good many clinicians will consider many conditions in vaccinated patients to be unrelated to the vaccine, this intriguing possibility remains a possibility and no more.
  3. "Clot", at least for UK speakers of English, has a second meaning, akin to the US 'blockhead'. It may be that the AZ vaccine prompts clots in government.

At present the most likely causal relation is (3). This is because we have ample evidence, independent of vaccination programmes, demonstrating an often higher concentration of clots in government and in the upper reaches of many greasy poles than one would expect were altitude to correlate in any way with ability.

Note: H/T @archer_rs, who reminded me that the only non-profit vaccine is the AZ vaccine. The history of medicine teaches us that the profit motive leads to hiding real and serious adverse events.